So, I've seen a few similar questions on Stack Overflow, but nothing seems to address my issue, or the general case, so hopefully this question fixes that and stops my headaches. I have a git repo of the form:

```
repo/
    __init__.py
    sub1/
        __init__.py
        sub1a/
            __init__.py
            mod1.py
    sub2/
        __init__.py
        mod2.py
```

How do I import mod2.py from mod1.py and vice versa, and how does this change depending on whether mod1.py or mod2.py is run as a script (when each respectively is doing the importing, not being imported)?
The simplest solution is to put the directory containing `repo` in your `PYTHONPATH`, and then just use absolute-path imports, e.g. `import repo.sub2.mod2` and so on. Any other solution is going to involve some hackery if you want it to cover cases where you're invoking both the python files directly as scripts from arbitrary directories - most likely `sys.path` mangling to effectively accomplish the same thing as setting `PYTHONPATH`, but without having to have the user set it.
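A minimal sketch of that `sys.path` mangling (the helper name and layout are hypothetical): put something like this at the top of each script so that the directory containing `repo` is importable no matter where the script is invoked from.

```python
import os
import sys

def add_repo_parent_to_path(repo_dir):
    """Insert the directory *containing* repo/ at the front of sys.path."""
    parent = os.path.dirname(os.path.abspath(repo_dir))
    if parent not in sys.path:
        sys.path.insert(0, parent)
    return parent

# In mod1.py you would derive repo_dir from __file__, e.g.:
# add_repo_parent_to_path(os.path.join(os.path.dirname(__file__), '..', '..'))
```

After this runs, `import repo.sub2.mod2` works regardless of the current working directory.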
If you are using Python 2.6+, you have two choices:

* Relative imports
* Adding `repo` to your `PYTHONPATH`

With relative imports, a special dot syntax is used, where each leading dot climbs one package level:

*in a module directly inside package sub1:*

```
from ..sub2.mod2 import thing
```

*in package sub1a (e.g. in mod1.py):*

```
from ...sub2.mod2 import otherthing
```

Note that plain import statements (`import module`) don't work with relative imports, and that relative imports only work when the module is imported as part of the package, not when it is run directly as a script. A better solution would be using absolute imports with your Python path set correctly (example in `bash`):

```
export PYTHONPATH=/where/your/project/is:$PYTHONPATH
```

More info:

* [How to do relative imports in Python?](https://stackoverflow.com/questions/72852/how-to-do-relative-imports-in-python)
* [Permanently add a directory to PYTHONPATH](https://stackoverflow.com/questions/3402168/permanently-add-a-directory-to-pythonpath/3402176#3402176)
* [Import a module from a relative path](https://stackoverflow.com/questions/279237/python-import-a-module-from-a-folder/6098238#6098238)
Importing From Sister Subdirectories in Python?
[ "", "python", "import", "" ]
Given the following table:

```
+----+----------+-----------+----------+
| ID | date1    | date2     | date3    |
+----+----------+-----------+----------+
|  1 | 3/2/2013 | 5/6/2013  |          |
|  2 |          | 12/1/2011 | 6/5/2010 |
|  3 | 1/1/1936 | 1/5/1936  | 1/9/1945 |
|  4 | 2/1/2014 |           |          |
+----+----------+-----------+----------+
```

I want a query that returns the earliest date in each row. At least one of the date columns will be populated. I've tried:

```
SELECT id,
       iif(date1<date2 and date1<date3, date1,
           iif(date2<date1 and date2<date3, date2, date3)) as dateEarliest
FROM tbl;
```

But it seems that this only returns the correct result if `date3` is the earliest; otherwise it returns a blank.
You can't compare against a NULL value. Use the `Nz` function to convert the NULLs to a value you can compare to. The syntax for `Nz` is: `Nz ( variant, [ value_if_null ] )`. So you'll use something like this (note the `<=` comparisons, so ties between dates are handled correctly):

```
iif(Nz(date1,#1/1/2999#) <= Nz(date2,#1/1/2999#) and Nz(date1,#1/1/2999#) <= Nz(date3,#1/1/2999#),
    date1,
    iif(Nz(date2,#1/1/2999#) <= Nz(date3,#1/1/2999#), date2, date3))
```

If this is being used in Access, and you know VBA, you could also create a function that returns the value you want. This may be neater, as it would look like:

```
Select id, LowDate([date1],[date2],[date3]) as dateEarliest
```

Here's a function that should work for you:

```
Function LowDate(D1, D2, D3)
    D1 = Nz(D1, #1/1/2999#)
    D2 = Nz(D2, #1/1/2999#)
    D3 = Nz(D3, #1/1/2999#)

    If D1 <= D2 And D1 <= D3 Then
        LowDate = D1
    ElseIf D2 <= D3 Then
        LowDate = D2
    Else
        LowDate = D3
    End If
End Function
```
This may not be the "best" way to do it, but one way to do it would be to unpivot your data, so that it looks something like this:

```
id  date1
1
1   3/2/2013
1   5/6/2013
2
2   6/5/2010
2   12/1/2011
3   1/1/1936
3   1/5/1936
3   1/9/1945
4
4   2/1/2014
```

This can be done like so:

```
SELECT id, date1 from tbl
UNION
SELECT id, date2 as date1 from tbl
UNION
SELECT id, date3 as date1 from tbl
```

(Note: I named the date field date1, since date is a reserved keyword.) From here, you can use aggregate functions such as min:

```
select id, min(date1) as dateEarliest
from (SELECT id, date1 from tbl
      UNION
      SELECT id, date2 as date1 from tbl
      UNION
      SELECT id, date3 as date1 from tbl) unpivottbl
group by id;
```

Which will give you what you want.
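As a quick sanity check, here is the same unpivot-and-aggregate idea run against SQLite from Python (a sketch, not Access SQL; the question's dates are rewritten in ISO format so plain text comparison, and therefore `MIN`, orders them correctly):

```python
import sqlite3

# The question's table, with ISO-format dates
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tbl (id INTEGER, date1 TEXT, date2 TEXT, date3 TEXT)")
con.executemany("INSERT INTO tbl VALUES (?, ?, ?, ?)", [
    (1, "2013-03-02", "2013-05-06", None),
    (2, None,         "2011-12-01", "2010-06-05"),
    (3, "1936-01-01", "1936-01-05", "1945-01-09"),
    (4, "2014-02-01", None,         None),
])

# Unpivot via UNION, then take the per-id minimum, ignoring the NULL rows
rows = con.execute("""
    SELECT id, MIN(d) AS dateEarliest
    FROM (SELECT id, date1 AS d FROM tbl
          UNION SELECT id, date2 FROM tbl
          UNION SELECT id, date3 FROM tbl)
    WHERE d IS NOT NULL
    GROUP BY id
    ORDER BY id
""").fetchall()
print(rows)  # [(1, '2013-03-02'), (2, '2010-06-05'), (3, '1936-01-01'), (4, '2014-02-01')]
```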
How to query earliest date from several possibly blank date fields?
[ "", "sql", "ms-access", "" ]
I have the following table `Students`:

```
Id StudentId Subject    Date       Grade
1  001       Math       02/20/2013 A
2  001       Literature 03/02/2013 B
3  002       Biology    01/01/2013 A
4  003       Biology    04/08/2013 A
5  001       Biology    05/01/2013 B
6  002       Math       03/10/2013 C
```

I need the result in **another table** called `StudentReport`, as shown below. This table is the cumulative report of each student's records in reverse chronological order by date.

```
Id StudentId Report
1  001       #Biology;B;05/01/2013#Literature;B;03/02/2013#Math;A;02/20/2013
2  002       #Math;C;03/10/2013#Biology;A;01/01/2013
3  003       #Biology;A;04/08/2013
```
Typically you would not store this data in a table, since you already have all the data needed to generate the report. SQL Server does not have an easy way to generate a delimited list, so you will have to use `FOR XML PATH` to create it:

```
;with cte as
(
  select id, studentid, date,
         '#'+subject+';'+grade+';'+convert(varchar(10), date, 101) report
  from student
)
-- insert into studentreport
select distinct studentid,
       STUFF( (SELECT cast(t2.report as varchar(50))
               FROM cte t2
               where c.StudentId = t2.StudentId
               order by t2.date desc
               FOR XML PATH ('')), 1, 0, '') AS report
from cte c;
```

See [SQL Fiddle with Demo](http://www.sqlfiddle.com/#!3/58bdd/2) (includes an insert into the new table). This gives the result:

```
| ID | STUDENTID | REPORT                                                          |
-------------------------------------------------------------------------------------
| 10 |         1 | #Biology;B;05/01/2013#Literature;B;03/02/2013#Math;A;02/20/2013 |
| 11 |         2 | #Math;C;03/10/2013#Biology;A;01/01/2013                         |
| 12 |         3 | #Biology;A;04/08/2013                                           |
```
If you are looking to insert the data from `Students` into `StudentReport`, try:

```
INSERT INTO StudentReport (ID, StudentID, Report)
SELECT ID, StudentID,
       '#' + subject + ';' + grade + ';' + convert(varchar(10), date, 101) AS report
FROM Students
```

(The `convert` is needed because a date column cannot be concatenated to a string directly.)
SQL query: how to summarize student records by date?
[ "", "sql", "sql-server", "" ]
I have a regex that matches all three-character words in a string:

```
\b[^\s]{3}\b
```

When I use it with the string:

```
And the tiger attacked you.
```

this is the result:

```
regex = re.compile(r"\b[^\s]{3}\b")
regex.findall(string)
[u'And', u'the', u'you']
```

As you can see, it matches "you" as a three-character word, but I want the expression to treat "you." with the "." as a four-character word. I have the same problem with ",", ";", ":", etc. I'm pretty new to regex, but I guess it happens because those characters are treated as word boundaries. Is there a way of doing this? Thanks in advance.

# EDIT

Thanks to the answers of @BrenBarn and @Kendall Frey, I managed to get to the regex I was looking for:

```
(?<!\w)[^\s]{3}(?=$|\s)
```
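A quick check of that final pattern on the sample sentence (a minimal sketch):

```python
import re

# (?<!\w)  - not preceded by a word character
# [^\s]{3} - exactly three non-whitespace characters
# (?=$|\s) - followed by end of string or whitespace
pattern = re.compile(r'(?<!\w)[^\s]{3}(?=$|\s)')

print(pattern.findall('And the tiger attacked you.'))  # ['And', 'the']
```

"tiger" and "attacked" are the wrong length, and "you." is now correctly treated as four characters, so it no longer matches.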
If you want to make sure the word is preceded and followed by a space (and not a period like is happening in your case), then use [lookaround](http://www.regular-expressions.info/lookaround.html). ``` (?<=\s)\w{3}(?=\s) ``` If you need it to match punctuation as part of words (such as 'in.') then `\w` won't be adequate, and you can use `\S` (anything but a space) ``` (?<=\s)\S{3}(?=\s) ```
This would be my approach. Also matches words that come right after punctuations. ``` import re r = r''' \b # word boundary ( # capturing parentheses [^\s]{3} # anything but whitespace 3 times \b # word boundary (?=[^\.,;:]|$) # dont allow . or , or ; or : after word boundary but allow end of string | # OR [^\s]{2} # anything but whitespace 2 times [\.,;:] # a . or , or ; or : ) ''' s = 'And the tiger attacked you. on,bla tw; th: fo.tes' print re.findall(r, s, re.X) ``` output: ``` ['And', 'the', 'on,', 'bla', 'tw;', 'th:', 'fo.', 'tes'] ```
Python Regex doesn't match . (dot) as a character
[ "", "python", "regex", "" ]
I have 2 CSV files: 'Data' and 'Mapping':

* The 'Mapping' file has 4 columns: `Device_Name`, `GDN`, `Device_Type`, and `Device_OS`. All four columns are populated.
* The 'Data' file has these same columns, with the `Device_Name` column populated and the other three columns blank.
* I want my Python code to open both files and, for each `Device_Name` in the Data file, map its `GDN`, `Device_Type`, and `Device_OS` values from the Mapping file.

I know how to use a dict when only 2 columns are present (1 needs to be mapped), but I don't know how to accomplish this when 3 columns need to be mapped. Following is the code with which I tried to accomplish the mapping of `Device_Type`:

```
x = dict([])
with open("Pricing Mapping_2013-04-22.csv", "rb") as in_file1:
    file_map = csv.reader(in_file1, delimiter=',')
    for row in file_map:
        typemap = [row[0], row[2]]
        x.append(typemap)

with open("Pricing_Updated_Cleaned.csv", "rb") as in_file2, open("Data Scraper_GDN.csv", "wb") as out_file:
    writer = csv.writer(out_file, delimiter=',')
    for row in csv.reader(in_file2, delimiter=','):
        try:
            row[27] = x[row[11]]
        except KeyError:
            row[27] = ""
        writer.writerow(row)
```

It returns an `AttributeError`. After some researching, I think I need to create a nested dict, but I don't have any idea how to do this.
A nested dict is a dictionary within a dictionary. A very simple thing. ``` >>> d = {} >>> d['dict1'] = {} >>> d['dict1']['innerkey'] = 'value' >>> d['dict1']['innerkey2'] = 'value2' >>> d {'dict1': {'innerkey': 'value', 'innerkey2': 'value2'}} ``` You can also use a [`defaultdict`](https://docs.python.org/2/library/collections.html#collections.defaultdict) from the [`collections`](https://docs.python.org/2/library/collections.html) package to facilitate creating nested dictionaries. ``` >>> import collections >>> d = collections.defaultdict(dict) >>> d['dict1']['innerkey'] = 'value' >>> d # currently a defaultdict type defaultdict(<type 'dict'>, {'dict1': {'innerkey': 'value'}}) >>> dict(d) # but is exactly like a normal dictionary. {'dict1': {'innerkey': 'value'}} ``` --- You can populate that however you want. I would recommend in your code something *like* the following: ``` d = {} # can use defaultdict(dict) instead for row in file_map: # derive row key from something # when using defaultdict, we can skip the next step creating a dictionary on row_key d[row_key] = {} for idx, col in enumerate(row): d[row_key][idx] = col ``` --- According to your [comment](https://stackoverflow.com/questions/16333296/how-do-you-create-nested-dict-in-python/16333441?noredirect=1#comment23400008_16333441): > may be above code is confusing the question. My problem in nutshell: I > have 2 files a.csv b.csv, a.csv has 4 columns i j k l, b.csv also has > these columns. i is kind of key columns for these csvs'. j k l column > is empty in a.csv but populated in b.csv. 
> I want to map values of j k l columns using `i` as key column from b.csv to a.csv file

My suggestion would be something *like* this (without using defaultdict):

```
a_file = "path/to/a.csv"
b_file = "path/to/b.csv"

# read from file a.csv
with open(a_file) as f:
    # skip headers
    f.next()

    # gather the first column as a set of keys
    # (a set, not a generator: a generator would be exhausted after the
    # first membership test and would read from an already-closed file)
    keys = set(line.split(',')[0] for line in f)

# create empty dictionary:
d = {}

# read from file b.csv
with open(b_file) as f:
    # gather headers except first key header
    headers = f.next().split(',')[1:]

    # iterate lines
    for line in f:
        # gather the columns
        cols = line.strip().split(',')

        # check to make sure this key should be mapped.
        if cols[0] not in keys:
            continue

        # add key to dict
        d[cols[0]] = dict(
            # inner keys are the header names, values are columns
            (headers[idx], v) for idx, v in enumerate(cols[1:]))
```

Please note though, that for parsing csv files there is a [csv module](https://docs.python.org/2/library/csv.html).
**UPDATE**: For arbitrarily deep nested dictionaries, see [this answer](https://stackoverflow.com/a/8702435/1757417). Use `defaultdict` from the `collections` module:

* High performance: it avoids sprinkling `if key not in dict` checks everywhere, which add up when the data set is large.
* Low maintenance: it makes the code more readable and easily extended.

```
from collections import defaultdict

target_dict = defaultdict(dict)
target_dict[key1][key2] = val
```
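The linked answer's idea for dictionaries of arbitrary depth can be sketched as a `defaultdict` whose default factory is itself (sometimes called autovivification):

```python
from collections import defaultdict

def tree():
    """A dictionary whose missing keys spring into existence as sub-trees."""
    return defaultdict(tree)

t = tree()
t['a']['b']['c'] = 1       # intermediate dicts are created automatically
print(t['a']['b']['c'])    # 1
```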
How do you create nested dict in Python?
[ "", "python", "python-2.7", "dictionary", "mapping", "nested", "" ]
I need to have a function which returns a guaranteed input of type `float`. To implement this I came up with a recursive way, but only seconds later I realized I could use a `while`-loop just as well. So, which one is preferred? Is there a reason to use one over the other? I'm pretty sure I should go with the `while` loop due to Python's lack of tail calls (afaik?), but I'm still not 100% confident. Here are both my functions: ``` def inputf(prompt=None): try: return float(input(prompt)) except ValueError: return inputf(prompt) def inputf2(prompt=None): while True: try: return float(input(prompt)) except ValueError: pass ```
I would say that both are acceptable, and there is no huge advantage to either of them. It is unlikely (but you never know) that a user will fail to input a number nearly a thousand times in a row, which is what it would take for the recursive version to hit Python's default recursion limit. If you want to stick with recursion but are worried about reaching the [recursion limit](http://docs.python.org/2/library/sys.html#sys.getrecursionlimit), you can set it to a higher value, but that would be for an extreme case scenario:

```
import sys
sys.setrecursionlimit(10000)
```

Although, I would probably go with the while loop, as it is probably easier to understand.
I guess with the recursive function, you'll eventually end up with this exception:

```
RuntimeError: maximum recursion depth exceeded
```

But only after roughly 1000 failed inputs (the default recursion limit; the exact value depends on the system you're running Python on).
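A minimal sketch of what hitting that limit looks like (the function name is made up for the demo):

```python
import sys

def keep_recursing(depth=0):
    # no base case on purpose: this must eventually exhaust the stack
    return keep_recursing(depth + 1)

print(sys.getrecursionlimit())  # typically 1000 by default

try:
    keep_recursing()
except RuntimeError:  # Python 3's RecursionError is a subclass of RuntimeError
    print("maximum recursion depth exceeded")
```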
Two versions of my input function, recursion vs while-loop?
[ "", "python", "recursion", "input", "while-loop", "" ]
I'm writing a program to solve quadratic equations using the quadratic formula, but it only works when a = 1 and I want it to work when a is more than 1. Here is my code:

```
import math

def solve(a, b, c):
    x = ((-1)* b + math.sqrt(b**2-4*a*c))/2*a
    print "x = %s" %x
    print "or"
    y = ((-1)* b - math.sqrt(b**2-4*a*c))/2*a
    print "x = %s" %x

while 1:
    a = int(raw_input("Enter A :"))
    b = int(raw_input("Enter B :"))
    c = int(raw_input("Enter C :"))
    solve(a, b, c)
```

It works with 1, but when I use a number more than one, say 4, I get this error:

```
Traceback (most recent call last):
  File "C:\Documents and Settings\User\Desktop\Factor.py", line 18, in <module>
    solve(a, b, c)
  File "C:\Documents and Settings\User\Desktop\Factor.py", line 5, in solve
    x = ((-1)* b + math.sqrt(b**2-4*a*c))/2*a
ValueError: math domain error
```

Is there a way around this? If so, please help!
The problems are here: 1. operator precedence: your `/2*a` should be `/(2*a)` to work correctly. 2. the domain of `sqrt`: `math.sqrt` bails on negative numbers. 3. **Edit 2:** `y = ...` just after `print "or"` should be `x = ...` To fix the latter, you'll need some sort of conditional: ``` disc = b**2 - 4*a*c sqrtdisc = math.sqrt(disc) if disc >= 0 else math.sqrt(-disc)*1j ``` **Edit:** You could also use `cmath.sqrt`, which automatically handles negative numbers: ``` disc = b**2 - 4*a*c sqrtdisc = cmath.sqrt(disc) ``` (Thanks to various other answerers for effectively letting me know that `cmath` exists.)
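Putting those fixes together, a corrected `solve` might look like this (a sketch that returns the two roots rather than printing them; `cmath.sqrt` covers negative discriminants):

```python
import cmath

def solve(a, b, c):
    # parentheses around (2*a): the original /2*a divided by 2 and then
    # *multiplied* by a, which is why it only worked for a == 1
    sqrt_disc = cmath.sqrt(b**2 - 4*a*c)  # handles negative discriminants too
    return ((-b + sqrt_disc) / (2*a), (-b - sqrt_disc) / (2*a))

print(solve(1, -3, 2))  # ((2+0j), (1+0j))
print(solve(1, 0, 1))   # (1j, -1j)
```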
The reason for why you're getting `ValueError` is that your expression `b**2-4*a*c` is returning a negative value, which is not allowed for `math.sqrt`. ``` >>> math.sqrt(-1) Traceback (most recent call last): File "<ipython-input-38-5234f21f3b4d>", line 1, in <module> math.sqrt(-1) ValueError: math domain error ``` use `cmath.sqrt` to handle negative values as well: ``` >>> import cmath >>> cmath.sqrt(-1) 1j ```
Solving Quadratic Formula
[ "", "python", "math", "quadratic", "equation-solving", "" ]
I am trying to insert the ID field from one table into another using the query below:

```
INSERT INTO `srikprag_db`.`acbalance` SELECT `id` FROM `srikprag_mlm`.`member_table`
```

The error shown is:

> #1136 - Column count doesn't match value count at row 1

What is the reason for this error?
You did not define the destination column where the values from the `SELECT` statement will be saved, e.g.:

```
INSERT INTO srikprag_db.acbalance (ID) -- <<== destination column
SELECT id FROM srikprag_mlm.member_table
```

Since `acbalance` has more columns than the single `id` your `SELECT` returns, MySQL cannot match them up without an explicit column list. Presumably you want to copy records across databases, which this form supports.
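The same behaviour can be reproduced with SQLite from Python (a sketch with made-up columns; the question is MySQL, but the column-count rule is the same):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE member_table (id INTEGER, name TEXT)")
con.execute("CREATE TABLE acbalance (ID INTEGER, balance REAL DEFAULT 0)")
con.executemany("INSERT INTO member_table VALUES (?, ?)", [(1, 'a'), (2, 'b')])

# Without a column list this fails: acbalance has 2 columns, SELECT yields 1
try:
    con.execute("INSERT INTO acbalance SELECT id FROM member_table")
    mismatch = False
except sqlite3.OperationalError:
    mismatch = True

# Naming the destination column fixes it
con.execute("INSERT INTO acbalance (ID) SELECT id FROM member_table")
ids = [r[0] for r in con.execute("SELECT ID FROM acbalance ORDER BY ID")]
print(mismatch, ids)  # True [1, 2]
```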
The problem with your query is that you are not saying which column the value should go into: the target table has more columns than the one value per row your `SELECT` supplies, hence the column-count mismatch.
#1136 - Column count doesn't match value count at row 1
[ "", "mysql", "sql", "" ]
Given the following table structure

```
Col1, Col2, EventType, DateTime
```

How can I select the records per grouping of `Col1`, `Col2` that occur after the top record where `EventType = 3` for that particular group of `Col1`, `Col2`? For example, with the following data

```
Col1  Col2  EventType  DateTime
A     B     1          2012-1-1
A     B     3          2011-1-1
A     B     1          2010-1-1
C     D     1          2012-1-1
C     D     2          2011-1-1
C     D     2          2010-1-1
C     D     3          2009-1-1
C     D     2          2008-1-1
C     D     3          2007-1-1
C     D     1          2006-1-1
C     D     2          2005-1-1
```

I want to select

```
Col1  Col2  EventType  DateTime
A     B     1          2012-1-1
C     D     1          2012-1-1
C     D     2          2011-1-1
C     D     2          2010-1-1
```
You can use the max function over a subquery: ``` SELECT Col1, Col2, EventType, DateTime FROM theTable A WHERE DateTime > (SELECT MAX(DateTime) FROM theTable SUB WHERE EventType = 3 AND SUB.COL1 = A.COL1 AND SUB.COL2 = A.COL2) ```
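To sanity-check the correlated subquery, here is the question's sample data run through SQLite from Python (a sketch, not the asker's SQL Server 2008 setup; dates are in ISO format so text comparison sorts them):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE theTable (Col1 TEXT, Col2 TEXT, EventType INTEGER, DateTime TEXT)")
con.executemany("INSERT INTO theTable VALUES (?, ?, ?, ?)", [
    ('A', 'B', 1, '2012-01-01'), ('A', 'B', 3, '2011-01-01'), ('A', 'B', 1, '2010-01-01'),
    ('C', 'D', 1, '2012-01-01'), ('C', 'D', 2, '2011-01-01'), ('C', 'D', 2, '2010-01-01'),
    ('C', 'D', 3, '2009-01-01'), ('C', 'D', 2, '2008-01-01'), ('C', 'D', 3, '2007-01-01'),
    ('C', 'D', 1, '2006-01-01'), ('C', 'D', 2, '2005-01-01'),
])

# Keep only rows newer than the group's most recent EventType = 3 row
result = con.execute("""
    SELECT Col1, Col2, EventType, DateTime
    FROM theTable A
    WHERE DateTime > (SELECT MAX(DateTime)
                      FROM theTable SUB
                      WHERE EventType = 3
                        AND SUB.Col1 = A.Col1
                        AND SUB.Col2 = A.Col2)
    ORDER BY Col1, DateTime DESC
""").fetchall()
print(result)
```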
It is possible to solve this using [`ROW_NUMBER()`](http://msdn.microsoft.com/en-us/library/ms186734.aspx "ROW_NUMBER (Transact-SQL)"):

1. Partition the rows into groups of `(Col1, Col2)` and rank the rows in every group in the ascending order of `DateTime`.

```
Col1  Col2  EventType  DateTime  EventRank
----  ----  ---------  --------  ---------
A     B     1          2012-1-1  3
A     B     3          2011-1-1  2
A     B     1          2010-1-1  1
C     D     1          2012-1-1  8
C     D     2          2011-1-1  7
C     D     2          2010-1-1  6
C     D     3          2009-1-1  5
C     D     2          2008-1-1  4
C     D     3          2007-1-1  3
C     D     1          2006-1-1  2
C     D     2          2005-1-1  1
```

2. Also, partition the rows by `(Col1, Col2, EventType)` and rank them in the *descending* order of `DateTime`.

```
Col1  Col2  EventType  DateTime  EventRank  EventSubRank
----  ----  ---------  --------  ---------  ------------
A     B     1          2012-1-1  3          1
A     B     3          2011-1-1  2          1
A     B     1          2010-1-1  1          2
C     D     1          2012-1-1  8          1
C     D     2          2011-1-1  7          1
C     D     2          2010-1-1  6          2
C     D     3          2009-1-1  5          1
C     D     2          2008-1-1  4          3
C     D     3          2007-1-1  3          2
C     D     1          2006-1-1  2          2
C     D     2          2005-1-1  1          4
```

3. Select the subset where `EventType = 3 AND EventSubRank = 1` (the most recent `EventType = 3` row per group).

```
Col1  Col2  EventType  DateTime  EventRank  EventSubRank
----  ----  ---------  --------  ---------  ------------
A     B     3          2011-1-1  2          1
C     D     3          2009-1-1  5          1
```

4. Use it as a filter by joining it back to the ranked row set and selecting the rows of the latter whose `EventRank` values are greater than the corresponding ones in the subset.

Here's a complete query (note the alias `r` on `ranked`, which the join conditions refer to):

```
WITH ranked AS (
  SELECT
    *,
    EventRank    = ROW_NUMBER() OVER (PARTITION BY Col1, Col2            ORDER BY DateTime ASC ),
    EventSubRank = ROW_NUMBER() OVER (PARTITION BY Col1, Col2, EventType ORDER BY DateTime DESC)
  FROM atable
),
filtered AS (
  SELECT *
  FROM ranked
  WHERE EventType = 3
    AND EventSubRank = 1
)
SELECT r.Col1, r.Col2, r.EventType, r.DateTime
FROM ranked r
  INNER JOIN filtered f
     ON r.Col1 = f.Col1
    AND r.Col2 = f.Col2
    AND r.EventRank > f.EventRank
;
```
How to select all records after a certain record within groups in SQL?
[ "", "sql", "sql-server-2008", "" ]
I read up on the functions provided by subprocess - `call`, `check_call`, `check_output` - and understand how each works and differs in functionality from the others. I am currently using `check_output`, so I can have access to stdout, and used a `try` block to catch the exception, as follows:

```
# "cmnd" is a string that contains the command along with its arguments.
try:
    cmnd_output = check_output(cmnd, stderr=STDOUT, shell=True, timeout=3, universal_newlines=True)
except CalledProcessError:
    print("Status : FAIL")
    print("Output: \n{}\n".format(cmnd_output))
```

The issue I am running into is that when an exception is thrown, `cmnd_output` is not initialized, I don't have access to stderr, and I get the following error message:

```
    print("Output: \n{}\n".format(cmnd_output))
UnboundLocalError: local variable 'cmnd_output' referenced before assignment
```

I think that's because the exception causes `check_output` to bail immediately without any further processing, aka assignment to `cmnd_output`, in the try block. Please correct me if I am wrong. Is there any way I can get access to stderr (it's OK if it's sent to stdout), and have access to the exit code? I can manually check for pass/fail based on the exit code without the exception being thrown.

Thank you, Ahmed.
Try this version: ``` import subprocess try: output = subprocess.check_output( cmnd, stderr=subprocess.STDOUT, shell=True, timeout=3, universal_newlines=True) except subprocess.CalledProcessError as exc: print("Status : FAIL", exc.returncode, exc.output) else: print("Output: \n{}\n".format(output)) ``` This way you will print the output only if the call was successful. In case of a [`CalledProcessError`](http://docs.python.org/3/library/subprocess.html#subprocess.CalledProcessError) you print the return code and the output.
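A runnable variant of that pattern (a sketch assuming a POSIX shell, with a command that is guaranteed to print something and then fail with exit code 3):

```python
import subprocess

try:
    output = subprocess.check_output("echo ok; exit 3", shell=True,
                                     stderr=subprocess.STDOUT,
                                     universal_newlines=True)
except subprocess.CalledProcessError as exc:
    # the exception object carries both the exit code and the captured output
    print("Status : FAIL", exc.returncode, exc.output.strip())
else:
    print("Output: \n{}\n".format(output))
```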
The accepted solution covers the case in which you are OK mixing `stdout` and `stderr`, but in cases in which the child process (for whatever reason) uses `stderr` IN ADDITION to `stdout` for non-failure output (i.e. to emit a non-critical warning), the given solution may not be desirable. For example, if you will be doing additional processing on the output, like converting it to JSON, and you mix in the `stderr`, then the overall process will fail, since the output will not be pure JSON because of the added `stderr` output. I've found the following to work in that case:

```
cmd_args = ... what you want to execute ...
pipes = subprocess.Popen(cmd_args, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
# If you are using python 2.x, you may need to include shell=True in the above line
std_out, std_err = pipes.communicate()

if pipes.returncode != 0:
    # an error happened!
    err_msg = "%s. Code: %s" % (std_err.strip(), pipes.returncode)
    raise Exception(err_msg)

elif len(std_err):
    # return code is 0 (no error), but we may want to
    # do something with the info on std_err,
    # e.g. logger.warning(std_err)
    pass

# do whatever you want with std_out,
# e.g. json.loads(std_out)
```
Get exit code and stderr from subprocess call
[ "", "python", "python-3.x", "" ]
I tend to use only forward slashes for paths ('/') and python is happy with it also on windows. In the description of `os.path.join` it says that is the correct way if you want to go cross-platform. But when I use it I get mixed slashes: ``` import os a = 'c:/' b = 'myFirstDirectory/' c = 'mySecondDirectory' d = 'myThirdDirectory' e = 'myExecutable.exe' print os.path.join(a, b, c, d, e) # Result: c:/myFirstDirectory/mySecondDirectory\myThirdDirectory\myExecutable.exe ``` Is this correct? Should I check and correct it afterward or there is a better way? Thanks EDIT: I also get mixed slashes when asking for paths ``` import sys for item in sys.path: print item # Result: C:\Program Files\Autodesk\Maya2013.5\bin C:\Program Files\Autodesk\Maya2013.5\mentalray\scripts\AETemplates C:\Program Files\Autodesk\Maya2013.5\Python C:\Program Files\Autodesk\Maya2013.5\Python\lib\site-packages C:\Program Files\Autodesk\Maya2013.5\bin\python26.zip\lib-tk C:/Users/nookie/Documents/maya/2013.5-x64/prefs/scripts C:/Users/nookie/Documents/maya/2013.5-x64/scripts C:/Users/nookie/Documents/maya/scripts C:\Program Files\Nuke7.0v4\lib\site-packages C:\Program Files\Nuke7.0v4/plugins/modules ```
You are now providing some of the slashes yourself and letting [`os.path.join`](http://docs.python.org/2/library/os.path.html) pick the others. It's better to let Python pick all of them, or to provide them all yourself. Python uses backslashes for the latter part of the path because backslashes are the default on Windows.

```
import os

a = 'c:' # removed slash
b = 'myFirstDirectory' # removed slash
c = 'mySecondDirectory'
d = 'myThirdDirectory'
e = 'myExecutable.exe'

print os.path.join(a + os.sep, b, c, d, e)
```

I haven't tested this, but I hope it helps. It's more common to have a base path and to only join one other element, mostly files. By the way: you can use [`os.sep`](http://docs.python.org/2/library/os.html) for those moments when you want the best separator for the operating system Python is running on.

**Edit:** as dash-tom-bang states, apparently for Windows you do need to include a separator for the root of the path. Otherwise you create a relative path instead of an absolute one.
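To see the per-platform behaviour deterministically, you can call the OS-specific implementations behind `os.path` directly: `ntpath` (Windows semantics) and `posixpath` (POSIX semantics) work the same on any host OS:

```python
import ntpath
import posixpath

# Windows-style joining, even when run on Linux/macOS
win = ntpath.join('c:' + ntpath.sep, 'myFirstDirectory', 'myExecutable.exe')
print(win)    # c:\myFirstDirectory\myExecutable.exe

# POSIX-style joining, even when run on Windows
posix = posixpath.join('/home', 'user', 'bin')
print(posix)  # /home/user/bin
```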
You can use `.replace()` after `path.join()` to ensure the slashes are correct: ``` # .replace() all backslashes with forwardslashes print os.path.join(a, b, c, d, e).replace("\\","/") ``` This gives the output: ``` c:/myFirstDirectory/mySecondDirectory/myThirdDirectory/myExecutable.exe ``` As @sharpcloud suggested, it would be better to remove the slashes from your input strings, however this is an alternative.
Mixed slashes with os.path.join on Windows
[ "", "python", "windows", "path", "" ]
I am trying to execute a script "test.py" as below but running into the following error. ``` import subprocess from subprocess import Popen, PIPE, STDOUT verifygerritscmd = ("python test.py --InputFile=list.txt --Project=location/code --branch=master") gerritPipe = Popen(verifygerritscmd, stdout=PIPE, stderr=PIPE) (output, error) = gerritPipe.communicate() if error != "": print error raise IOError, "gerrit command %s failed" % (gerritCmd) print output ``` Error: ``` Traceback (most recent call last): File "test_anoop.py", line 4, in <module> gerritPipe = Popen(verifygerritscmd, stdout=PIPE, stderr=PIPE) File "/usr/lib/python2.6/subprocess.py", line 633, in __init__ errread, errwrite) File "/usr/lib/python2.6/subprocess.py", line 1139, in _execute_child raise child_exception OSError: [Errno 2] No such file or directory ``` Any idea what could be wrong here?
From the [documentation](http://docs.python.org/2/library/subprocess.html#popen-constructor):

> args should be a sequence of program arguments or else a single
> string. By default, the program to execute is the first item in args
> if args is a sequence. If args is a string, the interpretation is
> platform-dependent and described below. See the shell and executable
> arguments for additional differences from the default behavior. Unless
> otherwise stated, it is recommended to pass args as a sequence.

Which means that you need to do this:

```
verifygerritscmd = ("python test.py --InputFile=list.txt --Project=location/code --branch=master").split()
```

to create a list of args. Or you can specify `shell=True` in the Popen constructor:

```
gerritPipe = Popen(verifygerritscmd, stdout=PIPE, stderr=PIPE, shell=True)
```

---

Also, you should make sure that your path is correct. When using the subprocess module it is best not to be ambiguous about the paths you use, so calling "test.py" should probably be changed to "/path/to/test.py".
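A self-contained sketch of the sequence form, using `sys.executable` so it does not depend on `python` being on the PATH (the printed command is a stand-in for the asker's script):

```python
import subprocess
import sys

# each argument is its own list element -- no shell parsing involved
cmd = [sys.executable, "-c", "print('hello')"]
gerrit_pipe = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                               stderr=subprocess.PIPE,
                               universal_newlines=True)
output, error = gerrit_pipe.communicate()
print(output.strip())          # hello
print(gerrit_pipe.returncode)  # 0
```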
Try setting the parameter `shell=True`:

```
gerritPipe = Popen(verifygerritscmd, stdout=PIPE, stderr=PIPE, shell=True)
```

Explanation: without it, you are trying to execute a program whose *name* is the entire string `python test.py ...`. You can also define `verifygerritscmd` as a list:

```
verifygerritscmd = ['/usr/bin/python', 'test.py', 'argument1', 'argument2', ... ]
```
subprocess error while running an external python script
[ "", "python", "subprocess", "" ]
I would like to disable, or make invisible, roughly the first 100,000 records in a table just to make SQL searches faster. I don't want to delete them, because it is important that they stay there; I also know I can move them to another table, but I would like an easier, more organized way. What I'm saying is, if for example I could use some command like `DISABLE FROM table WHERE id <= 100000`, then the next time I do a search such as `SELECT * FROM table LIMIT 1`, it would return the 100,001st row without having to walk over all the 100,000 disabled rows. Is this possible at all with MySQL? Thanks!
Add a bit column, and set the bit to 0 when "disabled", and add that to your WHERE clause
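A sketch of that flag-column idea, run against SQLite from Python (the table and column names are made up; the question's database is MySQL, where the same SQL applies):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, enabled INTEGER NOT NULL DEFAULT 1)")
con.executemany("INSERT INTO t (id) VALUES (?)", [(i,) for i in range(1, 106)])

# "disable" the first 100 rows by setting the bit to 0
con.execute("UPDATE t SET enabled = 0 WHERE id <= 100")
con.execute("CREATE INDEX idx_t_enabled ON t (enabled)")  # keeps the filter cheap

row = con.execute("SELECT id FROM t WHERE enabled = 1 ORDER BY id LIMIT 1").fetchone()
print(row)  # (101,)
```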
If you do `SELECT * FROM table WHERE id > 100000 LIMIT 1`, then as long as `id` is indexed, the first 100,000 rows won't cause any performance issues.
Making some SQL rows invisible/disabled in order to improve the SQL search (for large databases)
[ "", "mysql", "sql", "" ]
I have a table `T1` with some Date/Time fields. I've combined several queries on this table with a UNION query, and made a new table `T2` using SELECT INTO on the union as follows: ``` SELECT * INTO T2 FROM (select * from query1_T1 union select * from query2_T1) ``` The problem is that `query1_T1` substitutes a blank string constant for some of the date fields, which results in T2 having Text fields instead of Date/Time fields. To illustrate: ``` query1_T1: select myUDF(someTextField),"" as newDateField from T1 query2_T1: select anotherUDF(someTextField),oldDateField from T1 ``` where `oldDateField` is a Date/Time. Is there a way that I can structure the SELECT INTO, or change `query1_T1`, so that I'll still get the same results from the query but `newDateField` will end up as a Date/Time?
You can always create the table separately from adding the data into it. First, define all the fields with the appropriate data types. Then use `INSERT INTO (columns) SELECT * FROM` to populate it. **UPDATED:** Or you can do a hybrid approach. First do your `SELECT INTO` with **no rows at all**: ``` SELECT * INTO T2 FROM query2_T1 WHERE 1=0 ``` This will create most of your structure. Then you can go and manually adjust any data types that didn't come through properly. With the structure properly adjusted, you can do this safely: ``` INSERT INTO T2 SELECT * FROM query1_T1 UNION SELECT * FROM query2_T1 ```
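The same two-step idea can be sketched in SQLite from Python (SQLite's `CREATE TABLE ... AS SELECT` plays the role of Access's `SELECT INTO`, and `WHERE 0` stands in for `WHERE 1=0`; table and column names are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t1 (id INTEGER, oldDateField TEXT)")
con.executemany("INSERT INTO t1 VALUES (?, ?)",
                [(1, '2013-01-01'), (2, '2014-01-01')])

# Step 1: copy the structure only -- WHERE 0 matches no rows
con.execute("CREATE TABLE t2 AS SELECT * FROM t1 WHERE 0")
# (here you would adjust any column types that didn't come through correctly)

# Step 2: populate the already-typed table
con.execute("INSERT INTO t2 SELECT * FROM t1")
rows = con.execute("SELECT * FROM t2 ORDER BY id").fetchall()
print(rows)  # [(1, '2013-01-01'), (2, '2014-01-01')]
```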
You can work around the issue by simply * changing `query1_T1` to return `Null` instead of an empty string as the second column, and * reversing the order of the queries that you UNION together That is, ``` SELECT * INTO T2 FROM ( select * from query2_T1 UNION ALL select * from query1_T1 ) ``` That way, the second column contains *some* date values when the table structure of T2 is determined by the first of the UNIONed queries, and the second query does not force the column to Text afterwards.
How to preserve datatype following a SELECT INTO?
[ "", "sql", "ms-access", "" ]
I am using python [Requests](http://docs.python-requests.org/en/latest/). I need to debug some `OAuth` activity, and for that I would like it to log all requests being performed. I could get this information with `ngrep`, but unfortunately it is not possible to grep https connections (which are needed for `OAuth`) How can I activate logging of all URLs (+ parameters) that `Requests` is accessing?
The underlying `urllib3` library logs all new connections and URLs with the [`logging` module](http://docs.python.org/2/library/logging.html), but not `POST` bodies. For `GET` requests this should be enough: ``` import logging logging.basicConfig(level=logging.DEBUG) ``` which gives you the most verbose logging option; see the [logging HOWTO](http://docs.python.org/2/howto/logging.html) for more details on how to configure logging levels and destinations. Short demo: ``` >>> import requests >>> import logging >>> logging.basicConfig(level=logging.DEBUG) >>> r = requests.get('http://httpbin.org/get?foo=bar&baz=python') DEBUG:urllib3.connectionpool:Starting new HTTP connection (1): httpbin.org:80 DEBUG:urllib3.connectionpool:http://httpbin.org:80 "GET /get?foo=bar&baz=python HTTP/1.1" 200 366 ``` Depending on the exact version of urllib3, the following messages are logged: * `INFO`: Redirects * `WARN`: Connection pool full (if this happens often increase the connection pool size) * `WARN`: Failed to parse headers (response headers with invalid format) * `WARN`: Retrying the connection * `WARN`: Certificate did not match expected hostname * `WARN`: Received response with both Content-Length and Transfer-Encoding, when processing a chunked response * `DEBUG`: New connections (HTTP or HTTPS) * `DEBUG`: Dropped connections * `DEBUG`: Connection details: method, path, HTTP version, status code and response length * `DEBUG`: Retry count increments This doesn't include headers or bodies. `urllib3` uses the `http.client.HTTPConnection` class to do the grunt-work, but that class doesn't support logging, it can normally only be configured to *print* to stdout. 
However, you can rig it to send all debug information to logging instead by introducing an alternative `print` name into that module: ``` import logging import http.client httpclient_logger = logging.getLogger("http.client") def httpclient_logging_patch(level=logging.DEBUG): """Enable HTTPConnection debug logging to the logging framework""" def httpclient_log(*args): httpclient_logger.log(level, " ".join(args)) # mask the print() built-in in the http.client module to use # logging instead http.client.print = httpclient_log # enable debugging http.client.HTTPConnection.debuglevel = 1 ``` Calling `httpclient_logging_patch()` causes `http.client` connections to output all debug information to a standard logger, and so are picked up by `logging.basicConfig()`: ``` >>> httpclient_logging_patch() >>> r = requests.get('http://httpbin.org/get?foo=bar&baz=python') DEBUG:urllib3.connectionpool:Starting new HTTP connection (1): httpbin.org:80 DEBUG:http.client:send: b'GET /get?foo=bar&baz=python HTTP/1.1\r\nHost: httpbin.org\r\nUser-Agent: python-requests/2.22.0\r\nAccept-Encoding: gzip, deflate\r\nAccept: */*\r\nConnection: keep-alive\r\n\r\n' DEBUG:http.client:reply: 'HTTP/1.1 200 OK\r\n' DEBUG:http.client:header: Date: Tue, 04 Feb 2020 13:36:53 GMT DEBUG:http.client:header: Content-Type: application/json DEBUG:http.client:header: Content-Length: 366 DEBUG:http.client:header: Connection: keep-alive DEBUG:http.client:header: Server: gunicorn/19.9.0 DEBUG:http.client:header: Access-Control-Allow-Origin: * DEBUG:http.client:header: Access-Control-Allow-Credentials: true DEBUG:urllib3.connectionpool:http://httpbin.org:80 "GET /get?foo=bar&baz=python HTTP/1.1" 200 366 ```
You need to enable debugging at `httplib` level (`requests` → `urllib3` → `httplib`). Here are some functions to toggle it on and off (`..._on()` and `..._off()`), or to have it on temporarily:

```
import logging
import contextlib
from http.client import HTTPConnection

def debug_requests_on():
    '''Switches on logging of the requests module.'''
    HTTPConnection.debuglevel = 1

    logging.basicConfig()
    logging.getLogger().setLevel(logging.DEBUG)
    requests_log = logging.getLogger("requests.packages.urllib3")
    requests_log.setLevel(logging.DEBUG)
    requests_log.propagate = True

def debug_requests_off():
    '''Switches off logging of the requests module, might be some side-effects'''
    HTTPConnection.debuglevel = 0

    root_logger = logging.getLogger()
    root_logger.setLevel(logging.WARNING)
    root_logger.handlers = []
    requests_log = logging.getLogger("requests.packages.urllib3")
    requests_log.setLevel(logging.WARNING)
    requests_log.propagate = False

@contextlib.contextmanager
def debug_requests():
    '''Use with 'with'!'''
    debug_requests_on()
    yield
    debug_requests_off()
```

Demo use:

```
>>> requests.get('http://httpbin.org/')
<Response [200]>

>>> debug_requests_on()
>>> requests.get('http://httpbin.org/')
INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): httpbin.org
DEBUG:requests.packages.urllib3.connectionpool:"GET / HTTP/1.1" 200 12150
send: 'GET / HTTP/1.1\r\nHost: httpbin.org\r\nConnection: keep-alive\r\nAccept-Encoding: gzip, deflate\r\nAccept: */*\r\nUser-Agent: python-requests/2.11.1\r\n\r\n'
reply: 'HTTP/1.1 200 OK\r\n'
header: Server: nginx
...
<Response [200]>

>>> debug_requests_off()
>>> requests.get('http://httpbin.org/')
<Response [200]>

>>> with debug_requests():
...     requests.get('http://httpbin.org/')
INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): httpbin.org
...
<Response [200]>
```

You will see the REQUEST, including HEADERS and DATA, and RESPONSE with HEADERS but without DATA.
The only thing missing will be the response.body which is not logged. [Source](http://www.python-requests.org/en/latest/api/)
Log all requests from the python-requests module
[ "python", "logging", "python-requests", "python-logging" ]
I'm trying to build a little num pad using tkinter, to be able to enter a password code on a pure touch system. This is what I have so far:

```
import tkinter
from tkinter import ttk

def main():
    root = tkinter.Tk()
    numpad = NumPad(root)
    root.mainloop()

btn_list = [
    '7', '8', '9',
    '4', '5', '6',
    '1', '2', '3',
    '0']

class NumPad(ttk.Frame):
    def __init__(self, root):
        ttk.Frame.__init__(self, root)
        self.grid()
        self.numpad_create()

    def numpad_create(self):
        r = 1
        c = 0
        for b in btn_list:
            cmd= lambda: print(b)
            self.b= ttk.Button(self, text=b,width=5,command=cmd).grid(row=r,column=c)
            print(b)
            c += 1
            if c > 4:
                c = 0
                r += 1

main()
```

If I start the program, it prints the numbers b to the shell, so this works. Also the buttons are displayed as wished. But if I click them, I always get 0 as output to the shell. I can't find the mistake I've made; I think it might have something to do with garbage collecting, but I am not sure. Could somebody help me? It would be really great! Thank you!
`cmd`, your lambda, does not capture the value of `b` at the time the lambda is declared; it looks `b` up when the button is clicked. By then the loop has finished, the final value of `b` is `'0'`, and that is why all your buttons end up printing `0`.
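The late-binding behaviour is easy to demonstrate outside tkinter (a minimal Python 3 sketch, unrelated to buttons):

```python
# All three lambdas close over the same variable i, which ends up as 2
late = [lambda: i for i in range(3)]
print([f() for f in late])        # [2, 2, 2]

# Binding i as a default argument captures its value at definition time
bound = [lambda i=i: i for i in range(3)]
print([f() for f in bound])       # [0, 1, 2]
```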
Your lambda should be specified like this:

```
cmd= lambda b=b: print(b)
```

This will bind the current value of `b` as the default value of the parameter named `b` in your lambda. Without this binding, the lambda will use the value of `b` at the time it is pressed rather than the value at the time it was created. Personally I find it confusing to use the same variable in both contexts; I would write it like this, which is functionally identical but a little easier to understand:

```
cmd= lambda button=b: print(button)
```
Little NumPad in tkinter
[ "python", "variables", "numbers", "tkinter" ]
Trying to learn tkinter and python. I want to display line numbers for the Text widget in an adjacent frame

```
from Tkinter import *

root = Tk()

txt = Text(root)
txt.pack(expand=YES, fill=BOTH)

frame= Frame(root, width=25)
# frame.pack(expand=NO, fill=Y, side=LEFT)

root.mainloop()
```

I have seen an example on a site called unpythonic, but it assumes that the line height of txt is 6 pixels. I am trying something like this: 1) Binding the Any-KeyPress event to a function that returns the line on which the keypress occurs:

```
textPad.bind("<Any-KeyPress>", linenumber)

def linenumber(event=None):
    line, column = textPad.index('end').split('.')
    #creating line number toolbar
    try:
        linelabel.pack_forget()
        linelabel.destroy()
        lnbar.pack_forget()
        lnbar.destroy()
    except:
        pass
    lnbar = Frame(root, width=25)
    for i in range(0, len(line)):
        linelabel= Label(lnbar, text=i)
        linelabel.pack(side=LEFT)
    lnbar.pack(expand=NO, fill=X, side=LEFT)
```

Unfortunately this is giving some weird numbers on the frame. Is there a simpler solution? How should I approach this?
I have a relatively foolproof solution, but it's complex and will likely be hard to understand because it requires some knowledge of how Tkinter and the underlying tcl/tk text widget works. I'll present it here as a complete solution that you can use as-is because I think it illustrates a unique approach that works quite well. Note that this solution works no matter what font you use, and whether or not you use different fonts on different lines, have embedded widgets, and so on. ## Importing Tkinter Before we get started, the following code assumes tkinter is imported like this if you're using python 3.0 or greater: ``` import tkinter as tk ``` ... or this, for python 2.x: ``` import Tkinter as tk ``` ## The line number widget Let's tackle the display of the line numbers. What we want to do is use a canvas so that we can position the numbers precisely. We'll create a custom class, and give it a new method named `redraw` that will redraw the line numbers for an associated text widget. We also give it a method `attach`, for associating a text widget with this widget. This method takes advantage of the fact that the text widget itself can tell us exactly where a line of text starts and ends via the `dlineinfo` method. This can tell us precisely where to draw the line numbers on our canvas. It also takes advantage of the fact that `dlineinfo` returns `None` if a line is not visible, which we can use to know when to stop displaying line numbers. 
``` class TextLineNumbers(tk.Canvas): def __init__(self, *args, **kwargs): tk.Canvas.__init__(self, *args, **kwargs) self.textwidget = None def attach(self, text_widget): self.textwidget = text_widget def redraw(self, *args): '''redraw line numbers''' self.delete("all") i = self.textwidget.index("@0,0") while True : dline= self.textwidget.dlineinfo(i) if dline is None: break y = dline[1] linenum = str(i).split(".")[0] self.create_text(2,y,anchor="nw", text=linenum) i = self.textwidget.index("%s+1line" % i) ``` If you associate this with a text widget and then call the `redraw` method, it should display the line numbers just fine. ## Automatically updating the line numbers This works, but has a fatal flaw: you have to know when to call `redraw`. You could create a binding that fires on every key press, but you also have to fire on mouse buttons, and you have to handle the case where a user presses a key and uses the auto-repeat function, etc. The line numbers also need to be redrawn if the window is grown or shrunk or the user scrolls, so we fall into a rabbit hole of trying to figure out every possible event that could cause the numbers to change. There is another solution, which is to have the text widget fire an event whenever something changes. Unfortunately, the text widget doesn't have direct support for notifying the program of changes. To get around that, we can use a proxy to intercept changes to the text widget and generate an event for us. In an answer to the question "https://stackoverflow.com/q/13835207/7432" I offered a similar solution that shows how to have a text widget call a callback whenever something changes. This time, instead of a callback we'll generate an event since our needs are a little different. ## A custom text class Here is a class that creates a custom text widget that will generate a `<<Change>>` event whenever text is inserted or deleted, or when the view is scrolled. 
``` class CustomText(tk.Text): def __init__(self, *args, **kwargs): tk.Text.__init__(self, *args, **kwargs) # create a proxy for the underlying widget self._orig = self._w + "_orig" self.tk.call("rename", self._w, self._orig) self.tk.createcommand(self._w, self._proxy) def _proxy(self, *args): # let the actual widget perform the requested action cmd = (self._orig,) + args result = self.tk.call(cmd) # generate an event if something was added or deleted, # or the cursor position changed if (args[0] in ("insert", "replace", "delete") or args[0:3] == ("mark", "set", "insert") or args[0:2] == ("xview", "moveto") or args[0:2] == ("xview", "scroll") or args[0:2] == ("yview", "moveto") or args[0:2] == ("yview", "scroll") ): self.event_generate("<<Change>>", when="tail") # return what the actual widget returned return result ``` ## Putting it all together Finally, here is an example program which uses these two classes: ``` class Example(tk.Frame): def __init__(self, *args, **kwargs): tk.Frame.__init__(self, *args, **kwargs) self.text = CustomText(self) self.vsb = tk.Scrollbar(self, orient="vertical", command=self.text.yview) self.text.configure(yscrollcommand=self.vsb.set) self.text.tag_configure("bigfont", font=("Helvetica", "24", "bold")) self.linenumbers = TextLineNumbers(self, width=30) self.linenumbers.attach(self.text) self.vsb.pack(side="right", fill="y") self.linenumbers.pack(side="left", fill="y") self.text.pack(side="right", fill="both", expand=True) self.text.bind("<<Change>>", self._on_change) self.text.bind("<Configure>", self._on_change) self.text.insert("end", "one\ntwo\nthree\n") self.text.insert("end", "four\n",("bigfont",)) self.text.insert("end", "five\n") def _on_change(self, event): self.linenumbers.redraw() ``` ... and, of course, add this at the end of the file to bootstrap it: ``` if __name__ == "__main__": root = tk.Tk() Example(root).pack(side="top", fill="both", expand=True) root.mainloop() ```
Here's my attempt at doing the same thing. I tried Bryan Oakley's answer above; it looks and works great, but it comes at a price in performance. Every time I load lots of lines into the widget, it takes a long time. To work around this, I used a normal `Text` widget to draw the line numbers. Here's how I did it:

Create the Text widget and grid it to the left of the main text widget that you're adding the lines for; let's call it `textarea`. Make sure you also use the same font you use for `textarea`:

```
self.linenumbers = Text(self, width=3)
self.linenumbers.grid(row=__textrow, column=__linenumberscol, sticky=NS)
self.linenumbers.config(font=self.__myfont)
```

Add a tag to right-justify all lines added to the line numbers widget; let's call it `line`:

```
self.linenumbers.tag_configure('line', justify='right')
```

Disable the widget so that it cannot be edited by the user:

```
self.linenumbers.config(state=DISABLED)
```

Now the tricky part is adding one scrollbar, let's call it `uniscrollbar`, to control both the main text widget as well as the line numbers text widget. 
In order to do that, we first need two methods, one to be called by the scrollbar, which can then update the two text widgets to reflect the new position, and the other to be called whenever a text area is scrolled, which will update the scrollbar: ``` def __scrollBoth(self, action, position, type=None): self.textarea.yview_moveto(position) self.linenumbers.yview_moveto(position) def __updateScroll(self, first, last, type=None): self.textarea.yview_moveto(first) self.linenumbers.yview_moveto(first) self.uniscrollbar.set(first, last) ``` Now we're ready to create the `uniscrollbar`: ``` self.uniscrollbar= Scrollbar(self) self.uniscrollbar.grid(row=self.__uniscrollbarRow, column=self.__uniscrollbarCol, sticky=NS) self.uniscrollbar.config(command=self.__scrollBoth) self.textarea.config(yscrollcommand=self.__updateScroll) self.linenumbers.config(yscrollcommand=self.__updateScroll) ``` Voila! You now have a very lightweight text widget with line numbers: [![enter image description here](https://i.stack.imgur.com/yDpnJ.png)](https://i.stack.imgur.com/yDpnJ.png)
Tkinter adding line number to text widget
[ "python", "tkinter" ]
I receive data in a certain format. Dates are numeric(8,0). For example `20120101 = YYYYMMDD`. There exist rows with values like `(0,1,2,3,6)` in that date field, which are thus not dates. I want to check if it is a date and convert it, else it can be null. Now the following code works, but I was hoping there is a better way.

```
(CASE WHEN [invoice_date] LIKE '________' --There are 8 underscores
THEN convert(datetime, cast([invoice_date] as char(8)))
END) AS Invoice_Date
```

Any help will be appreciated.
Use the `ISDATE` function, like this:

```
(CASE WHEN ISDATE (invoice_date) = 1
THEN convert(datetime, cast([invoice_date] as char(8)))
END) AS Invoice_Date
```
You can use the `ISDATE` function, but what would you do for non-date values? In my suggested solution you can choose to return a null:

```
select
(case when ISDATE (invoice_date)=1
then convert(datetime, invoice_date)
else null end) AS Invoice_Date
from your_table
```
Check if value is date and convert it
[ "sql", "sql-server", "sqldatetime" ]
I work with spatial data that is output to text files with the following format: ``` COMPANY NAME P.O. BOX 999999 ZIP CODE , CITY +99 999 9999 23 April 2013 09:27:55 PROJECT: Link Ref -------------------------------------------------------------------------------- -------------------------------------------------------------------------------- Design DTM is 30MB 2.5X2.5 Stripping applied to design is 0.000 Point Number Easting Northing R.L. Design R.L. Difference Tol Name 3224808 422092.700 6096059.380 2.520 -19.066 -21.586 -- 3224809 422092.200 6096059.030 2.510 -19.065 -21.575 -- <Remainder of lines> 3273093 422698.920 6096372.550 1.240 -20.057 -21.297 -- Average height difference is -21.390 RMS is 21.596 0.00 % above tolerance 98.37 % below tolerance End of Report ``` As shown, the files have a header and a footer. The data is delimited by spaces, but not an equal amount between the columns. What I need, is comma delimited files with Easting, Northing and Difference. I'd like to prevent having to modify several hundred large files by hand and am writing a small script to process the files. This is what I have so far: ``` #! /usr/bin/env python import csv,glob,os from itertools import islice list_of_files = glob.glob('C:/test/*.txt') for filename in list_of_files: (short_filename, extension )= os.path.splitext(filename) print short_filename file_out_name = short_filename + '_ed' + extension with open (filename, 'rb') as source: reader = csv.reader( source) for row in islice(reader, 10, None): file_out= open (file_out_name, 'wb') writer= csv.writer(file_out) writer.writerows(reader) print 'Created file: '+ file_out_name file_out.close() print 'All done!' ``` Questions: * How can I let the line starting with 'Point number' become the header in the output file? I'm trying to put DictReader in place of the reader/writer bit but can't get it to work. 
* Writing the output file with delimiter ',' does work but writes a comma in place of each space, giving way too much empty columns in my output file. How do I circumvent this? * How do I remove the footer?
I can see a problem with your code: you are creating a new `writer` for each row, so you will end up with only the last one. Your code could be something like this, without the need of CSV readers or writers, as the format is simple enough to be parsed as plain text (problems would arise if you had text columns, with escaped characters and so on).

```
def process_file(source, dest):
    header_found = False
    for line in source:
        line = line.strip()
        if not header_found:
            #ignore everything until we find this text
            header_found = line.startswith('Point Number')
        elif not line:
            return #we are done when we find an empty line, I guess
        else:
            #write the needed columns
            columns = line.split()
            dest.write(','.join(columns[i] for i in (1, 2, 5)) + '\n')

for filename in list_of_files:
    short_filename, extension = os.path.splitext(filename)
    file_out_name = short_filename + '_ed' + extension
    with open(filename, 'r') as source:
        with open(file_out_name, 'w') as dest:
            process_file(source, dest)
```
This worked: ``` #! /usr/bin/env python import glob,os list_of_files = glob.glob('C:/test/*.txt') def process_file(source, dest): header_found = False for line in source: line = line.strip() if not header_found: #ignore everything until we find this text header_found = line.startswith('Stripping applied') #otherwise, header is lost elif not line: return #we are done when we find an empty line else: #write the needed columns columns = line.split() dest.writelines(','.join(columns[i] for i in (1, 2, 5))+"\n") #newline character adding was necessary for filename in list_of_files: short_filename, extension = os.path.splitext(filename) file_out_name = short_filename + '_ed' + ".csv" with open(filename, 'r') as source: with open(file_out_name, 'wb') as dest: process_file(source, dest) ```
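For the record, the same loop could also feed a `csv.writer`, which would additionally handle quoting if a column ever contained a comma. A Python 3 sketch using in-memory stand-in data shaped like the report (the sample lines below are illustrative, trimmed from the example in the question):

```python
import csv
import io

# A trimmed, made-up stand-in for one report file
source = io.StringIO(
    "Stripping applied to design is 0.000\n"
    "3224808 422092.700 6096059.380 2.520 -19.066 -21.586 --\n"
    "3224809 422092.200 6096059.030 2.510 -19.065 -21.575 --\n"
    "\n"
    "Average height difference is -21.390\n"
)
out = io.StringIO()
writer = csv.writer(out, lineterminator="\n")

header_found = False
for line in source:
    line = line.strip()
    if not header_found:
        # skip everything up to and including the marker line
        header_found = line.startswith("Stripping applied")
    elif not line:
        break  # blank line means the footer starts here
    else:
        columns = line.split()
        # Easting, Northing, Difference
        writer.writerow([columns[1], columns[2], columns[5]])

print(out.getvalue())
# 422092.700,6096059.380,-21.586
# 422092.200,6096059.030,-21.575
```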
CSV file processing in Python
[ "python", "file", "csv" ]
Given lists a and b

```
a = [5, 8, 9]
b = [6, 1, 0]
```

I want to create a generator gen such that:

```
for x in gen:
    print x
```

outputs

```
5, 8, 9, 6, 1, 0
```
You could use `itertools.chain`: ``` >>> from itertools import chain >>> a = [5, 8, 9] >>> b = [6, 1, 0] >>> it=chain(a,b) >>> for x in it: print x, ... 5 8 9 6 1 0 ```
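If the inner lists arrive as one list of unknown length, `chain.from_iterable` saves the star-unpacking:

```python
from itertools import chain

lists = [[5, 8, 9], [6, 1, 0], [7]]
gen = chain.from_iterable(lists)
print(list(gen))  # [5, 8, 9, 6, 1, 0, 7]
```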
You can also write your own `chain`-style generator:

```
def chain(*args):
    for arg in args:
        for item in arg:
            yield item

a = [5, 8, 9]
b = [6, 1, 0]

for x in chain(a,b):
    print x,
print

print ', '.join(map(str,chain(a,b)))
```
How do I make a generator from two lists in python?
[ "python", "generator" ]
I have the following code where I scan every line and put in a list. If the line matches a string "New changes", I don't want to put in the list. Any suggestion on how to achieve this? ``` with open('file.txt', 'rb') as f: mainlist = [line.strip() for line in f] ```
List comprehensions can also do filtering: ``` mainlist = [line.strip() for line in f if "New changes" not in line] ```
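The same comprehension works on any iterable of lines, so it's easy to verify without touching a file:

```python
lines = ["keep me\n", "New changes\n", "also keep\n"]
mainlist = [line.strip() for line in lines if "New changes" not in line]
print(mainlist)  # ['keep me', 'also keep']
```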
You can filter within the list comprehension : ``` mainlist = [line.strip() for line in f if line.strip() != "New changes"] ```
Read a file, skip unwanted lines & add into a List
[ "python" ]
For example, I have a list of values in a string: `'a', 'c', 'b', 'd'`. From the data table, I got a column result like:

```
Result
'a'
'b'
```

How do I write a SQL query that returns the values not in the table: `'c', 'd'`, or

```
NewResult
'c'
'd'
```

? It is also OK if other simple tools than SQL can be used. I only need the result. Thanks!
``` Create FUNCTION F_SplitAsTable ( @txt varchar(max) ) RETURNS @tab TABLE ( ID Varchar(2000) ) AS BEGIN declare @i int declare @s varchar(2000) Set @i = CHARINDEX(',',@txt) While @i>1 begin set @s = LEFT(@txt,@i-1) insert into @tab (id) values (@s) Set @txt=RIGHT(@txt,Len(@txt)-@i) Set @i = CHARINDEX(',',@txt) end insert into @tab (id) values (@txt) RETURN END GO Declare @a table (Ch varchar(10)) insert into @a Values ('a') insert into @a Values ('b') Select s.* from dbo.F_SplitAsTable('a,b,c,d') s left join @a a on a.Ch=s.ID where a.Ch is NULL ```
Step 1: Load the search values in a temp table. ``` DECLARE @Search table (SearchFor char(1) not null) INSERT @Search values ('a'), ('b'), ('c'), ('d') ``` (There are any number of ways to set this up, this is just the fastest to type) Run a query like so: ``` SELECT SearchFor from @Search except select SearchIn from DataTable ``` (Again, there are many forms that "in a not in b" queries can take.) This will return everything in the first set (your temp table) that is not also found in the second set.
How to find values stored in a string not in a sql data table
[ "sql", "sql-server" ]
I am just learning regex and I'm a bit confused here. I've got a string from which I want to extract an int with at least 4 digits and at most 7 digits. I tried it as follows: ``` >>> import re >>> teststring = 'abcd123efg123456' >>> re.match(r"[0-9]{4,7}$", teststring) ``` Where I was expecting 123456, unfortunately this results in nothing at all. Could anybody help me out a little bit here?
@ExplosionPills is correct, but there would still be two problems with your regex. First, `$` matches the *end* of the string. I'm guessing you'd like to be able to extract an int in the middle of the string as well, *e.g.* `abcd123456efg789` to return `123456`. To fix that, you want this: ``` r"[0-9]{4,7}(?![0-9])" ^^^^^^^^^ ``` The added portion is a *negative lookahead assertion*, meaning, "...not followed by any more numbers." Let me simplify that by the use of `\d` though: ``` r"\d{4,7}(?!\d)" ``` That's better. Now, the second problem. You have no constraint on the left side of your regex, so given a string like `abcd123efg123456789`, you'd actually match `3456789`. So, you need a *negative lookbehind assertion* as well: ``` r"(?<!\d)\d{4,7}(?!\d)" ```
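Putting both lookarounds together with `re.search` (the first string is the one from the question; the second is a made-up example with a too-long digit run):

```python
import re

pattern = r"(?<!\d)\d{4,7}(?!\d)"
print(re.search(pattern, 'abcd123efg123456').group())  # 123456

# An 8-digit run matches nothing: no 4-7 digit slice of it
# is free of neighbouring digits, so only the 4-digit run is found
print(re.findall(pattern, 'ab1234cd12345678'))         # ['1234']
```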
`.match` will only match if the string *starts* with the pattern. Use `.search`.
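A quick side-by-side with the pattern from the question:

```python
import re

s = 'abcd123efg123456'
print(re.match(r"[0-9]{4,7}$", s))           # None - match is anchored at the start
print(re.search(r"[0-9]{4,7}$", s).group())  # 123456
```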
Python regex for int with at least 4 digits
[ "python", "regex", "int", "match" ]
I have a dictionary like this: ``` dict1={'a':4,'d':2} ``` I have a list of dictionaries like this: ``` diclist=[{'b':3,'c':3},{'e':1,'f':1}] ``` As an output, I want to let dict1 be like this: ``` dict1={'a':4,'b':3,'c':3,'d':2,'e':1,'f':1} ``` so, I need to 1. compare dict1's values with diclist's value 2. if a value of a dict in diclist is smaller than the one of dict1, insert the dict to dict1 3. iterate 2 in diclist This might be easy, however, if you are willing to help with this, it'll be greatly appreciated.
Since your keys in your example are all unique, how is it different from just merging all the dicts? ``` dict1 = {'a': 4, 'd': 2} diclist = [{'b': 3, 'c': 3}, {'e': 1, 'f': 1}] for d in diclist: dict1.update(d) ``` Here's a general approach. Consider providing more comprehensive examples in future ``` >>> dict1={'a':4,'d':2} >>> diclist=[{'b':3,'c':3},{'e':1,'f':1}] >>> >>> for d in diclist: ... for k, v in d.items(): ... if k not in dict1 or v < dict1[k]: ... dict1[k] = v ... >>> dict1 {'a': 4, 'c': 3, 'b': 3, 'e': 1, 'd': 2, 'f': 1} ```
```
for d in diclist:                    # iterate over the dicts
    for k, v in d.items():           # iterate over the elements
        v2 = dict1.setdefault(k, v)  # set the value if the key does not exist
                                     # and return the value (existing or newly set)
        if v < v2:                   # compare the existing value with the new value
            dict1[k] = v
```

This is probably the most concise and readable approach.
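A self-contained run on the question's example data (note the method is spelled `setdefault`):

```python
dict1 = {'a': 4, 'd': 2}
diclist = [{'b': 3, 'c': 3}, {'e': 1, 'f': 1}]

for d in diclist:
    for k, v in d.items():
        v2 = dict1.setdefault(k, v)  # insert v if k is missing; returns the stored value
        if v < v2:
            dict1[k] = v

print(dict1)  # {'a': 4, 'd': 2, 'b': 3, 'c': 3, 'e': 1, 'f': 1}
```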
insert dictionaries to a dictionary in Python
[ "python", "dictionary", "insert" ]
I have a list of tuples:

```
my_lst = [('2.0', '1.01', '0.9'), ('-2.0', '1.12', '0.99')]
```

I'm looking for a solution to unpack each value so that it prints out a comma-separated line of values:

```
2.0, 1.01, 0.9, -2.0, 1.12, 0.99
```

The catch is, the length of the lists varies.
Use `join` twice: ``` >>> lis=[('2.0', '1.01', '0.9'), ('-2.0', '1.12', '0.99')] >>> ", ".join(", ".join(x) for x in lis) '2.0, 1.01, 0.9, -2.0, 1.12, 0.99' ```
You can use itertools' `chain(..)`.

```
>>> from itertools import chain
>>> my_lst = [('2.0', '1.01', '0.9'), ('-2.0', '1.12', '0.99')]
>>> list(chain(*my_lst))
['2.0', '1.01', '0.9', '-2.0', '1.12', '0.99']
```

And then join them with a ",".

```
>>> ",".join(list(chain(*my_lst)))
'2.0,1.01,0.9,-2.0,1.12,0.99'
```
How to unpack a list of indefinite length
[ "python", "list", "python-2.7" ]
I have a query that looks like this: ``` SELECT * FROM employees e, departments d1, departments d2, departments d3, departments d4 WHERE e.dep1 = d1.dep(+) AND e.dep2 = d2.dep(+) AND e.dep3 = d3.dep(+) AND e.dep4 = d4.dep(+); ``` Is there a better way to write this so that I don't need to use the same table in my query multiple times? I know Oracle's optimizer probably works around this rather nicely, but if there is a more efficient way to write a query like this, I'm all ears. Keep in mind that the above is merely an example, my actual query has a lot more going on in it. I'm using Oracle 11.2.0.3 on Windows 2003 x64. Thanks, Tom
This is perfectly valid. The contrived example my colleague and I often use is `city_of_birth` and `city_of_residence`. Let's assume we want to query employees who now live in a different country to the one they were born in (and that all countries have states). You would query this as follows:

```
select e.*
  from employees e,
       cities city_of_birth,
       cities city_of_residence,
       states state_of_birth,
       states state_of_residence
 where e.city_of_birth_id = city_of_birth.id
   and e.city_of_residence_id = city_of_residence.id
   and city_of_birth.state_id = state_of_birth.id
   and city_of_residence.state_id = state_of_residence.id
   and state_of_birth.country_id != state_of_residence.country_id;
```

The thing to note is that tables need to be referenced in the `from` clause of a query as often as there are different meanings to them. Another way to think about it: you need to reference the same table multiple times in the `from` clause if you're going to be selecting different rows from each "instance".
If you want the details of all the departments flattened into one row, you need several joins, but I suggest writing it this way:

```
SELECT *
FROM employees e
LEFT JOIN departments d1 on ( d1.dep = e.dep1 )
LEFT JOIN departments d2 on ( d2.dep = e.dep2 )
LEFT JOIN departments d3 on ( d3.dep = e.dep3 )
LEFT JOIN departments d4 on ( d4.dep = e.dep4 )
```
Duplicate table in Oracle query
[ "sql", "oracle" ]
I'm trying to install Django in my mac. when I run the command python manage.py runserver. I get the error RuntimeError: maximum recursion depth exceeded in cmp. I have pasted my error message below. I even increased the setrecursion limit to 2000 and tried, it didn't work. Any of your help in fixing this is appreciated... Validating models... ``` Unhandled exception in thread started by <bound method Command.inner_run of <django.contrib.staticfiles.management.commands.runserver.Command object at 0x1087f4a10>> Traceback (most recent call last): File "/Library/Python/2.7/site-packages/django/core/management/commands/runserver.py", line 92, in inner_run self.validate(display_num_errors=True) File "/Library/Python/2.7/site-packages/django/core/management/base.py", line 280, in validate num_errors = get_validation_errors(s, app) File "/Library/Python/2.7/site-packages/django/core/management/validation.py", line 35, in get_validation_errors for (app_name, error) in get_app_errors().items(): File "/Library/Python/2.7/site-packages/django/db/models/loading.py", line 166, in get_app_errors self._populate() File "/Library/Python/2.7/site-packages/django/db/models/loading.py", line 72, in _populate self.load_app(app_name, True) File "/Library/Python/2.7/site-packages/django/db/models/loading.py", line 96, in load_app models = import_module('.models', app_name) File "/Library/Python/2.7/site-packages/django/utils/importlib.py", line 35, in import_module __import__(name) File "/Library/Python/2.7/site-packages/django/contrib/auth/models.py", line 370, in <module> class AbstractUser(AbstractBaseUser, PermissionsMixin): File "/Library/Python/2.7/site-packages/django/db/models/base.py", line 213, in __new__ new_class.add_to_class(field.name, copy.deepcopy(field)) File "/Library/Python/2.7/site-packages/django/db/models/base.py", line 265, in add_to_class value.contribute_to_class(cls, name) File "/Library/Python/2.7/site-packages/django/db/models/fields/__init__.py", line 257, in 
contribute_to_class cls._meta.add_field(self) File "/Library/Python/2.7/site-packages/django/db/models/options.py", line 179, in add_field self.local_fields.insert(bisect(self.local_fields, field), field) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/functools.py", line 56, in <lambda> '__lt__': [('__gt__', lambda self, other: other < self), File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/functools.py", line 56, in <lambda> '__lt__': [('__gt__', lambda self, other: other < self), File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/functools.py", line 56, in <lambda> '__lt__': [('__gt__', lambda self, other: other < self), File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/functools.py", line 56, in <lambda> '__lt__': [('__gt__', lambda self, other: other < self), File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/functools.py", line 56, in <lambda> '__lt__': [('__gt__', lambda self, other: other < self), File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/functools.py", line 56, in <lambda> '__lt__': [('__gt__', lambda self, other: other < self), File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/functools.py", line 56, in <lambda> '__lt__': [('__gt__', lambda self, other: other < self), File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/functools.py", line 56, in <lambda> '__lt__': [('__gt__', lambda self, other: other < self), File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/functools.py", line 56, in <lambda> File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/functools.py", line 56, in <lambda> '__lt__': [('__gt__', lambda self, other: other < self), File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/functools.py", line 56, in <lambda> '__lt__': [('__gt__', lambda self, 
other: other < self), File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/functools.py", line 56, in <lambda> '__lt__': [('__gt__', lambda self, other: other < self), File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/functools.py", line 56, in <lambda> '__lt__': [('__gt__', lambda self, other: other < self), File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/functools.py", line 56, in <lambda> '__lt__': [('__gt__', lambda self, other: other < self), File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/functools.py", line 56, in <lambda> '__lt__': [('__gt__', lambda self, other: other < self), RuntimeError: maximum recursion depth exceeded in cmp ```
If you haven't already, try installing Python 2.7.5. I had a similar issue when using Django 1.5.1 with Python 2.7.2; it was resolved when I switched to 2.7.5. To get Python 2.7.5 running on your Mac, go [here](http://www.python.org/download/releases/2.7.5/) and download the Mac installer for your system. After installing, go to the "Python 2.7" subfolder of the system Applications folder and double-click "Update Shell Profile" to use 2.7.5 from the command line. After doing that, run `python --version` from the command line to confirm you're using 2.7.5. Hope that helps!
The problem is in the `functools.py` file, which ships with Python. I had just installed a new copy of Python 2.7.5 and the file was wrong (in another, older installation of Python 2.7.5 the `functools.py` file is correct). To fix the problem, replace this (around line 56 in `python\Lib\functools.py`):

```
convert = {
    '__lt__': [('__gt__', lambda self, other: other < self),
               ('__le__', lambda self, other: not other < self),
               ('__ge__', lambda self, other: not self < other)],
    '__le__': [('__ge__', lambda self, other: other <= self),
               ('__lt__', lambda self, other: not other <= self),
               ('__gt__', lambda self, other: not self <= other)],
    '__gt__': [('__lt__', lambda self, other: other > self),
               ('__ge__', lambda self, other: not other > self),
               ('__le__', lambda self, other: not self > other)],
    '__ge__': [('__le__', lambda self, other: other >= self),
               ('__gt__', lambda self, other: not other >= self),
               ('__lt__', lambda self, other: not self >= other)]
}
```

with this:

```
convert = {
    '__lt__': [('__gt__', lambda self, other: not (self < other or self == other)),
               ('__le__', lambda self, other: self < other or self == other),
               ('__ge__', lambda self, other: not self < other)],
    '__le__': [('__ge__', lambda self, other: not self <= other or self == other),
               ('__lt__', lambda self, other: self <= other and not self == other),
               ('__gt__', lambda self, other: not self <= other)],
    '__gt__': [('__lt__', lambda self, other: not (self > other or self == other)),
               ('__ge__', lambda self, other: self > other or self == other),
               ('__le__', lambda self, other: not self > other)],
    '__ge__': [('__le__', lambda self, other: (not self >= other) or self == other),
               ('__gt__', lambda self, other: self >= other and not self == other),
               ('__lt__', lambda self, other: not self >= other)]
}
```
maximum recursion depth exceeded in cmp error while executing python manage.py runserver
[ "python", "django" ]
I use my Python script for pentesting and I want to call another script in a new terminal. I'm getting the following error:

> There was an error creating the child process for this terminal.

If I use this line with a space, it only opens a new terminal with a Python shell, but it doesn't read the path of the new script `/root/Desktop/script/WPA1TKIP.py`:

```
os.system("gnome-terminal -e python /root/Desktop/script/WPA1TKIP.py")
```
Try quoting the command you pass to `-e`:

```
os.system("gnome-terminal -e 'python /root/Desktop/script/WPA1TKIP.py'")
```

Otherwise the argument to `-e` is only `python`; the rest is silently ignored by `gnome-terminal`.
That's because the command you are using is malformed: the command you want to run contains a space character, so you need to quote the `python [filename]` part:

```
gnome-terminal -e "python /root/Desktop/script/WPA1TKIP.py"
```

Also, don't use `os.system`; use `subprocess`. So you'll use similar commands in the end:

```
subprocess.call(['gnome-terminal', '-e', 'python /root/Desktop/script/WPA1TKIP.py'])
```

Note that in that case `subprocess` takes care of the escaping; you just pass a list of parameters/command parts.
Python call another Python script
[ "python" ]
This is my code; it contains no recursion, but it hits the maximum recursion depth on the first pickle...

Code:

```
#!/usr/bin/env python

from bs4 import BeautifulSoup
from urllib2 import urlopen
import pickle

# open page and return soup list
def get_page_startups(page_url):
    html = urlopen(page_url).read()
    soup = BeautifulSoup(html, "lxml")
    return soup.find_all("div", "startup item")

#
# Get certain text from startup soup
#
def get_name(startup):
    return startup.find("a", "profile").string

def get_website(startup):
    return startup.find("a", "visit")["href"]

def get_status(startup):
    return startup.find("p", "status").strong.string[8:]

def get_twitter(startup):
    return startup.find("a", "comment").string

def get_high_concept_pitch(startup):
    return startup.find("div", "headline").find_all("em")[1].string

def get_elevator_pitch(startup):
    startup_soup = BeautifulSoup(urlopen("http://startupli.st" + startup.find("a", "profile")["href"]).read(), "lxml")
    return startup_soup.find("p", "desc").string.rstrip().lstrip()

def get_tags(startup):
    return startup.find("p", "tags").string

def get_blog(startup):
    try:
        return startup.find("a", "visit blog")["href"]
    except TypeError:
        return None

def get_facebook(startup):
    try:
        return startup.find("a", "visit facebook")["href"]
    except TypeError:
        return None

def get_angellist(startup):
    try:
        return startup.find("a", "visit angellist")["href"]
    except TypeError:
        return None

def get_linkedin(startup):
    try:
        return startup.find("a", "visit linkedin")["href"]
    except TypeError:
        return None

def get_crunchbase(startup):
    try:
        return startup.find("a", "visit crunchbase")["href"]
    except TypeError:
        return None

# site to scrape
BASE_URL = "http://startupli.st/startups/latest/"

# scrape all pages
for page_no in xrange(1, 142):
    startups = get_page_startups(BASE_URL + str(page_no))

    # search soup and pickle data
    for i, startup in enumerate(startups):
        s = {}
        s['name'] = get_name(startup)
        s['website'] = get_website(startup)
        s['status'] = get_status(startup)
        s['high_concept_pitch'] = get_high_concept_pitch(startup)
        s['elevator_pitch'] = get_elevator_pitch(startup)
        s['tags'] = get_tags(startup)
        s['twitter'] = get_twitter(startup)
        s['facebook'] = get_facebook(startup)
        s['blog'] = get_blog(startup)
        s['angellist'] = get_angellist(startup)
        s['linkedin'] = get_linkedin(startup)
        s['crunchbase'] = get_crunchbase(startup)

        f = open(str(i) + ".pkl", "wb")
        pickle.dump(s, f)
        f.close()

    print "Done " + str(page_no)
```

This is the content of `0.pkl` after the exception is raised: <http://pastebin.com/DVS1GKzz>

It's a thousand lines long! There's some HTML from the BASE_URL in the pickle... but I didn't pickle any HTML strings...
BeautifulSoup `.string` attributes aren't actually strings:

```
>>> from bs4 import BeautifulSoup
>>> soup = BeautifulSoup('<div>Foo</div>')
>>> soup.find('div').string
u'Foo'
>>> type(soup.find('div').string)
bs4.element.NavigableString
```

Try using `str(soup.find('div').string)` instead and see if it helps. Also, I don't think pickle is really the best format here; JSON is much easier in this case.
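Not part of the original answer, but a stdlib-only sketch of why this matters: a `NavigableString` is conceptually a `str` subclass that keeps a reference back into the parse tree, and pickle follows that reference. The `NavStr` class below is a hypothetical stand-in for bs4's type, not the real one:

```python
import pickle

# Hypothetical stand-in for bs4's NavigableString: a str subclass
# that keeps a reference back to the (large) tree it came from.
class NavStr(str):
    pass

tree = {"html": "x" * 10000}   # pretend this is the whole parsed page
s = NavStr("Foo")
s.parent = tree                # the back-reference pickle will chase

# Pickling the subclass drags the entire tree along...
with_tree = pickle.dumps(s)
# ...while converting to a plain str first keeps the pickle tiny.
plain = pickle.dumps(str(s))

print(len(with_tree), len(plain))
```

Converting to `str` (or building plain dicts/JSON) before serializing avoids dragging the whole document into every pickle.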
Most likely pickle is doing recursion internally, and the structure you are trying to serialize is too big. You could try to increase the limit on the number of recursions allowed:

```
import sys
sys.setrecursionlimit(10000)
```

This is however not recommended for any production-ready application, as it may mask the actual issue, but it can help highlight issues during debugging.
Why does this pickle reach maximum recursion depth without recursion?
[ "python", "pickle" ]
Is there a way to prevent SQL from throwing an error when I try to save a record that already exists? I've got a composite-key table for a many-to-many relationship that has only the two values. When I update a model from my application, it tries to save all records, and the records that already exist throw a `Cannot insert duplicate key` error. Is there a way of having the database ignore these, or do I have to handle it in the application?
You are calling an `INSERT` and trying to add duplicate keys. This error is by design, and essential: the DB is throwing an exception for an exceptional and erroneous condition. If you are instead trying to perform an "upsert", you may need to use a stored procedure or the [MERGE](http://technet.microsoft.com/en-us/library/bb510625.aspx) syntax. If, instead, you don't want to `UPDATE` but just want to ignore rows already in the table, then you simply need to add a filter to your `INSERT` statement, such as:

```
.... WHERE table.Key <> inserting.Key
```
Try something like this with your insert statement:

```
insert into foo (x, y)
select @x, @y
except
select x, y from foo
```

This will add a record to foo ONLY if it is not already in the table.
sql server, composite keys - ignoring duplicate
[ "sql", "sql-server", "composite-key" ]
I am getting a weird error when I try to open a file in my Python program even though they are in the same directory. Here is my code:

```
def main():
    #filename = input("Enter the name of the file of grades: ")
    file = open("g.py", "r")
    for line in file:
        points = 0
        array = line.split()
        if array[1] == 'A':
            points = array[2] * 4
        elif array[1] == 'B':
            points = array[2] * 3
        elif array[1] == 'C':
            points = array[2] * 2
        elif array[1] == 'D':
            points = array[2] * 1
        totalpoints += points
        totalpointspossible += array[2]*4
    gpa = (totalpoints/totalpointspossible)*4
    print("The GPA is ", gpa)
    file.close()

main()
```

and this is the error I am getting:

```
Traceback (most recent call last):
  File "yotam2.py", line 51, in <module>
    main()
  File "yotam2.py", line 28, in main
    file = open(g.py, "r")
NameError: global name 'g' is not defined
```

I am not quite sure why it is saying `g` is not defined, even though it is in the same directory as my Python file.
`g.py` should be a string:

```
file = open("g.py", "r")
```

Also, `array` is a list of strings. Multiplying strings by integers just duplicates them:

```
>>> "1" * 4
"1111"
```

You have to convert `array` (which isn't an array, by the way) into a list of numbers:

```
array = [int(n) for n in line.split()]
```
You might want to enclose g.py in quotes. The interpreter thinks g is a variable you are referencing rather than a filename you intend it to reference.
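Putting both fixes together, here is a possible corrected version of the question's GPA loop. This is a sketch, not the original poster's code: the function name, the grade table, and the assumed line format of `name grade credits` are all my own additions.

```python
# Assumed grade-point table; extend as needed.
GRADE_POINTS = {'A': 4, 'B': 3, 'C': 2, 'D': 1, 'F': 0}

def gpa_from_lines(lines):
    """Compute a GPA from lines of the form 'course grade credits'."""
    total_points = 0
    total_credits = 0
    for line in lines:
        fields = line.split()            # e.g. "math A 3"
        grade = fields[1]
        credits = int(fields[2])         # convert the string to a number
        total_points += GRADE_POINTS.get(grade, 0) * credits
        total_credits += credits
    return total_points / total_credits

print(gpa_from_lines(["math A 3", "history B 3"]))  # → 3.5
```

The same function works on an open file object, since iterating a file yields lines.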
Opening files in Python
[ "python", "file-io" ]
This is my first question. I am having users upload their own image to a database, where the image is stored as a BLOB. I was able to do this successfully. I am using MySQL for the database. The part I am having trouble with is displaying that BLOB as an image on the website when it's called upon. Right now only the binary data (lots of weird symbols) is being displayed. I think it's a problem with the HTTP header. Right now it's:

```
print "Content-Type: text/html"
```

I've tried:

```
print "Content-Type: image/jpeg"
```

I am using Python to connect to the database and write the HTML.

Edit: Code:

```
def showFile():
    # do SQL to retrieve blob where filename
    conn, cursor = getConnectionAndCursor()
    sql = """
    select data
    from upload
    where id=1
    """
    cursor.execute(sql)
    data = cursor.fetchone()
    blob = data[0]

    print "<hr>"
    print "This is what I'm trying"
    print """<img src="data:image/jpeg;base64,%s/>""" % data

######################################################################
if __name__ == "__main__":

    form = cgi.FieldStorage()

    if "show_file" in form:
        print "Content-Type: text/html"
        print
        printHeaders("Image upload example")
        showFile()
        printFooter()
```
The image is stored in the database in binary format, so once it comes back to the server, use the decode function to get it back to an image:

```
image.decode('base64')
```

This will convert your blob to an image.
Depending on how it's encoded, you can also possibly just use a Data URI for the image. Something like this might work if they are encoded as base64 PNGs:

```
<img src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAADIA..." />
```

As @Alok says, you might need to first convert it from a binary blob to base64, then use the Data URI.
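A small stdlib sketch of the Data-URI approach; the function name is my own and the MIME type is an assumption, so use whatever format the blob actually holds:

```python
import base64

def make_data_uri(blob, mime="image/jpeg"):
    """Turn raw image bytes into an <img> tag with a data: URI src."""
    encoded = base64.b64encode(blob).decode("ascii")
    return '<img src="data:%s;base64,%s"/>' % (mime, encoded)

# Usage with some fake "image" bytes:
tag = make_data_uri(b"\xff\xd8\xff\xe0 fake jpeg bytes")
print(tag[:40])
```

This keeps everything in one HTML response, at the cost of roughly a 33% size increase from base64.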
Converting BLOB, stored on a database, to an image on an HTML website
[ "python", "html", "database", "image", "blob" ]
I'm trying to make a simple line of code that looks for a certain element in a list and gives a True or False. I want to know if it's in the list. It can be at any position in the list, BUT if it's in the last position, it has to be at another position as well.

Examples:

```
my_list = ['abc','def','ghi','jkl','def']   # meets criteria
my_list2 = ['abc','ghi','jkl','def']        # does not meet criteria
my_search = 'def'
```

`my_list` has 'def' in the middle of the list and at the end, so that would meet the criteria; `my_list2` only has 'def' at the end, so it WOULD NOT meet the criteria.

I was trying something along the lines of:

```
(('def' in my_list) and (my_list[-1] == 'def')) or (('def' in my_list) and (my_list[-1] != 'def'))
```

but I feel like that is the long way to do it.
So you can just check if it is in the list minus the last element:

```
>>> my_list = ['abc','def','ghi','jkl','def']
>>> 'def' in my_list[:-1]
True
>>> my_list2 = ['abc','ghi','jkl','def']
>>> 'def' in my_list2[:-1]
False
```
Try using a sub-list...

```
'def' in my_list[:-1]
```
Boolean. Is string in list and not last position ? Python
[ "python", "if-statement", "boolean" ]
I am reading the Python Essential Reference, 4th ed., and I cannot figure out how to fix a problem in the following code:

```
class Account(object):
    num_accounts = 0

    def __init__(self, name, balance):
        self.name = name
        self.balance = balance
        Account.num_accounts += 1

    def __del__(self):
        Account.num_accounts -= 1

    def deposit(self, amt):
        self.balance += amt

    def withdraw(self, amt):
        self.balance -= amt

    def inquire(self):
        return self.balance

class EvilAccount(Account):
    def inquire(self):
        if random.randint(0, 4) == 1:
            return self.balance * 1.1
        else:
            return self.balance

ea = EvilAccount('Joe', 400)
```

If I understand correctly, the `ea` object goes out of scope when the program ends and the inherited `__del__` function should be called, correct? I receive a `'NoneType' object has no attribute num_accounts` error in `__del__`. Why doesn't it complain earlier, in the `__init__` function?
From [the docs](http://docs.python.org/2/reference/datamodel.html#object.__del__): > **Warning**: Due to the precarious circumstances under which `__del__()` methods are invoked, exceptions that occur during their execution are ignored, and a warning is printed to `sys.stderr` instead. **Also, when `__del__()` is invoked in response to a module being deleted (e.g., when execution of the program is done), other globals referenced by the `__del__()` method may already have been deleted or in the process of being torn down** (e.g. the import machinery shutting down). For this reason, `__del__()` methods should do the absolute minimum needed to maintain external invariants. Starting with version 1.5, Python guarantees that globals whose name begins with a single underscore are deleted from their module before other globals are deleted; if no other references to such globals exist, this may help in assuring that imported modules are still available at the time when the `__del__()` method is called.
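A quick way to see this behaviour for yourself (Python 3 shown here; the question's code is Python 2, where the wording of the report differs): an exception raised inside `__del__` is never propagated to the caller, it is only reported on `sys.stderr`.

```python
import io
import sys

class Noisy:
    def __del__(self):
        # Any exception raised here is swallowed by the interpreter
        # and merely reported on sys.stderr, never propagated.
        raise ValueError("boom from __del__")

buf = io.StringIO()
old_stderr = sys.stderr
sys.stderr = buf                 # capture what the interpreter reports
try:
    obj = Noisy()
    del obj                      # drops the last reference; CPython runs __del__ now
finally:
    sys.stderr = old_stderr

report = buf.getvalue()
print("no exception reached us; stderr said:", report.splitlines()[:1])
```

The `del obj` line completes without raising, which is exactly why the original error only shows up as a message during interpreter shutdown instead of crashing the program earlier.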
Others have answered why this happened, but as to what you should do instead, try this:

```
import weakref

class classproperty(object):
    def __init__(self, f):
        self.f = f
    def __get__(self, obj, owner):
        return self.f(owner)

class Account(object):
    _active = weakref.WeakSet()

    @classproperty
    def num_accounts(self):
        return len(Account._active)

    def __init__(self, name, balance):
        self.name = name
        self.balance = balance
        Account._active.add(self)

    def deposit(self, amt):
        self.balance += amt

    def withdraw(self, amt):
        self.balance -= amt

    def inquire(self):
        return self.balance

>>> Account.num_accounts
0
>>> a = [Account('a', 0), Account('b', 0)]
>>> Account.num_accounts
2
>>> a.pop()
<__main__.Account object at 0x02F8D650>
>>> Account.num_accounts  # Interactive session is holding onto the popped account in '_'
2
>>> Account.num_accounts  # but now it has gone.
1
```

So instead of counting how many instances exist, just keep a collection of all the current instances. The `WeakSet` won't prevent them being destroyed, so it will accurately track only the instances that are still alive. Do be aware, though, that it is very easy for instances to stick around after you think you've lost them: if anything throws an exception then all the local variables in the stack frame will remain alive until the next exception is thrown. In this case you probably also want an explicit `close()` method that you can use when someone closes the account and that explicitly removes the instance from the active set.
How to inherit __del__ function
[ "python", "inheritance", "del" ]
I'm looking at the Apple docs for Passbook, and I need to:

* create a "PKCS #7 detached signature of the manifest file".

I'd ideally like to do this with Python, and I'd ideally like to use pycrypto for the task. The trouble is, I can't find any example code on the web for how to do this. There's lots of code like this:

```
from Crypto.Cipher import PKCS1_v1_5
from Crypto.PublicKey import RSA
from Crypto.Hash import SHA

message = 'To be encrypted'
h = SHA.new(message)
key = RSA.importKey(open('pubkey.der').read())
cipher = PKCS1_v1_5.new(key)
ciphertext = cipher.encrypt(message+h.digest())
```

but not knowing enough about the specifics of PKCS #7, I don't know what I need to do... Does anyone have any ideas? Thanks
With a lot of help from [this answer](https://stackoverflow.com/a/33726421), I did it! I know the OP has probably moved past this, but this answer is nowhere else on the internet, so here's the solution, for posterity's sake!

```
from OpenSSL import crypto

with open(cert) as cert_file:
    cert_buf = cert_file.read()

with open(key) as key_file:
    key_buf = key_file.read()

pkey = crypto.load_privatekey(crypto.FILETYPE_PEM, key_buf)
signcert = crypto.load_certificate(crypto.FILETYPE_PEM, cert_buf)

bio_in = crypto._new_mem_buf(text.encode())
PKCS7_NOSIGS = 0x4  # defined in pkcs7.h
pkcs7 = crypto._lib.PKCS7_sign(signcert._x509, pkey._pkey, crypto._ffi.NULL, bio_in, PKCS7_NOSIGS)  # noqa
bio_out = crypto._new_mem_buf()
crypto._lib.i2d_PKCS7_bio(bio_out, pkcs7)
sigbytes = crypto._bio_to_string(bio_out)
```

Keep in mind, it uses some undocumented pyOpenSSL functions.
This works for me; I was trying to sign a string for NSDL:

```
from OpenSSL import crypto
import base64

try:
    p12 = crypto.load_pkcs12(open("/DSCPFX.pfx", 'rb').read(), "XXXX")
    # print("p12 : ", p12)
    signcert = p12.get_certificate()
    pkey = p12.get_privatekey()
    text = "This is the text to be signed"
    bio_in = crypto._new_mem_buf(text.encode())
    PKCS7_NOSIGS = 0x4
    pkcs7 = crypto._lib.PKCS7_sign(signcert._x509, pkey._pkey, crypto._ffi.NULL, bio_in, PKCS7_NOSIGS)
    bio_out = crypto._new_mem_buf()
    crypto._lib.i2d_PKCS7_bio(bio_out, pkcs7)
    sigbytes = crypto._bio_to_string(bio_out)
    signed_data = base64.b64encode(sigbytes)
    return SUCCESS, signed_data
except Exception as err:
    print("Exception happens in sign_data and error is: ", err)
    return 0, str(err)
```
Using pycrypto PKCS#7 to create a signature
[ "python", "pkcs#7" ]
Today I participated in Google Code Jam Round 1B. In Code Jam, there was a problem called 'Osmos': <https://code.google.com/codejam/contest/2434486/dashboard>

**Problem description**

The problem states that there's a game where the player is a thing that can only eat things smaller than it, and that grows by the size of the thing it ate. For example, if the player has size 10 and eats something of size 8, it becomes size 18. Now, given the size the player starts off at and the sizes of all the other things in the game, you should give the minimum number of operations required to make the game beatable, meaning that you're eventually able to eat everything. An operation can either be adding some thing or removing some thing.

**The code I used**

`write_case` is a function that writes the output for every test case in the right format. I've used it in other Code Jam rounds, and I know it is correct; `inp` is the file object of the input file.

```
cases = int(inp.readline())
for case in xrange(cases):
    # armin is the variable containing the size of the player,
    armin, n = int(inp.readline.split()[0])
    # others is a list of sizes of other things
    others = [int(x) for x in inp.readline().split()]
    # changes obviously holds the number of required changes
    changes = 0
    for other in sorted(others):  # we loop over all the other things in order.
        if other < armin:  # if the other thing is smaller, eat it, no change needed.
            armin += other
        else:  # we'll need to make some changes
            # adding is the biggest size thing the player can eat,
            adding = armin - 1
            if other < armin + adding:  # if adding such a thing is good enough
                                        # to eat the other thing
                armin += adding + other  # add it and eat them
                changes += 1  # we've made a change.
            else:
                # we can't add a big enough thing
                # we have to delete it from the game (skip it from the loop)
                # which is a change
                changes += 1
    write_case(case + 1, changes)  # output the changes.
```

**My logic behind it**

By looping over the other things from small to large, the player first eats everything it can normally eat. When we encounter something we can't eat, we've already eaten everything smaller than it, so we'll have to add a new thing so we can grow. If we're adding something new, we might as well make it as big as possible, since the size of the thing we add doesn't change the number of operations. The largest eatable thing I can add is the player's size minus 1. If that's enough to eat the next thing, we add it, eat the thing we added, and then eat the thing we previously couldn't eat. If the addition wouldn't be enough, we don't add it and just delete (ignore) the current thing. At this point I could just break from the loop and add the number of remaining things to the operation count to speed up my solution (they'll all be too large to eat, since the list is sorted), but that wouldn't change the outcome.

This code correctly computes the values for the sample input, but it's incorrect for the contest input. Any idea why?
My approach was that each time I found a block, I figured out how many additions were required to continue. I then built a log of the number of additions and the number remaining. After I had completed the set, I looped through the log in reverse to determine whether it was more efficient to add motes to continue or to remove the remaining motes at each block point. Looking at this now, I can see a number of places I could optimize, but I passed both the small and large inputs with the C# code below.

```
protected string Solve(string Line1, string Line2)
{
    string[] Inputs = Line1.Split();
    uint A = uint.Parse(Inputs[0]);
    byte N = byte.Parse(Inputs[1]);
    Inputs = Line2.Split();
    List<uint> Motes = new List<uint>(N);
    foreach (string Size in Inputs)
    {
        Motes.Add(uint.Parse(Size));
    }
    Motes.Sort();
    List<Action> Actions = new List<Action>();
    while (Motes.Count > 0)
    {
        if (A > Motes[0])
        {
            A += Motes[0];
            Motes.RemoveAt(0);
        }
        else if (A > 1)
        {
            uint I;
            for (I = 0; A <= Motes[0]; I++)
            {
                A = (A << 1) - 1;
            }
            Actions.Add(new Action(I, Motes.Count));
        }
        else
        {
            Actions.Add(new Action(101, Motes.Count));
            break;
        }
    }
    uint TotalInserts = 0;
    int TotalRemoved = 0;
    for (int I = Actions.Count - 1; I >= 0; I--)
    {
        int StepRemaining = Actions[I].Remaining - TotalRemoved;
        uint StepInsert = Actions[I].Inserts;
        if (StepInsert >= StepRemaining)
        {
            TotalRemoved += StepRemaining;
            TotalInserts = 0;
        }
        else
        {
            TotalInserts += StepInsert;
            if (TotalInserts >= Actions[I].Remaining)
            {
                TotalRemoved = Actions[I].Remaining;
                TotalInserts = 0;
            }
        }
    }
    return (TotalInserts + TotalRemoved).ToString();
}

struct Action
{
    public uint Inserts;
    public int Remaining;

    public Action(uint inserts, int remaining)
    {
        Inserts = inserts;
        Remaining = remaining;
    }
}
```
Here's what I thought the key points were:

1. It's never desirable to remove from the center of the list; that's a wasted operation. Consider `armin, others == 2, [1, 10, 11]`: removing the `10` will never make the `11` easier to get to.
2. Given this, it's only valid to remove all the remaining items in the list. Therefore, if you have to add more items than there are remaining in the list to progress to the next element, it's more efficient just to remove those large ones.

My ~~correct~~ solution was implemented as:

```
def solve(armin_size, sizes):
    sizes = sorted(sizes)
    steps = 0
    for i, size in enumerate(sizes):
        add_count = 0
        remove_count = len(sizes) - i
        while armin_size <= size:
            armin_size += armin_size - 1
            add_count += 1
        if add_count >= remove_count:
            return steps + remove_count
        armin_size += size
        steps += add_count
    return steps
```

**EDIT:** Just checked the scoreboard again - I got it wrong. Whoops.
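For completeness, here is a hedged sketch of the usual fix, my own rather than either answer's code: track the adds made so far and, after every eaten mote, compare against simply removing everything that remains, while also guarding the player-size-1 case, where growing is impossible (`size + size - 1 == size`). The test values below are hand-worked examples, not the contest data.

```python
def min_operations(armin, motes):
    """Minimum adds/removes so a player of size `armin` can eat every mote."""
    motes = sorted(motes)
    adds = 0
    best = len(motes)                  # option 0: just remove every mote
    for i, mote in enumerate(motes):
        if armin <= mote:
            if armin <= 1:             # size 1 can never grow; removal is the only option
                break
            while armin <= mote:       # add motes of size armin-1 until we can eat
                armin += armin - 1
                adds += 1
        armin += mote                  # eat it
        # having eaten motes 0..i, we could still delete the rest instead
        best = min(best, adds + len(motes) - i - 1)
    return best
```

The key difference from the question's greedy is that the add-versus-remove decision is deferred: every prefix "eat this far, delete the rest" is costed, and the cheapest wins.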
What's incorrect about my solution in this google code jam exercise?
[ "python", "algorithm" ]
I have 3 lists, one for hours, one for minutes, and one for seconds. What I have done is create a function that will take the 3 lists as input and calculate the total amount of time. My issue is that the function is so redundant, and my question to you is simply: what is a better way to do this? Here is my function:

```
def final_time(hours, minutes, seconds):
    draft_hours = sum(hours)
    draft_minutes = sum(minutes)
    draft_seconds = sum(seconds)

    adding_seconds = str(draft_seconds/60.0)
    second_converting = adding_seconds.split(".")
    seconds_to_minutes = int(second_converting[0])
    seconds_to_seconds = draft_seconds - (seconds_to_minutes * 60)
    total_seconds = str(seconds_to_seconds)

    more_minutes = draft_minutes + seconds_to_minutes
    adding_minutes = str(more_minutes/60.0)
    minute_converting = adding_minutes.split(".")
    minutes_to_hours = int(minute_converting[0])
    minutes_to_minutes = more_minutes - (minutes_to_hours * 60)
    total_minutes = str(minutes_to_minutes)

    total_hours = str(draft_hours + minutes_to_hours)

    return total_hours + " hours, " + total_minutes + " minutes, and " + total_seconds + " seconds."
```

Here is an example:

```
my_hours = [5, 17, 4, 8]
my_minutes = [40, 51, 5, 24]
my_seconds = [55, 31, 20, 33]

print final_time(my_hours, my_minutes, my_seconds)
```

The above returns:

```
36 hours, 2 minutes, and 19 seconds.
```

So it does work, but as you can see, the function is neither a pythonic nor an efficient function... **what would a better method be?**
```
s = sum(hours)*3600 + sum(minutes)*60 + sum(seconds)
return '%d hours %d minutes %d seconds' % (s/3600, (s%3600)/60, s%60)
```
I would probably convert it all to seconds first:

```
seconds = sum(3600*h + 60*m + s for (h, m, s) in zip(hours, minutes, seconds))
```

Now break it back down:

```
n_hours, minutes = divmod(seconds, 3600)
n_minutes, n_seconds = divmod(minutes, 60)
```
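Combining both answers into one runnable function (a sketch of my own, not the original poster's code), which reproduces the example result from the question:

```python
def final_time(hours, minutes, seconds):
    # Collapse everything to seconds, then split back out with divmod.
    total = sum(hours) * 3600 + sum(minutes) * 60 + sum(seconds)
    h, rest = divmod(total, 3600)   # whole hours, leftover seconds
    m, s = divmod(rest, 60)         # whole minutes, leftover seconds
    return "%d hours, %d minutes, and %d seconds." % (h, m, s)

my_hours = [5, 17, 4, 8]
my_minutes = [40, 51, 5, 24]
my_seconds = [55, 31, 20, 33]
print(final_time(my_hours, my_minutes, my_seconds))  # → 36 hours, 2 minutes, and 19 seconds.
```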
converting a list of times to total time
[ "python", "list", "function", "time" ]
I have some problems with the pandas `apply` function when using multiple columns with the following dataframe

```
df = DataFrame({'a': np.random.randn(6),
                'b': ['foo', 'bar'] * 3,
                'c': np.random.randn(6)})
```

and the following function

```
def my_test(a, b):
    return a % b
```

When I try to apply this function with:

```
df['Value'] = df.apply(lambda row: my_test(row[a], row[c]), axis=1)
```

I get the error message:

```
NameError: ("global name 'a' is not defined", u'occurred at index 0')
```

I do not understand this message; I defined the name properly. I would highly appreciate any help on this issue.

Update

Thanks for your help. I made indeed some syntax mistakes with the code; the index should be put in `''`. However, I still get the same issue using a more complex function such as:

```
def my_test(a):
    cum_diff = 0
    for ix in df.index():
        cum_diff = cum_diff + (a - df['a'][ix])
    return cum_diff
```
Seems you forgot the `''` of your string.

```
In [43]: df['Value'] = df.apply(lambda row: my_test(row['a'], row['c']), axis=1)

In [44]: df
Out[44]:
          a    b         c     Value
0 -1.674308  foo  0.343801  0.044698
1 -2.163236  bar -2.046438 -0.116798
2 -0.199115  foo -0.458050 -0.199115
3  0.918646  bar -0.007185 -0.001006
4  1.336830  foo  0.534292  0.268245
5  0.976844  bar -0.773630 -0.570417
```

BTW, in my opinion, the following way is more elegant:

```
In [53]: def my_test2(row):
   ....:     return row['a'] % row['c']
   ....:

In [54]: df['Value'] = df.apply(my_test2, axis=1)
```
If you just want to compute (column a) % (column c), you don't need `apply`; just do it directly:

```
In [7]: df['a'] % df['c']
Out[7]:
0   -1.132022
1   -0.939493
2    0.201931
3    0.511374
4   -0.694647
5   -0.023486
Name: a
```
Why isn't my Pandas 'apply' function referencing multiple columns working?
[ "python", "python-2.7", "pandas", "dataframe", "apply" ]
I have a webpage from which I get the text using the resources module in Python. But I can't work out how to find a pattern of numbers like 126.23.73.34 in the document and extract it using the `re` module.
You can use the regex `\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}` for IPs:

```
text = "126.23.73.34"
match = re.search(r'\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}', text)
if match:
    print "match.group(0) : ", match.group(0)
```

If you are looking for a complete regex to get IPv4 addresses, you can find the most appropriate regex [here](http://www.regular-expressions.info/examples.html). To restrict all 4 numbers in the IP address to 0-255, you can use this one, taken from the source above:

```
(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)
```
If it is HTML text, you could use an HTML parser (such as [`BeautifulSoup`](http://www.crummy.com/software/BeautifulSoup/bs4/doc/)) to parse it, a regex to select strings that look like an IP (and more), and the `socket` module to validate the candidates:

```
import re
import socket

from bs4 import BeautifulSoup  # pip install beautifulsoup4

def isvalid(addr):
    try:
        socket.inet_aton(addr)
    except socket.error:
        return False
    else:
        return True

soup = BeautifulSoup(webpage)
ipre = re.compile(r"\b\d+(?:\.\d+){3}\b")  # matches some ips and more
ip_addresses = [ip for ips in map(ipre.findall, soup(text=ipre))
                for ip in ips if isvalid(ip)]
```

Note: it extracts IPs only from text, e.g., it ignores IPs in HTML attributes.
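If pulling in BeautifulSoup is overkill, a stdlib-only variant of the same idea (a loose regex for candidates, then `socket.inet_aton` to validate) might look like this; the function name and the sample text are made up:

```python
import re
import socket

def find_ips(text):
    """Return dotted-quad candidates from text that are valid IPv4 addresses."""
    candidates = re.findall(r"\b\d{1,3}(?:\.\d{1,3}){3}\b", text)
    valid = []
    for addr in candidates:
        try:
            socket.inet_aton(addr)   # rejects octets > 255, etc.
        except OSError:              # socket.error is an alias of OSError in Python 3
            continue
        valid.append(addr)
    return valid

sample = "server at 126.23.73.34, bogus 999.300.1.1, gateway 10.0.0.1"
print(find_ips(sample))  # → ['126.23.73.34', '10.0.0.1']
```

Letting the regex stay loose and validating afterwards keeps the pattern readable compared with the full 0-255 regex above.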
How do I find a pattern like 252.63.71.62 in a text in Python with re(gex)?
[ "python", "regex" ]
I am using a switch statement in Access 2007, and I want to know how I can specify a default value:

```
select switch
(
    MyCol = 1, 'Value is One',
    MyCol = 2, 'Value is Two'
)
from MyTable
```

Thanks
```
SELECT Switch(MyTable.[MyCol]='1','Terrestrial',
              MyTable.[MyCol]='2','Value is two',
              MyTable.[MyCol]='3','Value is three',
              True,'Error') AS ColumnName
FROM MyTable;
```
```
select switch
(
    MyCol = 1, 'Value is One',
    MyCol = 2, 'Value is Two',
    True, "Default"
)
from MyTable
```

**Refer:** <http://www.utteraccess.com/forum/Switch-statement-default-t453140.html>
default value for switch in access 2007
[ "sql", "ms-access", "ms-access-2007" ]
I have a scenario like this:

```
a = ["file1","file2","file3","file1","file2","file1","file5"]
set_flag = 0
for i in range(len(a)):
    file_name = '%s.txt' % (a[i])
    file_write = open('%s' % (file_name), 'w')
```

The above works and writes the files. However, I want to include logic such that even though some file names appear more than once in the above list, the file write happens only once; there should not be multiple file writes. E.g. if `file1` appears 4 times, it should only be written once. Along with that, `set_flag` should be set to, say, 1 so that if I try writing `file1` anywhere else in my code, the write is bypassed. Any ideas how to accomplish this and set such a flag?
Use `set()`; it only stores a single copy of an item:

```
>>> a = ["file1","file2","file3","file1","file2","file1","file5"]
>>> set(a)
set(['file3', 'file2', 'file1', 'file5'])
```

For your code, this will maintain the order as well:

```
a = ["file1","file2","file3","file1","file2","file1","file5"]
seen = set()
for file in a:              # you can iterate over a list itself
    if file not in seen:    # if file is not in the set then write
        file_name = '%s.txt' % (file)
        file_write = open('%s' % (file_name), 'w')
        seen.add(file)      # add file to seen
```
A set is a good idea, so you just iterate through the unique file names in your list. Also, don't use `range(len(...))`, and some other cleanups:

```
a = ["file1","file2","file3","file1","file2","file1","file5"]
set_flag = 0
for file_name in set(a):
    file_write = open(file_name + '.txt', 'w')
```
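A self-contained demo of the seen-set idea from the answers above (written for Python 3 and using a temporary directory so it can run anywhere without littering files):

```python
import os
import tempfile

names = ["file1", "file2", "file3", "file1", "file2", "file1", "file5"]

with tempfile.TemporaryDirectory() as tmpdir:
    seen = set()
    for name in names:
        if name in seen:
            continue                 # already written once: skip it
        with open(os.path.join(tmpdir, name + ".txt"), "w") as fh:
            fh.write("written once\n")
        seen.add(name)
    written = sorted(os.listdir(tmpdir))

print(written)  # → ['file1.txt', 'file2.txt', 'file3.txt', 'file5.txt']
```

The `seen` set plays the role of the question's `set_flag`: membership in it is the "already written" flag, checked anywhere in the program.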
Writing files only once
[ "", "python", "" ]
I have the problem that I get a set of pictures and need to classify those. The thing is, I do not really have any knowledge of these images. So I plan on using as many descriptors as I can find and then do a PCA on those to identify only the descriptors that are of use to me. I can do supervised learning on a lot of datapoints, if that helps. However, there is a chance that pictures are connected to each other, meaning there could be a development from Image X to Image X+1, although I kind of hope this gets sorted out with the information in each image. My questions are: 1. How do I do this best when using Python? (I want to make a proof of concept first, where speed is a non-issue.) What libraries should I use? 2. Are there examples already for an image classification of this kind? Examples of using a bunch of descriptors and cooking them down via PCA? This part is kind of scary for me, to be honest, although I think Python should already do something like this for me. Edit: I have found a neat kit that I am currently trying out for this: <http://scikit-image.org/> There seem to be some descriptors in there. Is there a way to do automatic feature extraction and rank the features according to their descriptive power towards the target classification? PCA should be able to rank automatically. Edit 2: I have my framework for the storage of the data a bit more refined now. I will be using the FAT filesystem as a database. I will have one folder for each instance of a combination of classes. So if an image belongs to classes 1 and 2, there will be a folder img12 that contains those images. This way I can better control the amount of data I have for each class. Edit 3: I found an example of a library (sklearn) for Python that does some sort of what I want to do. It is about recognizing hand-written digits. I am trying to convert my dataset into something that I can use with this.
Here is the example I found using sklearn: ``` import pylab as pl # Import datasets, classifiers and performance metrics from sklearn import datasets, svm, metrics # The digits dataset digits = datasets.load_digits() # The data that we are interested in is made of 8x8 images of digits, # let's have a look at the first 3 images, stored in the `images` # attribute of the dataset. If we were working from image files, we # could load them using pylab.imread. For these images we know which # digit they represent: it is given in the 'target' of the dataset. for index, (image, label) in enumerate(zip(digits.images, digits.target)[:4]): pl.subplot(2, 4, index + 1) pl.axis('off') pl.imshow(image, cmap=pl.cm.gray_r, interpolation='nearest') pl.title('Training: %i' % label) # To apply a classifier on this data, we need to flatten the image, to # turn the data in a (samples, feature) matrix: n_samples = len(digits.images) data = digits.images.reshape((n_samples, -1)) # Create a classifier: a support vector classifier classifier = svm.SVC(gamma=0.001) # We learn the digits on the first half of the digits classifier.fit(data[:n_samples / 2], digits.target[:n_samples / 2]) # Now predict the value of the digit on the second half: expected = digits.target[n_samples / 2:] predicted = classifier.predict(data[n_samples / 2:]) print("Classification report for classifier %s:\n%s\n" % (classifier, metrics.classification_report(expected, predicted))) print("Confusion matrix:\n%s" % metrics.confusion_matrix(expected, predicted)) for index, (image, prediction) in enumerate( zip(digits.images[n_samples / 2:], predicted)[:4]): pl.subplot(2, 4, index + 5) pl.axis('off') pl.imshow(image, cmap=pl.cm.gray_r, interpolation='nearest') pl.title('Prediction: %i' % prediction) pl.show() ```
You can convert a picture to a vector of pixels, and perform PCA on that vector. This might be easier than trying to manually find descriptors. You can use NumPy and SciPy in Python. For example: ``` import scipy.io from numpy import * #every row in the *.mat file is 256*256 numbers representing gray scale values #for each pixel in an image. i.e. if XTrain.mat has 1000 lines then each line #will be made up of 256*256 numbers and there would be 1000 images in the file. #The following loads the image into a sciPy matrix where each row is a vector #of length 256*256, representing an image. This code will need to be switched #out if you have a different method of storing images. Xtrain = scipy.io.loadmat('Xtrain.mat')["Xtrain"] Ytrain = scipy.io.loadmat('Ytrain.mat')["Ytrain"] Xtest = scipy.io.loadmat('Xtest.mat')["Xtest"] Ytest = scipy.io.loadmat('Ytest.mat')["Ytest"] learn(Xtest,Xtrain,Ytest,Ytrain,5) #this lowers the dimension from 256*256 to 5 def learn(testX,trainX,testY,trainY,n): pcmat = PCA(trainX,n) lowdimtrain=mat(trainX)*pcmat #lower the dimension of trainX lowdimtest=mat(testX)*pcmat #lower the dimension of testX #run some learning algorithm here using the low dimension matrices for example trainset = [] knnres = KNN(lowdimtrain, trainY, lowdimtest ,k) numloss=0 for i in range(len(knnres)): if knnres[i]!=testY[i]: numloss+=1 return numloss def PCA(Xparam, n): X = mat(Xparam) Xtranspose = X.transpose() A=Xtranspose*X return eigs(A,n) def eigs(M,k): [vals,vecs]=LA.eig(M) return LM2ML(vecs[:k]) def LM2ML(lm): U=[[]] temp = [] for i in lm: for j in range(size(i)): temp.append(i[0,j]) U.append(temp) temp = [] U=U[1:] return U ``` In order to classify your image you can use k-nearest neighbors, i.e. you find the k nearest images and label your image by majority vote over the k nearest images.
For example: ``` def KNN(trainset, Ytrainvec, testset, k): eucdist = scidist.cdist(testset,trainset,'sqeuclidean') res=[] for dists in eucdist: distup = zip(dists, Ytrainvec) minVals = [] sumLabel=0; for it in range(k): minIndex = index_min(dists) (minVal,minLabel) = distup[minIndex] del distup[minIndex] dists=numpy.delete(dists,minIndex,0) if minLabel == 1: sumLabel+=1 else: sumLabel-=1 if(sumLabel>0): res.append(1) else: res.append(0) return res ```
I know I'm not answering your question directly, but images vary greatly: remote sensing, objects, scenes, fMRI, biomedical, faces, etc. It would help if you narrowed your categorization a bit and let us know. What descriptors are you computing? Most of the code I use (as well as the computer vision community's) is in MATLAB, not in Python, but I'm sure there is similar code available (the pycv module & <http://www.pythonware.com/products/pil/>). Try out this descriptor benchmark that has precompiled state-of-the-art code from the people at MIT: <http://people.csail.mit.edu/jxiao/SUN/> Try looking at GIST, HOG and SIFT; those are pretty standard depending on what you want to analyze: scenes, objects or points respectively.
Classifiying a set of Images into Classes
[ "", "python", "image", "classification", "descriptor", "pca", "" ]
I have an array of distances called `dists`. I want to select `dists` which are within a range. ``` dists[(np.where(dists >= r)) and (np.where(dists <= r + dr))] ``` However, this selects only for the condition ``` (np.where(dists <= r + dr)) ``` If I do the commands sequentially by using a temporary variable it works fine. Why does the above code not work, and how do I get it to work?
The best way in **your particular case** would just be to change your two criteria to one criterion: ``` dists[abs(dists - r - dr/2.) <= dr/2.] ``` It only creates one boolean array, and in my opinion is easier to read because it says, *is `dist` within a `dr` of `r`?* (Though I'd redefine `r` to be the center of your region of interest instead of the beginning, so `r = r + dr/2.`) But that doesn't answer your question. --- **The answer to your question:** You don't actually need `where` if you're just trying to filter out the elements of `dists` that don't fit your criteria: ``` dists[(dists >= r) & (dists <= r+dr)] ``` Because the `&` will give you an elementwise `and` (the parentheses are necessary). Or, if you do want to use `where` for some reason, you can do: ``` dists[(np.where((dists >= r) & (dists <= r + dr)))] ``` --- **Why:** The reason it doesn't work is because `np.where` returns a list of indices, not a boolean array. You're trying to get `and` between two lists of numbers, which of course doesn't have the `True`/`False` values that you expect. If `a` and `b` are both truthy values, then `a and b` returns `b`. So saying something like `[0,1,2] and [2,3,4]` will just give you `[2,3,4]`. 
Here it is in action: ``` In [230]: dists = np.arange(0,10,.5) In [231]: r = 5 In [232]: dr = 1 In [233]: np.where(dists >= r) Out[233]: (array([10, 11, 12, 13, 14, 15, 16, 17, 18, 19]),) In [234]: np.where(dists <= r+dr) Out[234]: (array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]),) In [235]: np.where(dists >= r) and np.where(dists <= r+dr) Out[235]: (array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]),) ``` What you were expecting to compare was simply the boolean array, for example ``` In [236]: dists >= r Out[236]: array([False, False, False, False, False, False, False, False, False, False, True, True, True, True, True, True, True, True, True, True], dtype=bool) In [237]: dists <= r + dr Out[237]: array([ True, True, True, True, True, True, True, True, True, True, True, True, True, False, False, False, False, False, False, False], dtype=bool) In [238]: (dists >= r) & (dists <= r + dr) Out[238]: array([False, False, False, False, False, False, False, False, False, False, True, True, True, False, False, False, False, False, False, False], dtype=bool) ``` Now you can call `np.where` on the combined boolean array: ``` In [239]: np.where((dists >= r) & (dists <= r + dr)) Out[239]: (array([10, 11, 12]),) In [240]: dists[np.where((dists >= r) & (dists <= r + dr))] Out[240]: array([ 5. , 5.5, 6. ]) ``` Or simply index the original array with the boolean array using [fancy indexing](http://docs.scipy.org/doc/numpy/reference/arrays.indexing.html) ``` In [241]: dists[(dists >= r) & (dists <= r + dr)] Out[241]: array([ 5. , 5.5, 6. ]) ```
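The whole round trip above can be condensed into a short runnable check (assuming NumPy is installed):

```python
import numpy as np

dists = np.arange(0, 10, 0.5)
r, dr = 5, 1

# elementwise AND of two boolean arrays, then fancy indexing
selected = dists[(dists >= r) & (dists <= r + dr)]
print(selected)  # [5.  5.5 6. ]
```

The 0.5 step is exactly representable in binary floating point, so the comparisons here are exact.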
The accepted answer explained the problem well enough. However, the more Numpythonic approach for applying multiple conditions is to use [numpy logical functions](http://docs.scipy.org/doc/numpy/reference/routines.logic.html). In this case, you can use `np.logical_and`: ``` np.where(np.logical_and(np.greater_equal(dists,r),np.less_equal(dists,r + dr))) ```
Numpy where function multiple conditions
[ "", "python", "numpy", "" ]
`list = ['12345','23456']` I have a script `"test.py"`, and I need to pass the values in the given list above as parameters to this script, with `"pick"` as an option. Can anyone provide input on how this can be done? The final goal is to run the script like the following: ``` test.py pick 12345 23456 ```
Use subprocess ``` import subprocess lst = ['12345','23456'] cmd = ['test.py', 'pick'] cmd.extend(lst) subprocess.call(cmd) ``` Try this code. This will invoke the script test.py with args pick 12345 23456
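The argument vector built above can be checked without spawning anything; `extend` and list concatenation give the same result. A sketch, with the `subprocess.call` left commented out since `test.py` is a hypothetical script:

```python
lst = ['12345', '23456']

cmd = ['test.py', 'pick']
cmd.extend(lst)               # same as: cmd = ['test.py', 'pick'] + lst

print(cmd)  # ['test.py', 'pick', '12345', '23456']
# subprocess.call(cmd)  # would run: test.py pick 12345 23456
```

Passing the command as a list (rather than one string) means no shell quoting is needed for the arguments.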
You should parse the arguments with sys.argv <http://docs.python.org/2/library/sys.html#sys.argv> If you want to run the script from another script you can use [os.system](http://docs.python.org/2/library/os.html#os.system) ``` os.system("script2.py 1") ```
Passing List Values as Parameters
[ "", "python", "" ]
Can someone please explain? ``` import numpy a = ([1,2,3,45]) b = ([6,7,8,9,10]) numpy.savetxt('test.txt',(a,b)) ``` This script saves the data fine. But when I run it through a loop, it prints everything yet does not save everything. Why? ``` import numpy a = ([1,2,3,4,5]) b = ([6,7,8,9,10]) for i,j in zip(a,b): print i,j numpy.savetxt('test.txt',(i,j)) ```
You overwrite the previous data each time you call `numpy.savetxt()`. A solution, using a temporary buffer list: ``` import numpy a = ([1,2,3,4,5]) b = ([6,7,8,9,10]) out = [] for i,j in zip(a,b): print i,j out.append( (i,j) ) numpy.savetxt('test.txt',out) ```
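The overwrite behavior is not specific to NumPy; opening a file with mode `'w'` truncates it every time, which a plain-Python version makes easy to see (a hypothetical temp file is used so the snippet is self-contained):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), 'test.txt')

# each open(..., 'w') inside the loop truncates the file first
for i, j in zip([1, 2, 3], [6, 7, 8]):
    with open(path, 'w') as f:
        f.write('%d %d\n' % (i, j))

with open(path) as f:
    content = f.read()
print(repr(content))  # '3 8\n', only the last iteration survives
```

Buffering the rows and writing once (or opening in append mode) avoids the truncation.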
`numpy.savetxt` will overwrite the previously written file, so you only get the result of the last iteration.
savetxt save only last loop data
[ "", "python", "numpy", "" ]
I am creating a COM client within a thread and performing several operations with this client. Each thread is spawned from a server that uses Python's `socketserver` module, which has built-in threading support. When I load and use this COM object there is an expected spike in memory usage by python.exe. With 10 concurrent threads there is a peak in memory usage of about 500Mb. However, when the operations are finished and the COM object is apparently released, the process uses 50Mb more memory than before. If I then spawn 10 additional threads using the same server, python.exe uses an additional 13Mb after those COM objects are closed. Eventually, every 10 additional concurrent threads add approximately 6Mb after they are done. When I end the entire python.exe process, all the memory is released. I have simplified the following code to mimic how the socketserver uses `threading`, and the problem is exactly the same. ``` import win32com.client import threading import pythoncom def CreateTom(): pythoncom.CoInitialize() tom = win32com.client.Dispatch("TOM.Document") tom.Dataset.Load("FileName") tom.Clear() pythoncom.CoUninitialize() for i in range(50): t = threading.Thread(target = CreateTom) t.daemon = False t.start() ``` I understand that it is unlikely I will get any support here around the specific COM library (it is an IBM product used in Market Research known as the TablesObjectModel). However, I want to know if there is anything, ANYTHING, additional that I can do to release this memory. I have read about Apartments in COM, but it sounds like pythoncom.CoInitialize should take care of this for me. Any help would be appreciated.
As it turns out this increase in Memory was in fact due to the COM object written in .NET and had nothing to do with threading. [Here](http://www.itwriting.com/dotnetmem.php) is a detailed description of Task Manager giving misleading information about memory usage for .NET apps. To resolve this issue I added the following to my code and I am all set. Hopefully someone else reads this response before they start tearing their hair out trying to find a memory leak in their code. ``` from win32process import SetProcessWorkingSetSize from win32api import GetCurrentProcessId, OpenProcess from win32con import PROCESS_ALL_ACCESS import win32com.client import threading import pythoncom def CreateTom(): pythoncom.CoInitialize() tom = win32com.client.Dispatch("TOM.Document") tom.Dataset.Load("FileName") tom.Clear() pythoncom.CoUninitialize() SetProcessWorkingSetSize(handle,-1,-1) #Releases memory after every use pid = GetCurrentProcessId() handle = OpenProcess(PROCESS_ALL_ACCESS, True, pid) for i in range(50): t = threading.Thread(target = CreateTom) t.daemon = False t.start() ```
here it is link may help you [release COM in python win32](http://mail.python.org/pipermail/python-win32/2005-August/003661.html)
Memory Leak in Threaded COM Object with Python
[ "", "python", "multithreading", "com", "memory-leaks", "win32com", "" ]
We have some objects with properties ``` class my_object: def __init__(self, type, name): self.type = type self.name = name ``` And a list which contains many object with different `type` and `name` values. What I need is a comprehension which does something like: ``` if my_object.type == 'type 1' in object_list: object_list.remove(all objects with name == 'some_name') ```
I think what you need is : ``` if any(obj.type == 'type 1' for obj in object_list): object_list = [obj for obj in object_list if obj.name != 'some_name'] ```
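A runnable version of the approach above, with example data written out; `MyObject` replaces the question's `my_object` only to follow class-naming conventions:

```python
class MyObject(object):
    def __init__(self, type_, name):
        self.type = type_
        self.name = name

object_list = [MyObject('type 1', 'some_name'),
               MyObject('type 1', 'other'),
               MyObject('type 2', 'some_name')]

# if any object has type 'type 1', drop every object named 'some_name'
if any(obj.type == 'type 1' for obj in object_list):
    object_list = [obj for obj in object_list if obj.name != 'some_name']

print([(o.type, o.name) for o in object_list])  # [('type 1', 'other')]
```

`any()` short-circuits on the first match, so the presence check stops as soon as a `'type 1'` object is found.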
I think you are looking for: ``` object_list = filter(lambda x: x.name != 'some_name', object_list) ```
How to check if an object in a list has a certain property
[ "", "python", "list", "" ]
I'm trying to do a join using [SQL::Abstract::More](https://metacpan.org/) that has an `and and then a literal value, not on a table column. ``` =>{table.table_id=table_id,table_log.date>table.date,table_log.event_id=1} gd_audit_log ``` the resulting output that I want ``` LEFT OUTER JOIN table_log ON ( table_log.date > table.date AND table.table_id = table_log.table_id AND table_log.event_id = 1 ) ``` this code works except for ``` AND table_log.event_id = 1 ``` the error is ``` ... failed: Unknown column 'table_log.1' in 'on clause' ``` obviously it's generating the wrong SQL, what I'm trying to figure out is how to get it to generate the SQL I need.
From [RT Bug 84972](https://rt.cpan.org/Public/Bug/Display.html?id=84972). To insert a literal value, you need to use the hashref syntax, instead of the string syntax : ``` my $result = $sqla->join( 'table', { operator => '=>', condition => { '%1$s.table_id' => {-ident => '%2$s.table_id'}, '%2$s.date' => {'>' => {-ident => '%1$s.date'}}, '%2$s.event_id' => 1}}, 'table_log' ); ```
Seems to me that `table_log.event_id = 1` isn't a valid `join` clause, but should be in a `where` clause.
join with AND id=1 with SQL::Abstract::More
[ "", "sql", "perl", "" ]
I am trying out SQLAlchemy and I am using this connection string: ``` engine = create_engine('sqlite:///C:\\sqlitedbs\\database.db') ``` Does SQLAlchemy create an SQLite database if one is not already present in a directory it was supposed to fetch the database file?
Yes, SQLAlchemy does create a database for you. I confirmed it on Windows using this code ``` from sqlalchemy import create_engine, ForeignKey from sqlalchemy import Column, Date, Integer, String from sqlalchemy.ext.declarative import declarative_base engine = create_engine('sqlite:///C:\\sqlitedbs\\school.db', echo=True) Base = declarative_base() class School(Base): __tablename__ = "woot" id = Column(Integer, primary_key=True) name = Column(String) def __init__(self, name): self.name = name Base.metadata.create_all(engine) ```
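The auto-create behavior ultimately comes from SQLite itself, which can be seen with just the standard-library `sqlite3` module (a temp directory is used here so the snippet is self-contained):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), 'school.db')
assert not os.path.exists(path)

conn = sqlite3.connect(path)  # creates the file if it does not exist
conn.execute('CREATE TABLE woot (id INTEGER PRIMARY KEY, name TEXT)')
conn.commit()
conn.close()

print(os.path.exists(path))  # True
```

SQLAlchemy's SQLite dialect simply delegates to this driver, which is why `create_engine` plus `create_all` produces the file.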
As others have posted, SQLAlchemy will do this automatically. I encountered this error, however, when I [didn't use enough slashes!](https://docs.sqlalchemy.org/en/13/core/engines.html#sqlite) I used `SQLALCHEMY_DATABASE_URI="sqlite:///path/to/file.db"` when I should have used four slashes: `SQLALCHEMY_DATABASE_URI="sqlite:////path/to/file.db"`
Creating SQLite database if it doesn't exist
[ "", "python", "sqlite", "sqlalchemy", "" ]
Here is the output I am wanting: ``` level1 = {'value1':0, 'value2':0, 'value3':0} level2 = {'value1':0, 'value2':0, 'value3':0} level3 = {'value1':0, 'value2':0, 'value3':0} level3 = {'value1':0, 'value2':0, 'value3':0} ``` Note: Value1, Value2, and Value3 are all the same thing. I am using this to just populate the dictionaries. Here is what I am trying: ``` for x in range (1,6): level = 'level%d' % x for iteration in range(1, 4): value = 'value%d' % iteration level = {} level['value'] = 0 ```
Are you trying something like this?: ``` dic={} for x in range (1,6): level = 'level%d' % x dic[level] = {} for iteration in range(1, 4): value = 'value%d' % iteration dic[level][value] = 0 print dic ``` Output: ``` {'level1': {'value1': 0, 'value2': 0, 'value3': 0}, 'level2': {'value1': 0, 'value2': 0, 'value3': 0}, 'level3': {'value1': 0, 'value2': 0, 'value3': 0}, 'level4': {'value1': 0, 'value2': 0, 'value3': 0}, 'level5': {'value1': 0, 'value2': 0, 'value3': 0}} ```
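One related pitfall worth noting: building the same shape with `dict.fromkeys` and a mutable default shares a single inner dict across every key, which is exactly why a fresh inner dict must be created per key, as the loop above does:

```python
# fromkeys: all three levels point at the SAME inner dict
shared = dict.fromkeys(['level1', 'level2', 'level3'], {'value1': 0})
shared['level1']['value1'] = 99
print(shared['level3']['value1'])  # 99, the mutation shows up everywhere

# building a fresh inner dict per key avoids the aliasing
separate = {k: {'value1': 0} for k in ['level1', 'level2', 'level3']}
separate['level1']['value1'] = 99
print(separate['level3']['value1'])  # still 0
```

The comprehension evaluates `{'value1': 0}` once per key, so each level gets its own object.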
You can nest the `for` loops in a [dictionary comprehension](http://docs.python.org/2/reference/expressions.html#displays-for-sets-and-dictionaries) and create a two-level nested dictionary like this: ``` from pprint import pprint nested_dict = {'level%d' % level: {'value%d' % value: 0 for value in range(1, 4)} for level in range(1, 6)} pprint(nested_dict) ``` Output: ``` {'level1': {'value1': 0, 'value2': 0, 'value3': 0}, 'level2': {'value1': 0, 'value2': 0, 'value3': 0}, 'level3': {'value1': 0, 'value2': 0, 'value3': 0}, 'level4': {'value1': 0, 'value2': 0, 'value3': 0}, 'level5': {'value1': 0, 'value2': 0, 'value3': 0}} ```
How do I create multiple dictionaries based on nested for loops?
[ "", "python", "dictionary", "iteration", "" ]
I want to define a class with it's `__repr__` method defined in such a way that it will write out only the names and values of all attributes that are not methods. How can I do this? I have managed to write it like this, but I realize that this does not check for the attribute type. ``` class Example: def __repr__(self): return "\n".join(["%s: %s" % (x, getattr(self, x)) for x in dir(self) if not x.startswith('__')]) ``` What is missing here is the check for the type of the attribute.
You can use `inspect` for something like this: ``` from inspect import ismethod,getmembers class Example: def __repr__(self): return "\n".join("%s: %s" % (k, v) for (k,v) in getmembers(self,lambda x: not ismethod(x))) def method(self): return 1 a = Example() a.foo = 'bar' print a ``` This also picks up the double underscore attributes (`__module__`, `__doc__`). If you don't want those, you can pretty easily filter them out.
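A compact check that the `ismethod` predicate above really separates data attributes from methods on an instance (dunder names filtered out afterwards):

```python
from inspect import getmembers, ismethod

class Example(object):
    def __init__(self):
        self.txt = 'hello'

    def method(self):
        return 1

e = Example()
# getmembers passes each attribute value to the predicate
non_methods = [name for name, value in getmembers(e, lambda v: not ismethod(v))
               if not name.startswith('__')]
print(non_methods)  # ['txt']
```

On an instance, plain functions defined in the class are seen as bound methods, so `ismethod` is the right test here (for static methods or plain functions stored as attributes, `callable(v)` would be a broader filter).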
**Assuming your class does not define `__slots__`,** you could also just iterate the instance's `__dict__` (or via the [`vars()` function](http://docs.python.org/3/library/functions.html#vars)). ``` class Superclass: def __init__(self, w): self.w = w class Example(Superclass): def __init__(self, x, y, z): super().__init__(1234) self.x = x self.y = y self.z = z @property def x_prop(self): return self.x @classmethod def do_something(cls, z): return str(cls) + str(z) def __call__(self): return 4444 class_property = 42 def __repr__(self): return "\n".join("%s: [%s]" % (k, v) for (k,v) in vars(self).items()) example = Example(2, lambda y: z, '4') example2 = Example(example, 6j, b'90') print(repr(example2)) ``` This prints ``` x: [x: [2] y: [<function <lambda> at 0x7f9368b21ef0>] z: [4] w: [1234]] y: [6j] z: [b'90'] w: [1234] ```
Python: How can I check if an attribute of an object is a method or not?
[ "", "python", "object", "types", "attributes", "" ]
I am new to `pandas` and is trying the Pandas 10 minute tutorial with pandas version 0.10.1. However when I do the following, I get the error as shown below. `print df` works fine. Why is `.loc` not working? **Code** ``` import numpy as np import pandas as pd df = pd.DataFrame(np.random.randn(6,4), index=pd.date_range('20130101', periods=6), columns=['A','B','C','D']) df.loc[:,['A', 'B']] ``` **Error:** ``` AttributeError Traceback (most recent call last) <ipython-input-4-8513cb2c6dc7> in <module>() ----> 1 df.loc[:,['A', 'B']] C:\Python27\lib\site-packages\pandas\core\frame.pyc in __getattr__(self, name) 2044 return self[name] 2045 raise AttributeError("'%s' object has no attribute '%s'" % -> 2046 (type(self).__name__, name)) 2047 2048 def __setattr__(self, name, value): AttributeError: 'DataFrame' object has no attribute 'loc' ```
`loc` was [introduced in 0.11](http://pandas.pydata.org/pandas-docs/stable/whatsnew.html), so you'll need to upgrade your pandas to follow [the 10minute introduction](http://pandas.pydata.org/pandas-docs/stable/10min.html).
I came across this question when I was dealing with pyspark DataFrame. So, if you're also using pyspark DataFrame, you can convert it to pandas DataFrame using toPandas() method.
Pandas error: 'DataFrame' object has no attribute 'loc'
[ "", "python", "python-2.7", "numpy", "scipy", "pandas", "" ]
I want to multiply two 2D arrays in such a way that the first array's rows are multiplied with the columns of the second one. So I transposed the second array so that its columns became the transposed one's rows. Then I extracted rows from the two 2D arrays one by one, treated them as two simple lists, multiplied them and stored the result in another list. I want to store that as the 00 (row-column) element of the resultant 2D array. How can I do it? ``` NO_OF_ROWS_A=int(input("Enter the no. of rows in A: ")) NO_OF_COLUMNS_A=int(input("Enter the no. of columns in A: ")) NO_OF_ROWS_B=int(input("Enter the no. of rows in B : ")) NO_OF_COLUMNS_B=int(input("Enter the no. of columns in B: ")) mat_A=[[] for i in range(NO_OF_ROWS_A)] for i in range(NO_OF_ROWS_A): for j in range (NO_OF_COLUMNS_A): print("Enter",i,j,"element for mat A: ") a=int(input()) mat_A[i].append(a) mat_B=[[] for i in range(NO_OF_ROWS_B)] for i in range(NO_OF_ROWS_B): for j in range (NO_OF_COLUMNS_B): print("Enter",i,j,"element for mat B: ") b=int(input()) mat_B[i].append(b) import itertools mat_B_trans = list(itertools.zip_longest(*mat_B))#transposes B to change its col into row sum_list_x=0 mat_C=[[] for i in range(NO_OF_ROWS_A)] for i in range(NO_OF_ROWS_A): for j in range(NO_OF_COLUMNS_B): for i in range(NO_OF_ROWS_A):#extracts row of A list_row_a=mat_A[i] for j in range(NO_OF_COLUMNS_B):#extracts row of trans_B which is col of B list_column_b=mat_B_trans[i] list_temp_multiple=[a*b for a,b in zip(list_row_a, list_column_b)]#list of multiples of same indices of A & B sum_list=sum(list_temp_multiple)#sum of prev list elemets sum_list_x=sum_list_x + sum_list c = sum_list_x mat_C[i].append(c) print(mat_C) ```
Using [NumPy](http://www.numpy.org), your entire program could be written as: ``` import numpy as np mat_A = np.matrix(input('Enter matrix A: ')) mat_B = np.matrix(input('Enter matrix B: ')) mat_C = mat_A * mat_B print(mat_C) ``` yields ``` Enter matrix A: [[1,2], [3,4]] Enter matrix B: [[5,6], [7,8]] [[19 22] [43 50]] ```
Would the following work for you: ``` [sum(a * b for a, b in zip(*combined_row)) for combined_row in zip(mat_A, zip(*mat_B))] ``` This transposes `mat_B`, zips the two matrices per row, then multiplies the rows per-column and sums the multiplied values, making *one* list of values.
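The one-liner above computes a single dot product per paired row/column, so it yields one value per pair rather than a full product matrix. A complete pure-Python matrix multiplication needs a second loop over the transposed columns, e.g. (a sketch, no NumPy):

```python
def matmul(mat_a, mat_b):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col))
             for col in zip(*mat_b)]    # zip(*m) transposes mat_b
            for row in mat_a]

print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```

Each entry of the result is the dot product of one row of `mat_a` with one column of `mat_b`.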
How to multiply two lists and store its sum in a 2D array?
[ "", "python", "python-3.x", "" ]
When I submit my package to the Python Package Index (<https://pypi.python.org/pypi>) my README file, which is written with valid reStructuredText and saved as README.rst, is displayed as plain text without any formatting. I have run it through validators (rstctl and collective.checkdocs) and no errors are returned. My package is at: <https://pypi.python.org/pypi/lcinvestor> It's in github at: <https://github.com/jgillick/LendingClubAutoInvestor>
It turns out that the answer from @sigmavirus regarding the links was close. I started a [discussion](http://mail.python.org/pipermail/distutils-sig/2013-May/020744.html) on the distutils mailing list and found out that in-page links (i.e. #minimum-cash) are not allowed by the pypi reStructuredText parser and will invalidate the entire document. It seems that pypi uses a whitelist to filter link protocols (http vs ftp vs gopher), and sees '#' as an invalid protocol. It seems like this would be pretty easy to fix on their end, but until then, I'll be removing my in-page anchor links.
* You may use [`collective.checkdocs`](https://github.com/collective/collective.checkdocs) package to detect invalid constructs: `pip install collective.checkdocs python setup.py checkdocs` * You may then use the following python function to filter-out *sphinx-only* constructs (it might be necessary to add more regexes, to match your content): ``` #!/usr/bin/python3 """ Cleans-up Sphinx-only constructs (ie from README.rst), so that *PyPi* can format it properly. To check for remaining errors, install ``sphinx`` and run:: python setup.py --long-description | sed -file 'this_file.sed' | rst2html.py --halt=warning """ import re import sys, io def yield_sphinx_only_markup(lines): """ :param file_inp: a `filename` or ``sys.stdin``? :param file_out: a `filename` or ``sys.stdout`?` """ substs = [ ## Selected Sphinx-only Roles. # (r':abbr:`([^`]+)`', r'\1'), (r':ref:`([^`]+)`', r'`\1`_'), (r':term:`([^`]+)`', r'**\1**'), (r':dfn:`([^`]+)`', r'**\1**'), (r':(samp|guilabel|menuselection):`([^`]+)`', r'``\2``'), ## Sphinx-only roles: # :foo:`bar` --> foo(``bar``) # :a:foo:`bar` XXX afoo(``bar``) # #(r'(:(\w+))?:(\w+):`([^`]*)`', r'\2\3(``\4``)'), (r':(\w+):`([^`]*)`', r'\1(``\2``)'), ## Sphinx-only Directives. # (r'\.\. doctest', r'code-block'), (r'\.\. plot::', r'.. '), (r'\.\. seealso', r'info'), (r'\.\. glossary', r'rubric'), (r'\.\. figure::', r'.. 
'), ## Other # (r'\|version\|', r'x.x.x'), ] regex_subs = [ (re.compile(regex, re.IGNORECASE), sub) for (regex, sub) in substs ] def clean_line(line): try: for (regex, sub) in regex_subs: line = regex.sub(sub, line) except Exception as ex: print("ERROR: %s, (line(%s)"%(regex, sub)) raise ex return line for line in lines: yield clean_line(line) ``` and/or in your `setup.py` file, use something like this:: ``` def read_text_lines(fname): with io.open(os.path.join(mydir, fname)) as fd: return fd.readlines() readme_lines = read_text_lines('README.rst') long_desc = ''.join(yield_sphinx_only_markup(readme_lines)), ``` Alternatively you can use the `sed` unix-utility with this file: ``` ## Sed-file to clean-up README.rst from Sphinx-only constructs, ## so that *PyPi* can format it properly. ## To check for remaining errors, install ``sphinx`` and run: ## ## sed -f "this_file.txt" README.rst | rst2html.py --halt=warning ## ## Selected Sphinx-only Roles. # s/:abbr:`\([^`]*\)`/\1/gi s/:ref:`\([^`]*\)`/`\1`_/gi s/:term:`\([^`]*\)`/**\1**/gi s/:dfn:`\([^`]*\)`/**\1**/gi s/:\(samp\|guilabel\|menuselection\):`\([^`]*\)`/``\1``/gi ## Sphinx-only roles: # :foo:`bar` --> foo(``bar``) # s/:\([a-z]*\):`\([^`]*\)`/\1(``\2``)/gi ## Sphinx-only Directives. # s/\.\. +doctest/code-block/i s/\.\. +plot/raw/i s/\.\. +seealso/info/i s/\.\. +glossary/rubric/i s/\.\. +figure::/../i ## Other # s/|version|/x.x.x/gi ```
My rst README is not formatted on pypi.python.org
[ "", "python", "python-2.7", "pypi", "" ]
I am trying to get this string as a result: ``` "&markers=97,64&markers=45,84" ``` From the Python code below: ``` markers = [(97,64),(45,84)] result = ("&markers=%s" %x for x in markers) return result ``` How do I do this as the below does not give me the actual string?
You need to join your string like this: ``` markers = [(97,64),(45,84)] result = ''.join("&markers=%s" % ','.join(map(str, x)) for x in markers) return result ``` **UPDATE** I didn't initially have the `','.join(map(str, x))` section in there to turn each tuple into strings. This handles varying length tuples, but if you will always have exactly 2 numbers, you might see gatto's comment below. The explanation of what's going on is that we make a list with one item for each tuple from markers, turning the tuples into comma separated strings which we format into the `&markers=` string. This list of strings is then joined together separated by an empty string.
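Since each tuple here always holds exactly two numbers, a `%d,%d` format keeps the comma inside the format string and avoids the inner `join` entirely:

```python
markers = [(97, 64), (45, 84)]

# '%d,%d' % pair consumes both tuple elements at once
result = ''.join('&markers=%d,%d' % pair for pair in markers)
print(result)  # &markers=97,64&markers=45,84
```

The `','.join(map(str, x))` form from the answer is only needed when the tuples can vary in length.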
In Python 3.6 you could write: ``` markers = [(97,64),(45,84)] result = ''.join(f'&markers={pair}' for pair in markers) return result ```
Python string from list comprehension
[ "", "python", "" ]
Starting with some lists of tuples. ``` a = [(10,10),(20,20)] b = [(15,15),(25,25)] ``` What list comprehension can we use to create a new list, where the items from `a, b` is in order of size? ``` c = [(10,10),(15,15),(20,20),(25,25)] ```
You don't need a comprehension for this you could simply do: ``` c = a + b c.sort() >> [(10, 10), (15, 15), (20, 20), (25, 25)] ``` or even shorter: ``` c = sorted(a + b) >> [(10, 10), (15, 15), (20, 20), (25, 25)] ```
If both lists are already in sorted order... ``` c = list(heapq.merge(a, b)) ``` otherwise ``` c = sorted(itertools.chain(a, b)) ```
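Both variants above, run on the question's data; the `heapq.merge` form assumes each input list is already sorted:

```python
import heapq
import itertools

a = [(10, 10), (20, 20)]
b = [(15, 15), (25, 25)]

merged = list(heapq.merge(a, b))           # linear-time merge of sorted inputs
full_sort = sorted(itertools.chain(a, b))  # works for unsorted inputs too

print(merged)  # [(10, 10), (15, 15), (20, 20), (25, 25)]
```

Tuples compare element by element, so both approaches order first by the first component, then by the second.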
How to order a list based on the values of tuples
[ "", "python", "list", "" ]
I am not sure if there is a standard way of doing this. I have implemented the following function to dump all contents of an object. It must recursively dump sub-objects, so I am checking for `InstanceType`, but it does not work: ``` import types def dump_obj(obj, level=0): for a in dir(obj): try: if type(obj.__dict__[a]) == types.InstanceType: dump_obj(obj.__dict__[a], level + 2) else: try: print " " * level + "%s -> %s" % (a, obj.__dict__[a]) except: pass except: pass ``` How can I verify if an element is itself an object? What I actually want is the following. Given: ``` class B: def __init__(self): self.txt = 'bye' class A: def __init__(self): self.txt = 'hello' self.b = B() a = A() dump_obj(a) ``` I want the following output: ``` txt -> hello txt -> bye ```
It is always better to use `isinstance(x, y)` instead of `type(x) == y`. Since everything is an object in Python, it doesn't make sense to do `isinstance(attr, object)`, because it always returns `True`. Your best bet is to "blacklist" certain types: if the value is anything other than `int, float, str, unicode, list, dict, set, ...`, you go deeper; otherwise you just print it. For example:

```
def dump(obj, level=0):
    for a in dir(obj):
        if a.startswith('__'):
            continue                # skip dunder attributes
        val = getattr(obj, a)
        if callable(val):
            continue                # skip methods
        if isinstance(val, (int, float, str, unicode, list, dict, set)):
            print level * ' ' + '%s -> %s' % (a, val)
        else:
            dump(val, level=level + 2)
```

**UPDATE**: `isinstance` takes inheritance into account, so if you test whether an object is an instance of a parent class it will return `True`, while a `type` comparison will not. Since in this case you'll be testing against primitive types, it may not make any difference here, but in general `isinstance` is preferable. See this example:

```
>>> class A(object): pass
...
>>> class B(A): pass
...
>>> a, b = A(), B()
>>> type(a)
<class '__main__.A'>
>>> type(a) == A
True
>>> type(b)
<class '__main__.B'>
>>> type(b) == B
True
>>> type(b) == A
False
>>>
```

You can check out the [docs](http://docs.python.org/2/library/functions.html#isinstance).
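Since the snippet above is Python 2 (`unicode`, `print` statement), here is a rough Python 3 equivalent of the same idea — it uses `vars()` instead of `dir()` to stick to instance attributes, which sidesteps the dunder/method filtering:

```python
def dump(obj, level=0):
    for name, val in vars(obj).items():   # instance attributes only
        if isinstance(val, (int, float, str, bytes, list, dict, set)):
            print(level * ' ' + '%s -> %s' % (name, val))
        else:
            dump(val, level + 2)          # recurse into nested objects

class B:
    def __init__(self):
        self.txt = 'bye'

class A:
    def __init__(self):
        self.txt = 'hello'
        self.b = B()

dump(A())
# txt -> hello
#   txt -> bye
```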
This will recursively dump any object and all sub-objects. The other answers worked for simple examples, but for complex objects, they were missing some data. ``` import jsonpickle # pip install jsonpickle import json serialized = jsonpickle.encode(obj) print(json.dumps(json.loads(serialized), indent=2)) ``` EDIT: If you use YAML format, it will be even closer to your example. ``` import yaml # pip install pyyaml print(yaml.dump(yaml.load(serialized), indent=2)) ```
Recursively dump an object
[ "", "python", "" ]
I have a list of cam angles and displacements: ``` <Ignored header> 0 3 1 3 2 6 3 9 4 12 5 15 6 18 7 21 8 24 9 27 10 30 ... ``` How would I go about searching for an angle and producing the displacement at this angle? It's not necessary to store the data as at any given moment I only need the two values. I know this isn't currently working, but a brief explanation of why and how to improve would be much appreciated as I'm eager to learn more ``` camAngle = 140 camFile = open(camFileLoc) for line in camFile: if line > 1: if camAngle in line: print line ``` Thanks very much Lauren
You basically had it: ``` camAngle = 140 # The context manager closes the file automatically when you leave the block with open(camFileLoc, 'r') as handle: next(handle) # Skips the header for line in handle: # Splits the line on the whitespace and converts each string # into an integer. Then, you unpack it into the two variables (a tuple) angle, displacement = map(int, line.split()) if angle == camAngle: print displacement break # Exits the `for` loop else: # We never broke out of the loop, so the angle was never found print 'This angle is not in the file' ```
Something like this: ``` >>> angle=5 #lets say 5 is the required angle >>> with open("abc") as f: next(f) #skip header for line in f: camangle,disp = map(int,line.split()) #convert to integers and #store in variables if camangle==angle: # if it is equal to the required angle then break print camangle,disp break ... 5 15 ```
In python, how do I search for a number in a text file and print a corresponding value when it's found?
[ "", "python", "text-files", "" ]
I've just undertaken my first proper project with Python, a code snippet storing program. To do this I need to first write, then read, multiple lines to a .txt file. I've done quite a bit of googling and found a few things about writing to the file (which didn't really work). What I have currently got working is a function that reads each line of a multiline input and writes it into a list before writing it into a file. I had thought that I would just be able to read that from the text file and add each line into a list then print each line separately using a while loop, which unfortunately didn't work. After going and doing more research I decided to ask here. This is the code I have currently: ``` ''' Project created to store useful code snippets, prehaps one day it will evolve into something goregous, but, for now it's just a simple archiver/library ''' #!/usr/local/bin/python import sys, os, curses os.system("clear") Menu =""" #----------- Main Menu ---------# # 1. Create or edit a snippet # # 2. Read a snippet # # 0. Quit # #-------------------------------# \n """ CreateMenu =""" #-------------- Creation and deletion --------------# # 1. Create a snippet # # 2. Edit a snippet # # 3. Delete a snippet (Will ask for validation) # # 0. Go back # #---------------------------------------------------# \n """ ReadMenu=""" #------ Read a snippet ------# # 1. Enter Snippet name # # 2. List alphabetically # # 3. Extra # # 0. 
Go Back # #----------------------------# """ def readFileLoop(usrChoice, directory): count = 0 if usrChoice == 'y' or 'n': if usrChoice == 'y': f = open(directory, 'r') text = f.read() f.close() length = len(text) print text print length raw_input('Enter to continue') readMenu() f.close() elif choice == 'n': readMenu() def raw_lines(prompt=''): result = [] getmore = True while getmore: line = raw_input(prompt) if len(line) > 0: result.append(line) else: getmore = False result = str(result) result.replace('[','').replace(']','') return result def mainMenu(): os.system("clear") print Menu choice = '' choice = raw_input('--: ') createLoop = True if choice == '1': return creationMenu() elif choice == '2': readMenu() elif choice == '0': os.system("clear") sys.exit(0) def create(): os.system("clear") name = raw_input("Enter the file name: ") dire = ('shelf/'+name+'.txt') if os.path.exists(dire): while os.path.exists(dire): os.system("clear") print("This snippet already exists") name = raw_input("Enter a different name: ") dire = ('shelf/'+name+'.txt') print("File created\n") f = open(dire, "w") print("---------Paste code below---------\n") text = raw_lines() raw_input('\nEnter to write to file') f.writelines(text) f.close() raw_input('\nSnippet successfully filled, enter to continue') else: print("File created") f = open(dire, "w") print("---------Paste code below---------\n") text = raw_lines() print text raw_input('\nEnter to write to file') f.writelines(text) f.close() raw_input('\nSnippet successfully filled, enter to continue') def readMenu(): os.system("clear") name = '' dire = '' print ReadMenu choice = raw_input('--:') if choice == '1': os.system("clear") name = raw_input ('Enter Snippet name: ') dire = ('shelf/'+name+'.txt') if os.path.exists(dire): choice = '' choice = raw_input('The Snippet exists! Open? 
(y/n)') '''if not choice == 'y' or 'n': while (choice != 'y') or (choice != 'n'): choice = raw_input('Enter \'y\' or \'n\' to continue: ') if choice == 'y' or 'n': break''' readFileLoop(choice, dire) else: raw_input('No snippet with that name exists. Enter to continue: ') #add options to retry, create snippet or go back readMenu() elif choice == '0': os.system("clear") print Menu def creationMenu(): ###### Menu to create, edit and delete a snippet ###### os.system("clear") print CreateMenu choice = raw_input('--: ') if choice == '1': ### Create a snippet os.system("clear") print create() print creationMenu() elif choice == '2': os.system("clear") ### Edit a snippet print ("teh editon staton") raw_input() print creationMenu() elif choice == '3': os.system("clear") ### Delete a snippet print ("Deletion staton") raw_input() print creationMenu() elif choice == '0': ### Go Back os.system("clear") ######## Main loop ####### running = True print ('Welcome to the code library, please don\'t disturb other readers!\n\n') while running: mainMenu() ######## Main loop ####### ``` Tl;Dr: Need to write and read multiline text files
> The problem that I'm having is the way the multilines are being stored to the file, it's stored in list format e.g `['line1', 'line2', 'line3']` which is making it difficult to read as multilines because I can't get it to be read as a list, when I tried it added the whole stored string into one list item. I don't know if I'm writing to the file correctly.

OK, so the problem is with *writing* the file. You're reading it in correctly, it just doesn't have the data you want. And the problem is in your `raw_lines` function. First it assembles a list of lines in the `result` variable, which is good. Then it does this:

```
result = str(result)
result.replace('[','').replace(']','')
```

There are two small problems and one big one here. First, [`replace`](http://docs.python.org/2/library/stdtypes.html#str.replace):

> Return[s] a copy of the string with all occurrences of substring *old* replaced by *new*.

Python strings are immutable. None of their methods change them in-place; all of them return a new string instead. You're not doing anything with that new string, so that line has no effect.

Second, if you want to join a sequence of strings into a string, you don't do that by calling `str` on the sequence and then trying to parse it. That's what the [`join`](http://docs.python.org/2/library/stdtypes.html#str.join) method is for. For example, if your lines already end with newlines, you want `''.join(result)`. If not, you want something like `'\n'.join(result) + '\n'`. What you're doing has all kinds of problems: you forgot to remove the extra commas, you will remove any brackets (or commas, once you fix that) within the strings themselves, etc.

Finally, you shouldn't be doing this in the first place. You want to return something that can be passed to [`writelines`](http://docs.python.org/2/library/stdtypes.html#file.writelines), which:

> Write[s] a sequence of strings to the file. The sequence can be any iterable object producing strings, typically a list of strings.
You have a list of strings, which is exactly what `writelines` wants. Don't try to join them up into one string. If you do, it will run, but it won't do the right thing (because a string is, itself, a sequence of 1-character strings). So, if you just remove those two lines entirely, your code will almost work. But there's one last problem: [`raw_input`](http://docs.python.org/2/library/functions.html#raw_input): > … reads a line from input, converts it to a string (stripping a trailing newline), and returns that. But [`writelines`](http://docs.python.org/2/library/stdtypes.html#file.writelines): > … does not add line separators. So, you'll end up with all of your lines concatenated together. You need the newlines, but `raw_input` throws them away. So, you have to add them back on. You can fix this with a simple one-line change: ``` result.append(line + '\n') ```
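To make both points concrete, here is a small demonstration (Python 3 syntax, but the behaviour is identical in Python 2):

```python
lines = ['line1', 'line2', 'line3']

s = str(lines)
t = s.replace('[', '').replace(']', '')  # replace() returns a NEW string
print(s)  # unchanged: ['line1', 'line2', 'line3']
print(t)  # still full of quotes and commas: 'line1', 'line2', 'line3'

# join is the right tool, adding back the newlines raw_input stripped
text = '\n'.join(lines) + '\n'
print(text)
```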
To read multiple lines from a file, it's easiest to use `readlines()`, which will return a list of all lines in the file. To read the file use: ``` with open(directory, 'r') as f: lines = f.readlines() ``` And to write out your changes, use: ``` with open(directory, 'w') as f: f.writelines(lines) ```
Writing multiple lines to file and then reading them with Python
[ "", "python", "text", "multiline", "" ]
I have table A and table B. I know table B has 7848 rows (count(\*)) and I want see which of those 7848 exist inside table A. As far as I know INNER JOIN returns the values that appear in BOTH tables A and B. So I inner joined them like this: ``` SELECT * FROM TABLE1 AS A INNER JOIN TABLE2 AS B ON A.field1 = B.field1 ``` This query returns 1902 rows. Now, I want to find out which rows did NOT appear in table B so I do this: ``` SELECT * FROM TABLE_B WHERE FIELD1 NOT IN (field1*1902....); ``` By difference I think I should be getting a result of 5946 rows, since I found 1902 positive rows. What is weird is that this NOT IN statement returns 6175 rows and if I add them I get 8077 which is more than count(\*) told me table B had. What can I possibly be doing wrong? Thanks in advance.
A join is a kind of multiplication: if you have multiple rows in table A with the same `field1`, then each matching row in B is counted multiple times. Perhaps you want

```
SELECT *
FROM TABLE_B B
WHERE EXISTS
    (SELECT field1 FROM TABLE_A A
     WHERE A.field1 = B.field1);
```
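To illustrate the multiplying effect, here is a small sketch. It uses SQLite purely for convenience, but MySQL behaves the same way:

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.executescript("""
    CREATE TABLE a (field1 INT);
    CREATE TABLE b (field1 INT);
    INSERT INTO a VALUES (1), (1), (2);  -- value 1 appears twice in a
    INSERT INTO b VALUES (1), (2), (3);
""")

# The inner join repeats b's row for field1 = 1 once per matching row in a
joined = con.execute(
    "SELECT COUNT(*) FROM a INNER JOIN b ON a.field1 = b.field1"
).fetchone()[0]
print(joined)  # 3

# EXISTS counts each row of b at most once
matched = con.execute(
    "SELECT COUNT(*) FROM b WHERE EXISTS "
    "(SELECT 1 FROM a WHERE a.field1 = b.field1)"
).fetchone()[0]
print(matched)  # 2
```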
Try:

```
SELECT B.*
FROM TABLE2 AS B
LEFT JOIN TABLE1 AS A
ON A.field1 = B.field1
WHERE A.field1 IS NULL
```

The `LEFT JOIN` keeps every row of B; where no match exists in A, A's columns come back `NULL`, so the `WHERE` clause leaves exactly the rows of B that never appeared in A.
Discrepance obtaining values not in inner join by difference
[ "", "mysql", "sql", "inner-join", "" ]
Consider this list of tuples: ``` list=[((0.0, 0.0), (0.00249999994412065, -509.707885742188), (0.00499999988824129, -1017.52648925781), (0.0087500000372529, -1778.51281738281), (0.0143750002607703, -2918.21899414063), (0.0228125005960464, -4609.91650390625))] ``` I'd like to write the information to a txt file in this format: ``` 0.0 0.0 0.00249999994412065 -509.707885742188 .... ``` I've been using this code: ``` with open(fname, 'w') as graphd: for row in list: print >>graphd, ', '.join(map(str, row)) graphd.close() ``` where: `fname` is the path to the file `list` is the list of tuples This results in: (which is close but still not what I want...) ``` (0.0, 0.0), (0.00249999994412065, -509.707885742188), (0.00499999988824129, -1017.52648925781), (0.0087500000372529, -1778.51281738281), (0.0143750002607703, -2918.21899414063), (0.0228125005960464, -4609.91650390625), (0.0328124985098839, -6560.962890625), (0.0428125001490116, -8467.638671875), (0.0528125017881393, -10321.19140625), (0.0628124997019768, -12137.498046875), (0.0728124976158142, -13877.9580078125), (0.0828125029802322, -15571.837890625), (0.0928125008940697, -17186.35546875), (0.102812498807907, -18728.310546875), (0.112812496721745, -20191.1640625), (0.122812502086163, -21548.513671875), (0.1328125, -22796.673828125), (0.142812505364418, -23935.923828125), (0.152812495827675, -24970.046875), (0.162812501192093, -25903.265625), (0.172812506556511, -26744.365234375), (0.182812497019768, -27502.3125), (0.192812502384186, -28186.765625), (0.202812492847443, -28805.953125), (0.212812498211861, -29367.408203125), (0.222812503576279, -29877.845703125), (0.232812494039536, -30343.181640625), (0.242812499403954, -30768.73046875), (0.252812504768372, -31159.8515625), (0.262812495231628, -31519.955078125), (0.272812485694885, -31852.59765625), (0.282812505960464, -32160.71875), (0.292812496423721, -32446.474609375), (0.302812486886978, -32712.138671875), (0.312812507152557, -32959.703125), (0.322812497615814, 
-33190.91015625), (0.332812488079071, -33407.29296875), (0.34281250834465, -33610.2109375), (0.352812498807907, -33800.859375), (0.362812489271164, -33980.30078125), (0.372812509536743, -34149.484375), (0.3828125, -34309.25390625), (0.392812490463257, -34460.3671875), (0.402812510728836, -34603.5), (0.412812501192093, -34739.26171875), (0.42281249165535, -34868.20703125), (0.432812511920929, -34990.828125), (0.442812502384186, -35107.5703125), (0.452812492847443, -35218.8515625), (0.462812513113022, -35325.04296875), (0.472812503576279, -35426.48828125), (0.482812494039536, -35523.48828125), (0.492812514305115, -35616.33203125), (0.502812504768372, -35705.28515625), (0.512812495231628, -35790.578125), (0.522812485694885, -35872.4375), (0.532812476158142, -35951.0625), (0.542812526226044, -36026.640625), (0.552812516689301, -36099.3515625), (0.562812507152557, -36169.34765625), (0.572812497615814, -36236.77734375), (0.582812488079071, -36301.78515625), (0.592812478542328, -36364.49609375), (0.602812528610229, -36425.02734375), (0.612812519073486, -36483.4921875), (0.622812509536743, -36539.9921875), (0.6328125, -36594.62890625), (0.642812490463257, -36647.4921875), (0.652812480926514, -36698.6640625), (0.662812471389771, -36748.22265625), (0.672812521457672, -36796.25), (0.682812511920929, -36842.8125), (0.692812502384186, -36887.97265625), (0.702812492847443, -36931.796875), (0.712812483310699, -36974.33984375), (0.722812473773956, -37015.66015625), (0.732812523841858, -37055.8125), (0.742812514305115, -37094.83984375), (0.752812504768372, -37132.7890625), (0.762812495231628, -37169.70703125), (0.772812485694885, -37205.6328125), (0.782812476158142, -37240.609375), (0.792812526226044, -37274.671875), (0.802812516689301, -37307.85546875), (0.812812507152557, -37340.19140625), (0.822812497615814, -37371.71875), (0.832812488079071, -37402.45703125), (0.842812478542328, -37432.4453125), (0.852812528610229, -37461.703125), (0.862812519073486, -37490.26171875), 
(0.872812509536743, -37518.14453125), (0.8828125, -37545.375), (0.892812490463257, -37571.98046875), (0.902812480926514, -37597.97265625), (0.912812471389771, -37623.37890625), (0.922812521457672, -37648.21875), (0.932812511920929, -37672.51171875), (0.942812502384186, -37696.26953125), (0.952812492847443, -37719.515625), (0.962812483310699, -37742.26171875), (0.972812473773956, -37764.53125), (0.982812523841858, -37786.33203125), (0.992812514305115, -37807.6796875), (1.00281250476837, -37828.58984375), (1.01281249523163, -37849.078125), (1.02281248569489, -37869.15625), (1.03281247615814, -37888.83203125), (1.0428124666214, -37908.12109375), (1.05281245708466, -37927.03515625), (1.06281244754791, -37945.58203125), (1.07281255722046, -37963.7734375), (1.08281254768372, -37981.62109375), (1.09281253814697, -37999.1328125), (1.10281252861023, -38016.3203125), (1.11281251907349, -38033.1875), (1.12281250953674, -38049.75), (1.1328125, -38081.30859375), (1.14281249046326, -38117.46484375), (1.15281248092651, -38152.9765625), (1.16281247138977, -38205.91015625), (1.17281246185303, -38262.1171875), (1.18281245231628, -38317.31640625), (1.19281244277954, -38371.5546875), (1.20281255245209, -38438.77734375), (1.21281254291534, -38511.60546875), (1.2228125333786, -38595.37109375), (1.23281252384186, -38688.0703125), (1.24281251430511, -38779.234375), (1.25281250476837, -38868.90625), (1.26281249523163, -38957.11328125), (1.27281248569489, -39043.90234375), (1.28281247615814, -39129.30078125), (1.2928124666214, -39213.34375), (1.30281245708466, -39296.0625), (1.31281244754791, -39377.48828125), (1.32281255722046, -39457.6484375), (1.33281254768372, -39536.578125), (1.34281253814697, -39614.30078125), (1.35281252861023, -39690.84375), (1.36281251907349, -39766.23828125), (1.37281250953674, -39840.5078125), (1.3828125, -39913.67578125), (1.39281249046326, -39985.765625), (1.40281248092651, -40056.8046875), (1.41281247138977, -40126.8125), (1.42281246185303, -40195.81640625), 
(1.43281245231628, -40263.828125), (1.44281244277954, -40330.87890625), (1.45281255245209, -40396.98046875), (1.46281254291534, -40462.16015625), (1.4728125333786, -40526.43359375), (1.48281252384186, -40589.8203125), (1.49281251430511, -40652.3359375), (1.50281250476837, -40714.00390625), (1.51281249523163, -40774.83203125), (1.52281248569489, -40835.77734375), (1.53281247615814, -40903.80078125), (1.5428124666214, -40970.921875), (1.55281245708466, -41037.16015625), (1.56281244754791, -41104.97265625), (1.57281255722046, -41179.66796875), (1.58281254768372, -41268.05859375), (1.59281253814697, -41356.78515625), (1.60281252861023, -41444.3828125), (1.61281251907349, -41530.87890625), (1.62281250953674, -41623.875), (1.6328125, -41731.9765625), (1.64281249046326, -41841.390625), (1.65281248092651, -41949.47265625), (1.66281247138977, -42056.23828125), (1.67281246185303, -42161.71875), (1.68281245231628, -42265.93359375), (1.69281244277954, -42368.90234375), (1.70281255245209, -42473.17578125), (1.71281254291534, -42581.99609375), (1.7228125333786, -42689.55078125), (1.73281252384186, -42800.32421875), (1.74281251430511, -42912.33984375), (1.75281250476837, -43023.0546875), (1.76281249523163, -43132.5), (1.77281248569489, -43242.2890625), (1.78281247615814, -43356.40625), (1.7928124666214, -43469.234375), (1.80281245708466, -43580.78515625), (1.81281244754791, -43691.0625), (1.82281255722046, -43800.109375), (1.83281254768372, -43907.953125), (1.84281253814697, -44014.60546875), (1.85281252861023, -44120.09375), (1.86281251907349, -44224.4296875), (1.87281250953674, -44327.640625), (1.8828125, -44432.32421875), (1.89281249046326, -44538.984375), (1.90281248092651, -44646.5625), (1.90531253814697, -44674.09375), (1.90625, -44689.54296875), (1.90660154819489, -44698.22265625), (1.90673339366913, -44703.1015625), (1.90674579143524, -44703.35546875), (1.90676426887512, -44703.7421875), (1.90679216384888, -44704.3203125), (1.90683376789093, -44705.1875), 
(1.90689635276794, -44706.76171875) ```
Problem is that you're iterating over the list, but the list contains only one item: a tuple of tuples. So you must iterate over that tuple of tuples, i.e. `lis[0]`:

```
lis = [((0.0, 0.0), (0.00249999994412065, -509.707885742188),
        (0.00499999988824129, -1017.52648925781),
        (0.0087500000372529, -1778.51281738281),
        (0.0143750002607703, -2918.21899414063),
        # ... the rest of the tuples from the question ...
        (1.90689635276794, -44706.76171875))]

with open("abc", "w") as f:
    for line in lis[0]:
        strs = " ".join(str(x) for x in line)
        f.write(strs + "\n")
```

output (abc's content):

```
0.0 0.0
0.00249999994412 -509.707885742
0.00499999988824 -1017.52648926
0.00875000003725 -1778.51281738
0.0143750002608 -2918.21899414
.....
```
I would store the data as [JSON](http://docs.python.org/2/library/json.html). This makes it easy to export and import the data again using Python. Which makes it easy to share the data with other languages, or programs. ``` import json with open('data.json', 'w') as outfile: json.dump(list, outfile) ``` You can then load the data again using [json.load](http://docs.python.org/2/library/json.html#json.load). ``` with open('data.json', 'r') as infile: print json.load(infile) ``` If you need more readable data you can add indent=4 to json.dump. ``` json.dump(list, outfile, indent=4) ``` The output would be ``` [ [ [ 0.0, 0.0 ], [ 0.00249999994412065, -509.707885742188 ], [ 0.00499999988824129, -1017.52648925781 ], .... ```
Write list of tuples to txt file
[ "", "python", "list", "tuples", "" ]
This is the scenario. My app will have the following:

1. A listbox (with the checkbox property enabled) that will display a list of Something.
2. The user will select from the listbox (multiselect) by using the checkboxes.
3. I will loop through all the checked items and store the IDs into an array.

I will store the IDs separated by commas (`1,2,3,4`) and then use `length - 1` to delete the last comma. How can I convert the string `1,2,3,4` into an integer type of data if my stored procedure is like this?

```
Select * from tblSomething
Where ID in (1,2,3,4)
```
You can use the following SQL function. ``` SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO CREATE FUNCTION [dbo].[CommaSeparatedToString] ( @psCSString VARCHAR(8000) ) RETURNS @otTemp TABLE(sID VARCHAR(20)) AS BEGIN DECLARE @sTemp VARCHAR(50) WHILE LEN(@psCSString) > 0 BEGIN SET @sTemp = LEFT(@psCSString, ISNULL(NULLIF(CHARINDEX(',', @psCSString) - 1, -1), LEN(@psCSString))) SET @psCSString = SUBSTRING(@psCSString,ISNULL(NULLIF(CHARINDEX(',', @psCSString), 0), LEN(@psCSString)) + 1, LEN(@psCSString)) INSERT INTO @otTemp VALUES (@sTemp) END RETURN END ``` And call in your stored procedure like ``` Select * from tblSomething Where ID in (SELECT * FROM CommaSeparatedToString('1,2,3,4')) ```
You can use `CAST` or `CONVERT`:

```
SELECT CAST(MyVarcharCol AS INT) FROM Table
SELECT CONVERT(INT, MyVarcharCol) FROM Table
```

Refer to this link: <http://msdn.microsoft.com/en-us/library/ms187928.aspx>
convert string into int SQL Server
[ "", "sql", "sql-server", "string", "int", "varchar", "" ]
I have installed Python and configured it in Apache. I created a Python web file, but when I open it in a browser I get an error. The same code works on the command line, so I suspect the problem is in my Apache configuration. This is the error message:

```
MOD_PYTHON ERROR

ProcessId:    6368
Interpreter:  'localhost'
ServerName:   'localhost'
DocumentRoot: '/var/www'
URI:          '/python_test/hello_world.py'
Location:     None
Directory:    '/var/www/'
Filename:     '/var/www/python_test/hello_world.py'
PathInfo:     ''
Phase:        'PythonHandler'
Handler:      'mod_python.py'

Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/mod_python/importer.py", line 1537, in HandlerDispatch
    default=default_handler, arg=req, silent=hlist.silent)
  File "/usr/lib/python2.7/dist-packages/mod_python/importer.py", line 1202, in _process_target
    module = import_module(module_name, path=path)
  File "/usr/lib/python2.7/dist-packages/mod_python/importer.py", line 304, in import_module
    return __import__(module_name, {}, {}, ['*'])
ImportError: No module named py
```

My Python code is:

```
print "Hello world!"
```

My httpd.conf file is:

```
ServerName localhost
<Directory /var/www/>
    Options Indexes FollowSymlinks MultiViews
    AllowOverride AuthConfig
    order allow,deny
    allow from all
    AddHandler mod_python .py
    PythonHandler mod_python.py
    PythonDebug On
</Directory>
```
[AddHandler](http://httpd.apache.org/docs/2.2/mod/mod_mime.html#addhandler) requires additional parameters and you basically provide it with the python extension `.py`. Then you provide the name of your Python file to the [PythonHandler](http://www.modpython.org/live/current/doc-html/dir-handlers-ph.html) without the extension:

```
AddHandler mod_python .py
PythonHandler hello_world
PythonDebug On
```
There appears to be a typo in your httpd.conf: `AddHandler mod_python .py` Because of the space, it is looking for two modules, `mod_python` and `py`.
Error on python web page
[ "", "python", "apache", "python-2.7", "cgi", "" ]
What I am trying to achieve is for a combo box (Combo_sf) selection to dictate the form shown in the subform control (sf_record). I have about 10 forms, and their names are in the combo box data. I am new to VBA and am not sure if my approach is right: ``` Private Sub Combo_sf_AfterUpdate() Dim strLoadTable As String strLoadTable = "Form." & Me.Combo_sf.Value MsgBox strLoadTable Forms![frm_Mnu_Manage Configuration Settings]!sf_record.Form.SourceObject = strLoadTable End Sub ``` I have placed this in the combo box's AfterUpdate event, but when I make my selection nothing happens in the form. Am I approaching this right, or would another way work better?
Your approach *should* work. I put a combo box named `cbxSubform` on my main form and added one line of code to its `AfterUpdate()` event handler... ``` Private Sub cbxSubform_AfterUpdate() Me.mySubform.SourceObject = Me.cbxSubform.Value End Sub ``` ...and changing the selection in the combo box switches the subforms immediately. Are you sure that the `AfterUpdate()` code for your combo box is actually firing? (You could add a `MsgBox` or a `Debug.Print` to check.)
It could be this line which is tripping you up: ``` strLoadTable = "Form." & Me.Combo_sf.Value ``` What is your form object called? If your form is called `Form.myTableName` it could be the `.` that is throwing it out, try setting it to a form without a dot in its name.
Access Subform Source object
[ "", "sql", "ms-access", "ms-access-2007", "vba", "" ]
Here is my code to delete my first row, but it has no effect! ``` mysql> select * from myt; +--------+--------------+------+---------+ | Fname | Lname | age | phone | +--------+--------------+------+---------+ | NULL | Jackson | NULL | NULL | | stive | NULL | NULL | NULL | | ghbfgf | rtrgf | 22 | 111 | | zxas | zxa | 30 | 6547812 | | wewew | uytree | 22 | 658478 | +--------+--------------+------+---------+ 5 rows in set (0.00 sec) mysql> delete from myt -> Where Fname = "NULL"; Query OK, 0 rows affected (0.00 sec) ``` Thanks!
Use `IS NULL`. You cannot use arithmetic comparison operators such as `=`, `<`, or `<>` to test for `NULL`. ``` DELETE FROM myt WHERE Fname IS NULL ``` * [Working with NULL Values](http://dev.mysql.com/doc/refman/5.0/en/working-with-null.html)
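The difference can be reproduced with in-memory SQLite as a stand-in for MySQL: comparing to the string `'NULL'` matches nothing, while `IS NULL` actually removes the row. The data below is a shortened version of the table in the question.

```python
import sqlite3

# Shortened reproduction of the question's table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE myt (Fname TEXT, Lname TEXT, age INT, phone TEXT)")
conn.executemany("INSERT INTO myt VALUES (?, ?, ?, ?)", [
    (None, 'Jackson', None, None),
    ('stive', None, None, None),
    ('ghbfgf', 'rtrgf', 22, '111'),
])

conn.execute("DELETE FROM myt WHERE Fname = 'NULL'")  # compares to the STRING 'NULL', deletes nothing
conn.execute("DELETE FROM myt WHERE Fname IS NULL")   # removes the row whose Fname is NULL
remaining = conn.execute("SELECT COUNT(*) FROM myt").fetchone()[0]
print(remaining)  # 2
```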
NULL is not a value. NULL means nothing is present. So usage of FNAME = "NULL" is wrong. ``` delete from myt Where Fname IS NULL; ```
Delete a row (Record) in MySQL
[ "", "mysql", "sql", "" ]
I need help trying to list directories in Python. I am trying to code a Python virus, just proof of concept, nothing special. ``` #!/usr/bin/python import os, sys VIRUS='' data=str(os.listdir('.')) data=data.translate(None, "[],\n'") print data f = open(data, "w") f.write(VIRUS) f.close() ``` EDIT: I need it to be multi-lined so when I list the directories I can infect the first file that is listed, then the second, and so on. I don't want to use the ls command because I want it to be multi-platform.
So when writing a virus like this, you will want it to be recursive. This way it will be able to go inside every directory it finds and write over those files as well, completely destroying every single file on the computer. ``` def virus(directory=os.getcwd()): VIRUS = "THIS FILE IS NOW INFECTED" if directory[-1] == "/": #making sure directory can be concatenated with file pass else: directory = directory + "/" #making sure directory can be concatenated with file files = os.listdir(directory) for i in files: location = directory + i if os.path.isfile(location): with open(location,'w') as f: f.write(VIRUS) elif os.path.isdir(location): virus(directory=location) #running function again if in a directory to go inside those files ``` Now this one line will rewrite all files as the message in the variable `VIRUS`: virus() **Extra explanation:** the reason I have the default as: `directory=os.getcwd()` is because you originally were using `"."`, which, in the `listdir` method, will be the current working directory's files. I needed the actual directory name in order to build the paths to the nested directories **This does work!**: I ran it in a test directory on my computer and every file in every nested directory had its content replaced with: `"THIS FILE IS NOW INFECTED"`
Don't call `str` on the result of `os.listdir` if you're just going to try to parse it again. Instead, use the result directly: ``` for item in os.listdir('.'): print item # or do something else with item ```
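A non-destructive way to sketch the same traversal: `os.walk` recurses into nested directories for you and works on every platform, so no manual recursion is needed. The directory tree and file names below are made up for the demo, and the files are created in a throwaway temp directory rather than touched in place.

```python
import os
import tempfile

# Build a tiny hypothetical tree: root/a.txt and root/sub/b.txt.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "sub"))
for relpath in ("a.txt", os.path.join("sub", "b.txt")):
    with open(os.path.join(root, relpath), "w") as f:
        f.write("content")

# os.walk yields (directory, subdirectories, files) for every level.
found = []
for dirpath, dirnames, filenames in os.walk(root):
    for filename in filenames:
        found.append(os.path.join(dirpath, filename))

print(sorted(os.path.basename(p) for p in found))  # ['a.txt', 'b.txt']
```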
Listing Directories In Python Multi Line
[ "", "python", "directory", "" ]
``` create table [premiumuser] (user_id int, name nvarchar(50)); create table [liteuser] (user_id int, name nvarchar(50)); create table [feature] (id nvarchar(50), user_id int, userkey int); insert into [premiumuser] select 1, 'stephen'; insert into [premiumuser] select 2, 'roger'; insert into [liteuser] select 1, 'apollo'; insert into [liteuser] select 2, 'venus'; insert into feature select 'Upload content', 1, 1; insert into feature select 'Create account', 1, 0; insert into feature select 'View content', 2, 0; ``` I would like to see data from the **feature** table, but instead of `userid` I want the `username`. The catch here is that if `userkey` is 0, I get the `username` from the **liteuser** table, else from the **premiumuser** table. The data should look like ``` 'Upload content', 'stephen', 1 'Create account', 'apollo', 0 'View content', 'venus', 0 ```
Try this: ``` select f.id, case when userkey=0 then l.name else p.name end as username, f.userkey from [feature] f left join [liteuser] l on l.user_id = f.user_id left join [premiumuser] p on p.user_id = f.user_id ```
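The `CASE` plus double `LEFT JOIN` pattern can be checked against the question's own data using in-memory SQLite as a stand-in for SQL Server (SQLite has no `[bracket]` quoting, so the names are written plainly):

```python
import sqlite3

# Reproduce the question's three tables and rows.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE premiumuser (user_id INT, name TEXT);
CREATE TABLE liteuser    (user_id INT, name TEXT);
CREATE TABLE feature     (id TEXT, user_id INT, userkey INT);
INSERT INTO premiumuser VALUES (1, 'stephen'), (2, 'roger');
INSERT INTO liteuser    VALUES (1, 'apollo'), (2, 'venus');
INSERT INTO feature VALUES ('Upload content', 1, 1),
                           ('Create account', 1, 0),
                           ('View content',   2, 0);
""")

# userkey decides which joined table supplies the name.
rows = conn.execute("""
SELECT f.id,
       CASE WHEN f.userkey = 0 THEN l.name ELSE p.name END AS username,
       f.userkey
FROM feature f
LEFT JOIN liteuser    l ON l.user_id = f.user_id
LEFT JOIN premiumuser p ON p.user_id = f.user_id
""").fetchall()
print(rows)
```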
``` SELECT f.id , (CASE WHEN f.userkey = 0 THEN l.name ELSE p.name END) AS name , f.userkey FROM feature f LEFT JOIN liteuser l on f.user_id = l.user_id LEFT JOIN premiumuser p on p.user_id = f.user_id ``` I suggest using left joins over inner joins, as you seem to be asking about all users associated with a given feature. This way, features with no associated users will have NULL as the name. Additionally, using two inner joins against the user tables would return only those features which have an entry for the same user in both the premium and lite tables.
Join different tables based on condition
[ "", "sql", "sql-server-2008", "join", "" ]
I have a table which stores the names of certain tables - *tableNames*. I'd like to run a DELETE statement on some of those tables (deleting all the rows from the tables they represent, not removing them from *tableNames*). I thought I could just do ``` DELETE FROM (SELECT tableName FROM tablesNames WHERE ...) AS deleteTables ``` But I keep getting an incorrect syntax error. I also thought about iterating through the table in a WHILE loop and storing each name in a variable, but I'm hoping there's a simpler way. Specifically, this is for Microsoft SQL Server.
You cannot do it that way because the inner `SELECT` is simply another set you're deleting from. Basically you're creating a table of table names and telling the DB to delete it. Even iterating through them won't work without dynamic SQL and `EXEC`. Do you need to automate this process? What I've done in the past is something like this: ``` SELECT 'DELETE ' + tableName FROM tablenames WHERE [conditions] ``` Your output will look like this: ``` DELETE myTableName1 DELETE myTableName2 DELETE myTableName3 ``` And then simply copy the results of this query out of the window and run them. If you need to automate this in SQL you can concatenate all the output strings in the result and send them as a parameter to an `EXEC` call.
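The same generate-then-execute idea can be sketched from Python with in-memory SQLite: read the names out of the metadata table, then run one `DELETE` per name. The table names below are hypothetical, and since they come from our own table rather than user input, plain string concatenation is tolerable for the demo.

```python
import sqlite3

# A metadata table plus two data tables it points at.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tableNames (tableName TEXT);
CREATE TABLE t1 (x INT);
CREATE TABLE t2 (x INT);
INSERT INTO tableNames VALUES ('t1'), ('t2');
INSERT INTO t1 VALUES (1), (2);
INSERT INTO t2 VALUES (3);
""")

# Build and run one DELETE per stored name: rows go away, tables remain.
names = [row[0] for row in conn.execute("SELECT tableName FROM tableNames")]
for name in names:
    conn.execute("DELETE FROM " + name)

print([conn.execute("SELECT COUNT(*) FROM " + n).fetchone()[0] for n in names])  # [0, 0]
```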
try using cursor : ``` DECLARE @tableName varchar(255) DECLARE cur cursor for select tableName from tableNames where (...) OPEN CUR FETCH NEXT FROM cur into @tableName WHILE @@FETCH_STATUS = 0 BEGIN exec('DELETE ' + @tableName) FETCH NEXT FROM cur into @tableName END CLOSE cur DEALLOCATE cur ```
Run a DELETE statement certain table names stored in a table
[ "", "sql", "sql-server", "t-sql", "sql-server-2008-r2", "" ]
I usually see most people manage default argument values in functions or methods like this: ``` def foo(L=None): if L is None: L = [] ``` However, I see other people doing something like: ``` def foo(L=None): L = L or [] ``` I don't know if I am missing something, but why do most people use the first approach instead of the second? Are they exactly the same thing? It seems that the second is clearer and shorter.
They are not equal. The first approach checks exactly whether the given argument `L` is `None`. The second checks whether `L` is true in the Python sense. In Python, if you use a list in a condition, the rules are the following: 1. If the list is empty, it is False 2. Otherwise it is True So what's the difference between the two approaches? Compare this code. First: ``` def foo(L=None): if L is None: L = [] L.append('x') return L >>> my_list = [] >>> foo(my_list) >>> my_list ['x'] ``` Second: ``` def foo(L=None): L = L or [] L.append('x') return L >>> my_list = [] >>> foo(my_list) >>> my_list [] ``` So the first didn't create a new list; it used the given list. But the second creates a new one.
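The behavioural difference described above can be checked directly with a small self-contained script (the function names are made up to keep the two variants apart):

```python
def foo_is_none(L=None):
    if L is None:
        L = []
    L.append('x')
    return L

def foo_or(L=None):
    L = L or []   # an empty list is falsy, so a NEW list silently replaces it
    L.append('x')
    return L

mine = []
foo_is_none(mine)
print(mine)       # ['x']  the caller's list was used

mine = []
foo_or(mine)
print(mine)       # []     the caller's empty list was quietly ignored
```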
The two are not equivalent if the argument is a false-y value. This doesn't matter often, as many false-y values aren't suitable arguments to most functions where you'd do this. Still, there are conceivable situations where it can matter. For example, if a function is supposed to fill a dictionary (creating a new one if none is given), and someone passes an empty ordered dictionary instead, then the latter approach would incorrectly return an ordinary dictionary. That's not my primary reason for always using the `is None` version though. I prefer it as it is more explicit, and the fact that `or` returns one of its operands isn't intuitive to me. I like to forget about it as long as I can ;-) The extra line is not a problem; this is relatively rare.
idiomatic python, manage default arguments in functions
[ "", "python", "arguments", "default", "idioms", "" ]
While I can show an uploaded image in `list_display`, is it possible to do this on the per-model page (as in the page you get for changing a model)? A quick sample model would be: ``` class Model1(models.Model): image = models.ImageField(upload_to=directory) ``` The default admin shows the URL of the uploaded image but not the image itself. Thanks!
In addition to the answer of Michael C. O'Connor: note that since Django v1.9 (updated - tested and working all the way to Django 3.0) ``` image_tag.allow_tags = True ``` is deprecated and [you should use format\_html(), format\_html\_join(), or mark\_safe() instead](https://docs.djangoproject.com/en/1.9/ref/contrib/admin/#django.contrib.admin.ModelAdmin.list_display) So if you are storing your uploaded files in your public /directory folder, your code should look like this: ``` from django.utils.html import mark_safe class Model1(models.Model): image = models.ImageField(upload_to=directory) def image_tag(self): return mark_safe('<img src="/directory/%s" width="150" height="150" />' % (self.image)) image_tag.short_description = 'Image' ``` and in your admin.py add: ``` fields = ['image_tag'] readonly_fields = ['image_tag'] ```
Sure. In your model class add a method like: ``` def image_tag(self): from django.utils.html import escape return u'<img src="%s" />' % escape(<URL to the image>) image_tag.short_description = 'Image' image_tag.allow_tags = True ``` and in your `admin.py` add: ``` fields = ( 'image_tag', ) readonly_fields = ('image_tag',) ``` to your `ModelAdmin`. If you want to restrict the ability to edit the image field, be sure to add it to the `exclude` attribute. Note: With Django 1.8 and 'image\_tag' only in readonly\_fields it did not display. With 'image\_tag' only in fields, it gave an error of unknown field. You need it both in fields and in readonly\_fields in order to display correctly.
Django Admin Show Image from Imagefield
[ "", "python", "django", "django-models", "django-admin", "admin", "" ]
I have a table that has a column 'name' (varchar). I want to get all the rows having a name that starts with 'a', 'b', 'c', 'd', 'e', or 'f'. I've built the query as follows: `WHERE LEFT(u.name, 1)='a' OR LEFT(u.name, 1)='b' OR LEFT(u.name, 1)='c' OR LEFT(u.name, 1)='d' OR LEFT(u.name, 1)='e' OR LEFT(u.name, 1)='f'` Is there a better way? Using regular expressions, maybe? And let's say I had to do it for wider ranges (A to M); would that slow the query?
Try using `IN`: ``` WHERE LEFT(u.name, 1) IN ('a', 'b', 'c', 'd', 'e', 'f') ``` This way you can match any set of starting characters, even if they're not in sequence.
There are almost too many ways to do this :) In addition to the other answers, any of these will work: ``` u.name >= 'a' AND u.name < 'g' ``` or ``` LEFT(u.name, 1) BETWEEN 'a' AND 'f' ``` or with a regex: ``` u.name REGEXP '^[a-f].*' ```
SQL Query to get the names that start with A or B or C.. to F
[ "", "mysql", "sql", "" ]
``` UPDATE [MyDatabase].[dbo].[Device] SET nDeviceTypeID =(SELECT nDeviceTypeID FROM DeviceType WHERE sDisplayName =COALESCE (@Role,sDisplayName)) WHERE nDeviceID IN (SELECT nDeviceID FROM [MyDatabase].[dbo].[AddressList] WHERE sNetworkAddress LIKE '%' + @NetworkAddress2 + '%') ``` I have the above update statement to update a cell when a parameter is selected. However, when the parameter is set to null the update doesn't execute, which is good, but instead of sending back an error message I would prefer to keep the original value of the cell without executing any updates. Any ideas on what I am doing wrong? **EDIT 1** The network address parameter is never null, so I only have to make sure that @Role is not NULL. If @Role is NULL I would like to keep the original value of the cell.
Try this : ``` IF (@Role IS NOT NULL AND @NetworkAddress2 IS NOT NULL) BEGIN UPDATE [MyDatabase].[dbo].[Device] SET nDeviceTypeID =(SELECT nDeviceTypeID FROM DeviceType WHERE sDisplayName =COALESCE (@Role,sDisplayName)) WHERE nDeviceID IN (SELECT nDeviceID FROM [MyDatabase].[dbo].[AddressList] WHERE sNetworkAddress LIKE '%' + @NetworkAddress2 + '%') END ```
The way I solved it was by using the help from @Anu. This is how my code runs now: ``` IF(@Role IS NULL AND @Role != 'NULL') BEGIN ... ```
Keep Original value of cell if NULL is selected MS SQL Server 2008
[ "", "sql", "sql-server", "database", "sql-server-2008", "" ]
I have a `yahoo` account. Is there any Python code to send email from my account?
Yes, here is the code: ``` import smtplib fromMy = 'yourMail@yahoo.com' # "from" is a keyword in Python, so it can't be used as a variable name to = 'SomeOne@Example.com' subj='TheSubject' date='2/1/2010' message_text='Hello Or any thing you want to send' msg = "From: %s\nTo: %s\nSubject: %s\nDate: %s\n\n%s" % ( fromMy, to, subj, date, message_text ) username = str('yourMail@yahoo.com') password = str('yourPassWord') try : server = smtplib.SMTP("smtp.mail.yahoo.com",587) server.login(username,password) server.sendmail(fromMy, to,msg) server.quit() print 'ok, the email has been sent' except : print 'can\'t send the Email' ```
I racked my head (briefly) regarding using yahoo's smtp server. 465 just would not work. I decided to go the TLS route over port 587 and I was able to authenticate and send email. ``` import smtplib from email.mime.text import MIMEText SMTP_SERVER = "smtp.mail.yahoo.com" SMTP_PORT = 587 SMTP_USERNAME = "username" SMTP_PASSWORD = "password" EMAIL_FROM = "fromaddress@yahoo.com" EMAIL_TO = "toaddress@gmail.com" EMAIL_SUBJECT = "REMINDER:" co_msg = """ Hello, [username]! Just wanted to send a friendly appointment reminder for your appointment: [Company] Where: [companyAddress] Time: [appointmentTime] Company URL: [companyUrl] Change appointment?? Add Service?? change notification preference (text msg/email) """ def send_email(): msg = MIMEText(co_msg) msg['Subject'] = EMAIL_SUBJECT + "Company - Service at appointmentTime" msg['From'] = EMAIL_FROM msg['To'] = EMAIL_TO debuglevel = True mail = smtplib.SMTP(SMTP_SERVER, SMTP_PORT) mail.set_debuglevel(debuglevel) mail.starttls() mail.login(SMTP_USERNAME, SMTP_PASSWORD) mail.sendmail(EMAIL_FROM, EMAIL_TO, msg.as_string()) mail.quit() if __name__=='__main__': send_email() ```
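The network half of the answers above can't be demonstrated without real credentials, but the message-building half runs offline. A sketch in modern Python 3, with placeholder addresses (nothing is sent):

```python
from email.mime.text import MIMEText

# Construct the message exactly as the answer above does, minus the SMTP send.
msg = MIMEText("Just wanted to send a friendly appointment reminder.")
msg["Subject"] = "REMINDER: appointment"
msg["From"] = "fromaddress@yahoo.com"
msg["To"] = "toaddress@gmail.com"

# as_string() is what would be handed to smtplib's sendmail().
raw = msg.as_string()
print("Subject: REMINDER: appointment" in raw)  # True
```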
Sending e-mails using yahoo account in python
[ "", "python", "email", "smtp", "" ]
Let's say the range is: **1** ≤ **`X`** ≤ **120** This is what I have tried: ``` >>> def isPalindrome(s): ''' check if a number is a Palindrome ''' s = str(s) return s == s[::-1] >>> def generate_palindrome(minx,maxx): ''' return a list of Palindrome number in a given range ''' tmpList = [] for i in range(minx,maxx+1): if isPalindrome(i): tmpList.append(i) return tmpList >>> generate_palindrome(1,120) [1, 2, 3, 4, 5, 6, 7, 8, 9, 11, 22, 33, 44, 55, 66, 77, 88, 99, 101, 111] ``` However, this is `O(n)`. How do I improve this algorithm to make it faster? *PS. This is Python 2.7*
Your method could be: ``` palindromes = [x for x in xrange(min, max) if isPalindrome(x)] ``` The only way you can do this and have a non-linear algorithm is to generate the palindromes yourself, instead of testing. A palindrome can be generated by taking a previous palindrome and adding the same digit to the left and right side, so that is a starting point. Let's say you start at `1`: possible palindromes are obtained by adding each digit from 1 to 9 to the left and right: ``` 111 212 313 ... ``` You also have to generate the entries where every digit is equal in the range...
I found this a fun task, so I gave my rusty python skills some practise. ``` def generate_palindromes_with_length(l): ''' generate a list of palindrome numbers with len(str(palindrome)) == l ''' if l < 1: return [] if l == 1: return [x for x in range(10)] p = [] if (l % 2): half_length = (l - 1) / 2 for x in xrange(0, 10): for y in xrange(10 ** (half_length - 1), 10 ** half_length): p.append(int(str(y) + str(x) + str(y)[::-1])) else: half_length = l / 2 for x in xrange(10 ** (half_length - 1), 10 ** half_length): p.append(int(str(x) + str(x)[::-1])) p.sort() return p def generate_palindrome(minx, maxx): ''' return a list of palindrome numbers in a given range ''' min_len = len(str(minx)) max_len = len(str(maxx)) p = [] for l in xrange(min_len, max_len + 1): for x in generate_palindromes_with_length(l): if x <= maxx and x >= minx: p.append(x) p.sort() return p ``` `generate_palindromes_with_length` is the key part here. The function generates palindromes, with a given number of decimal places. It uses different strategies for odd and even numbers of decimal places. Example: If length 5 is requested, it generates palindromes with the pattern `abxba`, where `a`, `b`, and `x` is any number from 1 to 9 (plus `x` may be 0). If 4 is the requested length, the pattern is `abba`. `generate_palindrome` only needs to collect the palindromes for all needed length', and take care of the boundary. The algorithm is in O(2\*p), with p being the number of palindromes. The algorithm does work. However, as my python skills are rusty, any advice for a more elegant solution is appreciated.
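As a cross-check of the constructive approach, here is a compact sketch (Python 3 syntax) that mirrors each "half" instead of testing every number; the function name is made up, and the result can be compared directly against the brute-force filter from the question:

```python
def palindromes_in_range(minx, maxx):
    """Generate palindromes by mirroring each 'half'; no per-number testing."""
    out = []
    for length in range(1, len(str(maxx)) + 1):
        half = (length + 1) // 2
        start = 1 if length == 1 else 10 ** (half - 1)
        for head in range(start, 10 ** half):
            s = str(head)
            # Odd lengths share the middle digit; even lengths mirror fully.
            mirrored = s + (s[-2::-1] if length % 2 else s[::-1])
            p = int(mirrored)
            if minx <= p <= maxx:
                out.append(p)
    return out

print(palindromes_in_range(1, 120))
# [1, 2, 3, 4, 5, 6, 7, 8, 9, 11, 22, 33, 44, 55, 66, 77, 88, 99, 101, 111]
```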
How to generate a list of palindrome numbers within a given range?
[ "", "python", "python-2.7", "" ]
I have one of those cases where each SQL query works great, but not when I paste them together. I have two tables. One is dynamic and allows NULL values. The other helps fill the null values by providing the user with questions. I take the column names from the dynamic table and see if I can get them to match with fields in the other questions table. Currently, we have this working in PHP, but I have been trying to put it all into our PostgreSQL database. This gets all of the column names from the dynamic table and returns a comma-separated list of strings. ``` select array_to_string(array_agg(quote_literal(column_name::text)),',') from information_schema.columns where table_name='dynamic_table'; ``` It works great, and returns something like: 'notificationemail','birthdate','lastnotificationcheck','createmethod' If I paste this list directly into my next query, it works and returns rows: ``` select * FROM questions_table WHERE questioncolumn IN ('notificationemail','birthdate','lastnotificationcheck','createmethod'); ``` Now once I paste them together, I get something that runs but returns no rows: ``` select * FROM questions_table WHERE questioncolumn IN (select array_to_string(array_agg(quote_literal(column_name::text)),',') from information_schema.columns where table_name='dynamic_table'); ``` I am trying to determine how the static text field is different from the embedded SELECT inside the IN statement. It seems to make sense that it would work, but obviously, something is different between the dynamic list and the static text. EDIT: Thank you! I guess I was overthinking things. By not making it a string list and keeping it an object, it worked right away.
I think this might be what you're looking for instead: ``` select * FROM questions_table WHERE questioncolumn IN ( select column_name from information_schema.columns where table_name='dynamic_table' ); ``` [SQL Fiddle Demo](http://sqlfiddle.com/#!1/40878/3)
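The failure mode can be demonstrated with in-memory SQLite as a stand-in: `IN (subquery)` compares against every row the subquery yields, while the string-aggregated version (`group_concat` plays the role of `array_to_string(array_agg(...))` here) yields one comma-joined string that matches nothing. Table and column names are made up for the demo.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE cols (column_name TEXT);
CREATE TABLE questions (questioncolumn TEXT);
INSERT INTO cols VALUES ('birthdate'), ('notificationemail');
INSERT INTO questions VALUES ('birthdate'), ('unrelated');
""")

# Subquery returns two rows, so IN matches the 'birthdate' row.
good = conn.execute(
    "SELECT * FROM questions "
    "WHERE questioncolumn IN (SELECT column_name FROM cols)").fetchall()

# Aggregation collapses everything into ONE string: nothing matches.
bad = conn.execute(
    "SELECT * FROM questions "
    "WHERE questioncolumn IN (SELECT group_concat(column_name) FROM cols)").fetchall()

print(good, bad)  # [('birthdate',)] []
```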
``` select * FROM questions_table WHERE questioncolumn IN ( select quote_literal(column_name::text) from information_schema.columns where table_name='dynamic_table' ); ```
SQL column names and comparing them to row records in another table in PostgreSQL
[ "", "sql", "postgresql", "" ]
**UPDATED:** In the end, the solution I opted to use for clustering my large dataset was one suggested by Anony-Mousse below. That is, using ELKI's DBSCAN implementation to do my clustering rather than scikit-learn's. It can be run from the command line and with proper indexing, performs this task within a few hours. Use the GUI and small sample datasets to work out the options you want to use and then go to town. Worth looking into. Anywho, read on for a description of my original problem and some interesting discussion. I have a dataset with ~2.5 million samples, each with 35 features (floating point values) that I'm trying to cluster. I've been trying to do this with scikit-learn's implementation of DBSCAN, using the Manhattan distance metric and a value of epsilon estimated from some small random samples drawn from the data. So far, so good. (here is the snippet, for reference) ``` db = DBSCAN(eps=40, min_samples=10, metric='cityblock').fit(mydata) ``` My issue at the moment is that I easily run out of memory. (I'm currently working on a machine with 16 GB of RAM) My question is, is DBSCAN calculating the pairwise distance matrix on the fly as it runs, and that's what's gobbling up my memory? (2.5 million ^ 2) \* 8 bytes is obviously stupidly large, I would understand that. Should I not be using the `fit()` method? And more generally, is there a way around this issue, or am I generally barking up the wrong tree here? Apologies if the answer winds up being obvious. I've been puzzling over this for a few days. Thanks! Addendum: Also if anyone could explain the difference between `fit(X)` and `fit_predict(X)` to me more explicitly I'd also appreciate that--I'm afraid I just don't quite get it. Addendum #2: To be sure, I just tried this on a machine with ~550 GB of RAM and it still blew up, so I feel like DBSCAN is likely trying to make a pairwise distance matrix or something I clearly don't want it to do.
I guess now the big question is how to stop that behavior, or find other methods that might suit my needs more. Thanks for bearing with me here. Addendum #3(!): I forgot to attach the traceback, here it is, ``` Traceback (most recent call last): File "tDBSCAN.py", line 34, in <module> db = DBSCAN(eps=float(sys.argv[2]), min_samples=10, metric='cityblock').fit(mydata) File "/home/jtownsend/.local/lib/python2.6/site-packages/sklearn/base.py", line 329, in fit_predict self.fit(X) File "/home/jtownsend/.local/lib/python2.6/site-packages/sklearn/cluster/dbscan_.py", line 186, in fit **self.get_params()) File "/home/jtownsend/.local/lib/python2.6/site-packages/sklearn/cluster/dbscan_.py", line 69, in dbscan D = pairwise_distances(X, metric=metric) File "/home/jtownsend/.local/lib/python2.6/site-packages/sklearn/metrics/pairwise.py", line 651, in pairwise_distances return func(X, Y, **kwds) File "/home/jtownsend/.local/lib/python2.6/site-packages/sklearn/metrics/pairwise.py", line 237, in manhattan_distances D = np.abs(X[:, np.newaxis, :] - Y[np.newaxis, :, :]) MemoryError ```
The problem apparently is a non-standard DBSCAN implementation in `scikit-learn`. DBSCAN does not need a distance matrix. The algorithm was designed around using a database that can accelerate a `regionQuery` function, and return the neighbors within the query radius efficiently (a spatial index should support such queries in `O(log n)`). The implementation in `scikit` however, apparently, computes the full `O(n^2)` distance matrix, which comes at a cost both memory-wise and runtime-wise. So I see two choices: 1. You may want to try the DBSCAN implementation in [ELKI](https://elki-project.github.io/) instead, which when used with an R\*-tree index usually is substantially faster than a naive implementation. 2. Otherwise, you may want to *reimplement DBSCAN*, as the implementation in `scikit` apparently isn't too good. Don't be scared of that: DBSCAN is really simple to implement yourself. The trickiest part of a good DBSCAN implementation is actually the `regionQuery` function. If you can get this query fast, DBSCAN will be fast. And you can actually reuse this function for other algorithms, too. **Update:** by now, sklearn no longer computes a distance *matrix* and can, e.g., use a kd-tree index. However, because of "vectorization" it will *still* precompute the neighbors of every point, so the memory usage of sklearn for large epsilon is O(nΒ²), whereas to my understanding the version in ELKI will only use O(n) memory. So if you run out of memory, *choose a smaller epsilon* and/or try [ELKI](https://elki-project.github.io/).
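To illustrate option 2, here is a minimal sketch of DBSCAN with a naive linear-scan `region_query` (O(n) per query, O(n) memory overall, Manhattan distance as in the question). It is a teaching sketch, not a tuned implementation; the answer's point is that swapping this function for an indexed lookup is where the real speedup comes from.

```python
def region_query(data, i, eps):
    """All point indices within Manhattan distance eps of point i (linear scan)."""
    xi = data[i]
    return [j for j, xj in enumerate(data)
            if sum(abs(a - b) for a, b in zip(xi, xj)) <= eps]

def dbscan(data, eps, min_samples):
    UNVISITED, NOISE = -2, -1
    labels = [UNVISITED] * len(data)
    cluster = 0
    for i in range(len(data)):
        if labels[i] != UNVISITED:
            continue
        neighbors = region_query(data, i, eps)
        if len(neighbors) < min_samples:
            labels[i] = NOISE            # may later be adopted as a border point
            continue
        labels[i] = cluster
        seeds = list(neighbors)
        while seeds:                     # expand the cluster from core points
            j = seeds.pop()
            if labels[j] == NOISE:
                labels[j] = cluster      # border point joins the cluster
            if labels[j] != UNVISITED:
                continue
            labels[j] = cluster
            j_neighbors = region_query(data, j, eps)
            if len(j_neighbors) >= min_samples:
                seeds.extend(j_neighbors)
        cluster += 1
    return labels

# Two tight groups and one outlier (made-up toy data).
points = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10), (50, 50)]
print(dbscan(points, eps=2, min_samples=3))  # [0, 0, 0, 1, 1, 1, -1]
```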
You can do this using scikit-learn's DBSCAN with the haversine metric and ball-tree algorithm. You do not need to precompute a distance matrix. This example **[clusters over a million GPS latitude-longitude points](http://geoffboeing.com/2016/06/mapping-everywhere-ever-been/)** with DBSCAN/haversine and avoids memory usage problems: ``` df = pd.read_csv('gps.csv') coords = df.as_matrix(columns=['lat', 'lon']) db = DBSCAN(eps=eps, min_samples=ms, algorithm='ball_tree', metric='haversine').fit(np.radians(coords)) ``` Note that this specifically uses scikit-learn v0.15, as some earlier/later versions seem to require a full distance matrix to be computed, which blows up your RAM real quick. But if you use Anaconda, you can quickly set this up with: ``` conda install scikit-learn=0.15 ``` Or, create a clean virtual environment for this clustering task: ``` conda create -n clusterenv python=3.4 scikit-learn=0.15 matplotlib pandas jupyter activate clusterenv ```
scikit-learn DBSCAN memory usage
[ "", "python", "scikit-learn", "cluster-analysis", "data-mining", "dbscan", "" ]
I want to debug my plugin with pdb, but it doesn't work. I get these errors: ``` Traceback (most recent call last): File "./sublime_plugin.py", line 362, in run_ File "./useIt.py", line 14, in run for region in self.view.sel(): File "./useIt.py", line 14, in run for region in self.view.sel(): File ".\bdb.py", line 46, in trace_dispatch File ".\bdb.py", line 65, in dispatch_line bdb.BdbQuit ``` **Does anyone have an idea, or some other way to debug a Sublime plugin?**
The problem is that `sys.stdin` is not attached to anything *normally*. But, `sys.stdin` does work if you start SublimeText2 from a console: * On Mac, start the application by locating the executable in the resource bundle by entering the full path in the Terminal: ``` /Applications/Sublime\ Text\ 2.app/Contents/MacOS/Sublime\ Text\ 2 ``` * On Windows, start the application from the Windows Console: ``` "C:\Program Files\Sublime Text 2\sublime_text.exe" ``` provisional, I have no Windows Sublime Text 2 install so this command line is based on a quick Google Now the application has a console; but `sys.stdout` is still redirected to the built-in SublimeText 2 console. You want to start your debugger with the correct `stdout`, the one still connected to your console. Instead of `import pdb; pdb.set_trace()`, use: ``` import pdb, sys; pdb.Pdb(stdout=sys.__stdout__).set_trace() ``` The original console `stdout` is saved in `sys.__stdout__` and by passing that to `pdb.Pdb()` you get a fully functional `pdb` session.
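The mechanism this answer relies on can be shown in isolation (sketched in modern Python 3): reassigning `sys.stdout` redirects `print()`, while `sys.__stdout__` keeps pointing at the process's original stream, which is why passing it to `pdb.Pdb()` works.

```python
import io
import sys

# Redirect sys.stdout, the way Sublime's built-in console effectively does.
original = sys.stdout
buffer = io.StringIO()
sys.stdout = buffer
print("captured")
sys.stdout = original

print(repr(buffer.getvalue()))       # 'captured\n': print() followed the redirect
print(sys.__stdout__ is not buffer)  # True: the original handle survived
```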
The easiest way I found was to use Visual Studio. You should have the Python development tools for Visual Studio (you can download them by opening the Visual Studio Installer, clicking on more, modify and selecting Python development). To debug you need to open visual studio, select from the toolbar: Debug - Attach to process (Ctrl+Alt+P) and then find 'plugin\_host.exe' and attach. Then open your file 'plugin.py' and put a breakpoint somewhere and you're good to go.
How to debug sublime plugins during development
[ "", "python", "debugging", "sublimetext2", "sublimetext", "pdb", "" ]
I have a small Python program which will be used locally by a small group of people (<15 people). But for accountability, I want to have a simple username+password check at the start of the program (it doesn't need to be super secure). For your information, I am just a beginner and this is my first time trying it. When I searched around, I found that Python has passlib for encryption, but even after looking through it I am still not sure how to implement my scheme. So, there are a few things that I want to know. 1. How do I store the passwords of users locally? The only way I know at the moment is to create a text file and read/write from it, but that will ruin the whole purpose of encryption as people can just open the text file and read it from there. 2. What do hash & salt mean in encryption, and how do they work? (a brief and simple explanation will do.) 3. What is the recommended way to implement a username and password check? I am sorry for the stupid questions, but I will greatly appreciate it if you could answer them.
``` import getpass import pickle import hashlib from os import path def Encryption(data): return hashlib.sha512(data).hexdigest() ## First we check if the database exists. if path.isfile('database.db'): with open('database.db', 'rb') as fh: db = pickle.load(fh) ## If it doesn't, we will create one. else: ## First we create the desired variable. db = {'torxed' : Encryption('wham'), 'someoneelse' : Encryption('pass')} ## Then we open a filehandle to it. with open('database.db', 'wb') as fh: ## And then we dump the variable into the filehandle. ## This will keep the variable intact between sessions, ## meaning the next time you start your script, the variable will look the same. pickle.dump(db, fh) ## Then we ask the user for his/hers credentials. user = raw_input('Username: ') _pass = getpass.getpass('Password: ') ## If the user exists in the "db" and the decoded password ## Matches the logged in user, it's a-ok :) if user in db and db[user] == Encryption(_pass): print('You logged in') ``` # Adding more users ``` import pickle, hashlib def Encryption(data): return hashlib.sha512(data).hexdigest() with open('database.db', 'rb') as fh: db = pickle.load(fh) db['new_user'] = Encryption('password') with open('database.db', 'wb') as fh: pickle.dump(db, fh) ``` Another way would be to use `sys.argv` to get the username and password from the commandline when addings users, in that case: ``` import pickle, hashlib, sys if len(sys.argv) < 3: raise ValueError('Need two parameters, username and password') def Encryption(data): return hashlib.sha512(data).hexdigest() with open('database.db', 'rb') as fh: db = pickle.load(fh) db[sys.argv[1]] = Encryption(sys.argv[2]) with open('database.db', 'wb') as fh: pickle.dump(db, fh) ``` I should expand on this answer and explain that you should [salt passwords](https://stackoverflow.com/questions/9594125/salt-and-hash-a-password-in-python) as well, and not just store them with a SHA hash. 
Also note that passwords are strictly speaking "unsafe" when stored in memory, as there is no [SecureString](https://stackoverflow.com/questions/6483723/securestring-for-storing-in-memory-and-presenting-passwords-or-something-else) *([more](https://stackoverflow.com/questions/728164/securely-erasing-password-in-memory-python))* in Python as of writing this. But for basic purposes this answer still applies.
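As a sketch of the salted approach mentioned above (Python 3 here, unlike the Python 2 code in the answer; `hashlib.pbkdf2_hmac` and `hmac.compare_digest` are in the standard library, and the iteration count is only illustrative, not a security recommendation):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=100000):
    # A fresh random salt per user; store it next to the derived key.
    if salt is None:
        salt = os.urandom(16)
    key = hashlib.pbkdf2_hmac('sha512', password.encode('utf-8'), salt, iterations)
    return salt, key

def verify_password(password, salt, expected_key, iterations=100000):
    # Re-derive with the stored salt and compare in constant time.
    candidate = hashlib.pbkdf2_hmac('sha512', password.encode('utf-8'), salt, iterations)
    return hmac.compare_digest(candidate, expected_key)

salt, key = hash_password('wham')
ok = verify_password('wham', salt, key)
bad = verify_password('guess', salt, key)
```

The `(salt, key)` pair is what you would pickle per user instead of the bare hash.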
You can do hashing like this:

```
import hashlib

def Encryption(data):
    return hashlib.sha224(data).hexdigest()
```

When you want to save the password, call this function and save the encoded password.
Password Protection Python
[ "", "python", "encryption", "passwords", "" ]
I am stuck with a query, I can't get it to work correctly. This is what I need to do:

> Write a query that will return both the maximum and minimum average
> salary grouped by department from the employees table.

This is what I made:

```
SELECT (SELECT AVG(MIN(salary)) FROM employees GROUP BY department_id) As "Minimum Average salary",
       (SELECT AVG(MAX(salary)) FROM employees GROUP BY department_id) As "Maximum Average Salary"
FROM EMPLOYEES
```

but it keeps returning more than 1 row as the result. I can't use LIMIT 2; I'm getting an error if I query with LIMIT. I also tried the following query, but I'm getting "error: missing expression":

```
SELECT AVG((SELECT MIN(salary) FROM employees GROUP BY department_id)) As "Minimum Average salary",
       AVG((SELECT MAX(salary) FROM employees GROUP BY department_id)) As "Maximum Average Salary"
FROM EMPLOYEES
```
First of all, the reason it returns as many rows as there are records in the EMPLOYEES table is that the GROUP BY is inside your sub-queries, so it is not applied to your main query; the sub-queries are evaluated for each row of the table and the result is returned per row.

I think this is the right query for what you need:

```
SELECT AVG(min_salary) As "Minimum Average salary",
       AVG(max_salary) As "Maximum Average Salary"
FROM
(
    SELECT MIN(salary) As min_salary,
           MAX(salary) AS max_salary
    FROM EMPLOYEES
    GROUP BY department_id
)
```
No need for an inline view -- this works:

```
select min(avg(salary)), max(avg(salary))
from employees
group by dept_id;
```

<http://sqlfiddle.com/#!4/eff67/3>
Oracle SQL why query is returning me multiple results?
[ "", "sql", "oracle", "" ]
**Is it better to have a SQL condition in the JOIN clause or in the WHERE clause?** Are SQL engines optimized for either way? Does it depend on the engine?

Is it always possible to replace a condition in the JOIN clause with a condition in the WHERE clause?

Here is an example to illustrate what I mean by condition:

```
SELECT role_.name
FROM user_role
INNER JOIN user ON user_role.user_id_ = user.id
    AND user_role.user_id_ = @user_id
INNER JOIN role ON user_role.role_id = role_.id
```

vs.

```
SELECT role_.name
FROM user_role
INNER JOIN user ON user_role.user_id_ = user.id
INNER JOIN role ON user_role.role_id = role_.id
WHERE user.id = @user_id
```
A SQL condition in the JOIN clause and in the WHERE clause are equivalent if an INNER JOIN is used. If any other JOIN is used, like LEFT/RIGHT, then after matching rows based on the condition, another step occurs: the addition of OUTER ROWS, i.e. the non-matching rows. A WHERE condition simply filters out all non-matching rows. [See this thread](https://stackoverflow.com/questions/354070/sql-join-where-clause-vs-on-clause)
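To see the LEFT JOIN difference concretely, here is a sketch using Python's built-in sqlite3 module (the tables and values are invented for the demo):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
    CREATE TABLE users (id INTEGER, name TEXT);
    CREATE TABLE orders (user_id INTEGER, total INTEGER);
    INSERT INTO users VALUES (1, 'ann'), (2, 'bob');
    INSERT INTO orders VALUES (1, 50);
""")

# Condition in the ON clause: outer rows are kept, so bob appears with NULLs.
on_rows = conn.execute(
    "SELECT u.name, o.total FROM users u "
    "LEFT JOIN orders o ON o.user_id = u.id AND o.total > 10"
).fetchall()

# Same condition in WHERE: it filters the joined result set, dropping bob.
where_rows = conn.execute(
    "SELECT u.name, o.total FROM users u "
    "LEFT JOIN orders o ON o.user_id = u.id WHERE o.total > 10"
).fetchall()
```

With an INNER JOIN instead of LEFT JOIN, both queries would return the same single row.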
Having the non-key condition in the join clause is not only OK, it is preferable, especially in this query, because you can avoid some joins to other tables that are further joined to the table to which the ON clause belongs.

The WHERE clause is evaluated *after* all joins have been made - it's a filter on the result set. But by putting the condition in the join clause, you can stop the rows from being joined at the time they're being joined.

In your case it makes no difference, because you don't have any following tables, but I use this technique often to gain performance in my queries.
Is condition in the JOIN clause evil SQL
[ "", "mysql", "sql", "sql-server", "join", "" ]
I have this stored procedure:

```
ALTER PROCEDURE dbo.News_Edite
(
    @Id bigint,
    @Title nvarchar(MAX),
    @Lile nvarchar(MAX),
    @Body nvarchar(MAX),
    @Type nvarchar(20),
    @Imgurl nvarchar(MAX),
    @Date nvarchar(50)
)
AS
update tbl_news
set ne_title=@Title,
    ne_lile=@Lile,
    ne_body=@Body,
    ne_type=@Type,
    ne_imgurl=@Imgurl,
    ne_date=@Date
where ne_id=@Id
RETURN
```

but I don't want to update ne_imgurl if @Imgurl is null. What is the best way to include an input param in the **set** clause but exclude it if it is null?
You can use **isNull**, but I do not know what you want to replace the null with. For example:

```
ne_imgurl=isNull(@Imgurl, ''),
```
Use the function [ISNULL](http://msdn.microsoft.com/en-us/library/ms184325.aspx), then pass the original value as the 2nd parameter. What ISNULL does is return the first argument, unless the first argument is null, in which case it returns the second argument.

```
ALTER PROCEDURE dbo.News_Edite
(
    @Id bigint,
    @Title nvarchar(MAX),
    @Lile nvarchar(MAX),
    @Body nvarchar(MAX),
    @Type nvarchar(20),
    @Imgurl nvarchar(MAX),
    @Date nvarchar(50)
)
AS
update tbl_news
set ne_title=@Title,
    ne_lile=@Lile,
    ne_body=@Body,
    ne_type=@Type,
    ne_imgurl=ISNULL(@Imgurl, ne_imgurl),
    ne_date=@Date
where ne_id=@Id
RETURN
```
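The keep-old-value-on-NULL pattern can be sanity-checked outside SQL Server too; here is a sketch with Python's built-in sqlite3, where COALESCE plays the role that ISNULL plays in T-SQL (table and values invented for the demo):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE tbl_news (ne_id INTEGER, ne_imgurl TEXT)")
conn.execute("INSERT INTO tbl_news VALUES (1, 'old.png')")

def update_img(conn, ne_id, imgurl):
    # COALESCE(?, ne_imgurl) keeps the existing value when imgurl is None.
    conn.execute(
        "UPDATE tbl_news SET ne_imgurl = COALESCE(?, ne_imgurl) WHERE ne_id = ?",
        (imgurl, ne_id),
    )

update_img(conn, 1, None)       # NULL parameter: column unchanged
kept = conn.execute("SELECT ne_imgurl FROM tbl_news WHERE ne_id = 1").fetchone()[0]

update_img(conn, 1, 'new.png')  # non-NULL parameter: column updated
changed = conn.execute("SELECT ne_imgurl FROM tbl_news WHERE ne_id = 1").fetchone()[0]
```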
don't update null input variable
[ "", "sql", "sql-server", "" ]
Does Python have a built-in function that can find the best value satisfying some condition (specified by a function)? For example, something instead of this code:

```
def argmin(seq, fn):
    best = seq[0]; best_score = fn(best)
    for x in seq:
        x_score = fn(x)
        if x_score < best_score:
            best, best_score = x, x_score
    return best
```
I presume by 'best' you mean highest, in which case the answer is simple - [`max()`](http://docs.python.org/3.3/library/functions.html#max). It takes a `key` argument, which would be your function.

```
max(data, key=score)
```

Naturally, if 'best' means lowest, as DSM points out, then you just want [`min()`](http://docs.python.org/3.3/library/functions.html#min) instead; it does what it says on the tin.
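For instance, with a hypothetical scoring function, `min`/`max` with `key` replaces the hand-written argmin loop from the question:

```python
def score(x):
    return abs(x - 10)  # hypothetical "cost": distance from 10

data = [3, 14, 9, 25]

best = min(data, key=score)   # element with the lowest score
worst = max(data, key=score)  # element with the highest score
```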
In general, you want the [reduce](http://docs.python.org/2/library/functions.html#reduce) function, but as [@Lattyware said](https://stackoverflow.com/a/16363749/783412) earlier, for your specific example `max` will suffice.
Does exist built-in function in Python to find best value that satisfies some condition?
[ "", "python", "python-2.7", "python-3.x", "" ]
I have 5 tables in my database: *Category1, Category2, Category3, Category4, Category5*.

I have a user that might or might not be found in these tables. If he is found, I want to DELETE him from the table. I could do that with 5 queries like this:

```
//search if the user is already subscribed in every one of the categories
$stmt = $db->prepare("Select * FROM category1 WHERE deviceUDID = :deviceUDID");
$stmt->execute(array(':deviceUDID' => $deviceUDID));
if($rows_found = $stmt->rowCount())
{
    $stmt = $db->prepare("DELETE FROM category1 WHERE deviceUDID =:deviceUDID");
    $stmt->execute(array(':deviceUDID' => $deviceUDID));
}
```

Then for category2:

```
//search if the user is already subscribed in every one of the categories
$stmt = $db->prepare("Select * FROM category2 WHERE deviceUDID = :deviceUDID");
$stmt->execute(array(':deviceUDID' => $deviceUDID));
if($rows_found = $stmt->rowCount())
{
    $stmt = $db->prepare("DELETE FROM category1 WHERE deviceUDID =:deviceUDID");
    $stmt->execute(array(':deviceUDID' => $deviceUDID));
}
```

and so on for the rest of the categories. Would it be possible to search all these categories with 1 query to make it faster? I can't see how something like that could work. Any ideas?

## EDIT (BONUS PART)

---

Would it be easier, faster or more efficient to just make a DELETE query every time, even if the user is not there, and let the query simply affect no rows? Or should I first check every table and make a DELETE query only if the user is found? Which one would be more effective?
The [DELETE clause](http://dev.mysql.com/doc/refman/5.0/fr/delete.html?ff=nopfpls) allows many tables in one request; the tables to delete from are listed before FROM. Try:

```
DELETE c1, c2, c3
FROM category1 c1, category2 c2, category3 c3
WHERE c1.deviceUDID = :deviceUDID
  AND c2.deviceUDID = :deviceUDID
  AND c3.deviceUDID = :deviceUDID
  ...
```
If you are worried about atomicity, enclosing the 5 deletes in a transaction covers that. Alternatively, mysql should by now support foreign keys and cascaded deletes, so if you set up the foreign key relationship with your "user" table, you may have nothing more to do.
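The cascaded-delete route can be sketched with Python's built-in sqlite3 module (table names invented; note that sqlite needs foreign keys switched on explicitly):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("PRAGMA foreign_keys = ON")  # off by default in sqlite
conn.executescript("""
    CREATE TABLE users (deviceUDID TEXT PRIMARY KEY);
    CREATE TABLE category1 (
        deviceUDID TEXT REFERENCES users(deviceUDID) ON DELETE CASCADE
    );
    INSERT INTO users VALUES ('abc');
    INSERT INTO category1 VALUES ('abc');
""")

# One delete on the parent table removes the child rows automatically.
conn.execute("DELETE FROM users WHERE deviceUDID = 'abc'")
remaining = conn.execute("SELECT COUNT(*) FROM category1").fetchone()[0]
```

With one such foreign key per category table, a single delete against the parent table replaces all five per-category deletes.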
Can i delete an entry on multiple tables in one SQL query?
[ "", "mysql", "sql", "database", "" ]
I'm trying to figure out how to write a query that will return a table of 61 records, one date per record, covering the range from 30 days before to 30 days after the current date.
This is a useful function I use, taken from here: [Explode Dates Between Dates, check and adjust parameter](https://stackoverflow.com/questions/15497767/explode-dates-between-dates-check-and-adjust-parameter)

Just send it Date-30 and Date+30:

```
CREATE FUNCTION [dbo].[ExplodeDates] (@startdate DATETIME, @enddate DATETIME)
RETURNS TABLE AS
RETURN
(
    WITH N0 AS (SELECT 1 AS n UNION ALL SELECT 1)
        ,N1 AS (SELECT 1 AS n FROM N0 t1, N0 t2)
        ,N2 AS (SELECT 1 AS n FROM N1 t1, N1 t2)
        ,N3 AS (SELECT 1 AS n FROM N2 t1, N2 t2)
        ,N4 AS (SELECT 1 AS n FROM N3 t1, N3 t2)
        ,N5 AS (SELECT 1 AS n FROM N4 t1, N4 t2)
        ,N6 AS (SELECT 1 AS n FROM N5 t1, N5 t2)
        ,nums AS (SELECT ROW_NUMBER() OVER (ORDER BY (SELECT 1)) AS num FROM N6)
    SELECT DATEADD(day, num-1, @startdate) AS thedate
    FROM nums
    WHERE num <= DATEDIFF(day, @startdate, @enddate) + 1
);
GO
```

If you don't want the function, you can also simply use it as a query, declaring `@startdate = @myDate - 30` and `@enddate = @myDate + 30`.
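As a quick sanity check on the expected row count, the same plus/minus 30-day window can be generated in application code; a Python sketch (a side note, not a replacement for the SQL):

```python
from datetime import date, timedelta

def window(center, days=30):
    """Dates from `days` before to `days` after `center`, inclusive."""
    return [center + timedelta(days=d) for d in range(-days, days + 1)]

dates = window(date(2013, 5, 15))  # 30 + 1 + 30 = 61 dates
```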
The simplest, and probably most efficient, way in SQL Server to get a list of 61 dates is to use the system table Master.dbo.spt\_values:

```
SELECT [Date] = DATEADD(DAY, number - 30, CAST(CURRENT_TIMESTAMP AS DATE))
FROM Master..spt_values
WHERE Type = 'P'
AND Number <= 60;
```

**[Example on SQL Fiddle](http://www.sqlfiddle.com/#!3/d41d8/13070)**

---

**EDIT**

If you are concerned about using undocumented system tables, then this will do the same thing (again with no looping):

```
WITH T AS
(
    SELECT Number = ROW_NUMBER() OVER(ORDER BY Object_ID)
    FROM sys.all_objects
)
SELECT [Date] = DATEADD(DAY, number - 30, CAST(CURRENT_TIMESTAMP AS DATE))
FROM T
WHERE Number <= 60;
```

**[Example on SQL Fiddle](http://sqlfiddle.com/#!3/d41d8/13074)**

Extensive testing has been done **[here](http://www.sqlperformance.com/2013/01/t-sql-queries/generate-a-set-2)** on the merits of various methods of generating sequences of numbers. My preferred option would always be your own table (e.g. dbo.numbers, or in this case a calendar table).
Query to return +- 30 days from a specific date
[ "", "sql", "sql-server", "t-sql", "" ]
Here is a sample query:

```
SELECT customerName from customers
WHERE customerNUMBER IN (
    SELECT customerNumber FROM orders
    WHERE orderNumber IN (
        SELECT orderNumber FROM orderdetails
        INNER JOIN products on orderdetails.productCode = products.productCode
        where products.buyPrice > 100
    )
);
```

I believe the tables are self-explanatory. Is there a better way to do this? SQL noob here.
My suggestion would be to change this to [JOIN](http://en.wikipedia.org/wiki/Join_%28SQL%29) syntax instead of all of the WHERE/IN clause filtering:

```
select c.customerName
from customer c
inner join orders o on c.customerNumber = o.customerNumber
inner join orderdetails od on o.orderNumber = od.orderNumber
inner join products p on od.productCode = p.productCode
where p.buyPrice > 100;
```

If needed, you might have to add a DISTINCT to the query in the event there are duplicates.
Use normal inner joins throughout, and toss in a GROUP BY or a DISTINCT clause if you need to eliminate dups:

```
select customers.*
from customers
join orders on ...
join orderdetails on ...
join products on ...
group by customers.customerNumber
```
How to avoid chaining WHERE .. IN statements in SQL?
[ "", "sql", "" ]
I'm having problems finding the answer here, on Google or in the docs... I need to do a case-insensitive select against an array type. So if:

```
value = {"Foo","bar","bAz"}
```

I need

```
SELECT value FROM table WHERE 'foo' = ANY(value)
```

to match. I've tried lots of combinations of lower() with no success. `ILIKE` instead of `=` seems to work, but I've always been nervous about `LIKE` - is that the best way?
One alternative not mentioned is to install [the `citext` extension](http://www.postgresql.org/docs/current/static/citext.html) that comes with PostgreSQL 8.4+ and use an array of `citext`:

```
regress=# CREATE EXTENSION citext;
regress=# SELECT 'foo' = ANY( '{"Foo","bar","bAz"}'::citext[] );
 ?column?
----------
 t
(1 row)
```

If you want to be strictly correct about this and avoid extensions, you have to do some [pretty ugly subqueries](https://stackoverflow.com/a/8767450/398670) because Pg doesn't have many rich array operations, in particular no functional mapping operations. Something like:

```
SELECT array_agg(lower(($1)[n])) FROM generate_subscripts($1,1) n;
```

... where $1 is the array parameter. In your case I think you can cheat a bit because you don't care about preserving the array's order, so you can do something like:

```
SELECT 'foo' IN (SELECT lower(x) FROM unnest('{"Foo","bar","bAz"}'::text[]) x);
```
This seems hackish to me, but I think it should work:

```
SELECT value FROM table WHERE 'foo' = ANY(lower(value::text)::text[])
```

`ilike` could have issues if your arrays can have `_` or `%`.

Note that what you are doing is converting the text array to a single text string, converting it to lower case, and then back to an array. This should be safe. If this is not sufficient you could use various combinations of string_to_array and array_to_string, but I think the standard textual representations should be safer.

**Update** building on the subquery solution below, one option would be a simple function:

```
CREATE OR REPLACE FUNCTION lower(text[]) RETURNS text[]
LANGUAGE SQL IMMUTABLE AS
$$
SELECT array_agg(lower(value)) FROM unnest($1) value;
$$;
```

Then you could do:

```
SELECT value FROM table WHERE 'foo' = ANY(lower(value));
```

This might actually be the best approach. You could also create GIN indexes on the output of the function if you want.
PostgreSQL case insensitive SELECT on array
[ "", "sql", "postgresql", "pattern-matching", "case-insensitive", "" ]
I'm quite new to python programming and I come from a Unix/Linux administration and shell scripting background. I'm trying to write a program in python which accepts command line arguments and, depending on their type (int, str), performs a certain action. However, in my case the input is always being treated as a string. Please advise.

```
#!/usr/bin/python
import os,sys,string
os.system('clear')

# function definition
def fun1(a):
    it = type(1)
    st = type('strg')
    if type(a) == it:
        c = a ** 3
        print ("Cube of the give int value %d is %d" % (a,c))
    elif type(a) == st:
        b = a+'.'
        c = b * 3
        print ("Since given input is string %s ,the concatenated output is %s" % (a,c))

a=sys.argv[1]
fun1(a)
```
First of all, the input will always be treated as a string. You could use [argparse](http://docs.python.org/2/howto/argparse.html):

```
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("cube", type=int, help="Cube of the give int value")
args = parser.parse_args()
answer = args.cube**3
print answer
```

Running it:

```
python prog.py 4
64
```

All integers have an `__int__` attribute, so you could use that attribute to differentiate between int and string:

```
if hasattr(intvalue, '__int__'):
    print "Integer"
```
Command line arguments to programs are always given as strings (this is not only true for python but for at least all C-related languages). This means that when you give a number like "1" as an argument, you need to explicitly convert it into an integer. In your case, you could try converting it and assume it is a string if this does not work:

```
try:
    v = int(a)
    # ... do int related stuff
except ValueError:
    pass  # ... do string related stuff
```

This *is* bad design though; it would be better to let the user decide if he wants the argument to be interpreted as a string - after all, every int given by the user is also a valid string. You could, for example, use something like argparse and specify two different arguments, given with "-i" for int and "-s" for string.
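Putting that try/except idea together with the cube/concatenate logic from the question gives a small sketch (Python 3 here, so `print` usage would differ from the original; the sample inputs are illustrative):

```python
def fun1(a):
    """Cube the value if it parses as an int, otherwise treat it as a string."""
    try:
        v = int(a)
    except ValueError:
        b = a + '.'
        return b * 3   # string branch, e.g. 'abc' -> 'abc.abc.abc.'
    return v ** 3      # int branch, e.g. '3' -> 27

cubed = fun1('3')
joined = fun1('abc')
```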
Type of the positional parameters in python
[ "", "python", "python-2.7", "" ]
In SQL Server, I have AD users with the 'sysadmin' role. When I execute the query

```
SELECT IS_SRVROLEMEMBER ('SYSADMIN', 'Domain\agarwals')
```

the output is `0`. But when I remove the username and execute

```
SELECT IS_SRVROLEMEMBER ('SYSADMIN')
```

the output is `1`.

![enter image description here](https://i.stack.imgur.com/mhgCh.png)

Has anyone faced this before? Can someone please help me understand why the same check from different queries gives different output.
Finally I got the answer. When I execute the query

> SELECT IS_SRVROLEMEMBER ('SYSADMIN', 'Domain\agarwals')

on my local machine (not connected to the domain), the output is 0, but when I connect to the VPN to reach the domain, the output is 1. Whereas the output for

> SELECT IS_SRVROLEMEMBER ('SYSADMIN')

is always 1, no matter whether my machine is connected to the domain or not. Weird but true.
That is because 'Domain\agarwals' is not part of the sysadmin role. If no value is specified for the login, the result is based on the current execution context. So it seems you are a member of the sysadmin role yourself, and hence the output is 1.
Multiple query for same check gives different result - SQL
[ "", "sql", "sql-server", "sql-server-2008-r2", "" ]
I have an old shell script which includes the sed command below. The source data ($Tmp) is an HTML table.

```
sed '/<table border/,/table>/d' $Tmp > $Out
```

Can someone help me convert this command to a Python script? I really can't figure out how to do it with regular expressions. Many thanks.
The script copies all lines from the input file to the output file, unless it finds a line containing `<table border`; then it deletes all lines until it finds `/table>` and continues writing all further lines. So one possibility would be:

```
with open('in') as inf, open('out', 'w') as outf:
    while True:
        line = inf.readline()
        if '<table border' in line:
            while True:
                line = inf.readline()
                if not line or '/table>' in line:
                    line = inf.readline()
                    break
        if not line:
            break
        outf.write(line)
```
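Since the question mentions regular expressions, the whole sed range can also be expressed as a single `re.sub`. This is a sketch that assumes the start and end markers really do occur, in order, in the text (just as sed's `/start/,/end/d` range does):

```python
import re

html = """before
<table border=1>
<tr><td>row</td></tr>
</table>
after
"""

# Delete every line from the one containing '<table border' through the
# one containing 'table>', like sed '/<table border/,/table>/d'.
# (?s) lets .*? cross newlines; [^\n]* extends the match to whole lines.
cleaned = re.sub(r'(?s)[^\n]*<table border.*?table>[^\n]*\n?', '', html)
```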
Here's a simple implementation. Briefly, it opens the file and iterates line by line, printing each line to the output. If a line matches `"<table border"`, the delete flag is set to True and the following lines aren't printed until a line matches `"table>"`.

```
import sys

f = open(sys.argv[1])
delete = False
for line in f:
    if delete == False:
        if "<table border" in line:
            delete = True
    if delete == False:
        print line,
    if delete == True:
        if "table>" in line:
            delete = False
```
How to convert this sed command to Python script?
[ "", "python", "regex", "sed", "" ]
I have the following T-SQL query:

```
select count(CaseId),
       (SELECT DATEDIFF(day,CreateDate,LastActivityDate)) AS DiffDate
from VW_Case_Analysis
where CaseStatus = 'C'
and LastActivityDate between '2013-4-1 00:00:00.000' and '2013-4-30 23:59:59.000'
Group By DiffDate
```

I am getting the following error:

> Msg 207, Level 16, State 1, Line 15
> Invalid column name 'DiffDate'.

The idea behind this query is that I want to get the number of cases solved (closed), grouped by how many days it took to close them. Example:

```
Days
1 = 3 cases
2 = 50 cases
3 = 20 cases
```

How can I achieve this?
You need to use the whole expression in the `GROUP BY` clause, or just wrap the whole statement in a subquery and do the grouping in the outer statement.

The reason why you can't use an `ALIAS` in a `GROUP BY` clause at the same level of the `SELECT` statement is that `GROUP BY` is executed before the `SELECT` clause in which the `ALIAS` is created.

This is the SQL order of operation:

* FROM clause
* WHERE clause
* GROUP BY clause
* HAVING clause
* SELECT clause
* ORDER BY clause

Try this query:

```
SELECT COUNT(CaseId), DiffDate
FROM
(
    select CaseId,
           DATEDIFF(day,CreateDate,LastActivityDate) AS DiffDate
    from VW_Case_Analysis
    where CaseStatus = 'C'
    and LastActivityDate between '2013-4-1 00:00:00.000' and '2013-4-30 23:59:59.000'
) sub
Group By DiffDate
```
You cannot use an alias in the GROUP BY clause. Either use a derived table or, in your case, simply remove the 2nd SELECT:

```
select count(CaseId),
       DATEDIFF(day,CreateDate,LastActivityDate) AS DiffDate
from VW_Case_Analysis
where CaseStatus = 'C'
and LastActivityDate between '2013-4-1 00:00:00.000' and '2013-4-30 23:59:59.000'
Group By DATEDIFF(day,CreateDate,LastActivityDate)
```
Group By Alias name
[ "", "sql", "sql-server", "" ]
Hi, I have a dictionary:

```
phone = {"first":100,"second":200,"third":[10,12,5,38],"fourth":400}
```

Let's say I want to remove the 12 and 5 from the "phone" dictionary. Is there a way to do that using `del`? I know how to do this using .remove():

```
phone["third"].remove(12)
phone["third"].remove(5)
```

but I was wondering if it is possible to do it using `del`. Thank you.

EDIT: For all those replies concentrating on "del uses index, remove uses the exact value", I am redefining my question: I want to delete the indexes 1 and 2 in the list representing the third key-value item in the "phone" dictionary. How can I do that?
You have to do this by index rather than value:

```
>>> phone = {"first":100,"second":200,"third":[10,12,5,38],"fourth":400}
>>> del(phone["third"][1:3])
>>> phone
{'second': 200, 'fourth': 400, 'third': [10, 38], 'first': 100}
```

This deletes the elements at positions 1 and 2 in the list.
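If the positions to drop are not contiguous (so a single slice won't do), one safe pattern is to delete the highest index first, so earlier deletions don't shift the indices you still need (a small sketch):

```python
phone = {"first": 100, "second": 200, "third": [10, 12, 5, 38], "fourth": 400}

# Delete indexes 2 and 1, highest first, so index 1 is still valid
# after index 2 has been removed.
for idx in sorted([2, 1], reverse=True):
    del phone["third"][idx]
```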
You can use `del()`, or you can re-create the list by filtering it:

```
>>> phone = {"first":100,"second":200,"third":[10,12,5,38],"fourth":400}
>>> phone['third'] = [x for x in phone['third'] if x not in (12,5)]
>>> phone
{'second': 200, 'fourth': 400, 'third': [10, 38], 'first': 100}
```
Deleting a few list items inside of dictionary
[ "", "python", "list", "dictionary", "del", "" ]
I have written this query to display the last name, department number, and department name of all employees who work in Toronto:

```
select last_name, job_id, department_id, department_name
from employees e
join departments d on d.department_id=e.department_id
join locations l on d.location_id=l.location_id and l.city='Toronto';
```

I am getting this error:

> ORA-00918: column ambiguously defined
Change the first line to:

```
select e.last_name, e.job_id, e.department_id, d.department_name
```
When a column exists in both tables participating in a join, you need to prefix the column name with an alias to specify which table's column you would like. In your join, `department_id` is shared by both tables; you can specify which column you want by using `d.department_id` in the selected columns list.

```
select last_name,
       job_id,
       d.department_id, --specify which table you want this ambiguous column from
       department_name
from employees e
join departments d on d.department_id=e.department_id
join locations l on d.location_id=l.location_id and l.city='Toronto';
```
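The same failure mode is easy to reproduce outside Oracle; here is a sketch using Python's built-in sqlite3 (tables invented for the demo), where the unqualified column raises an "ambiguous column name" error and the aliased version works:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
    CREATE TABLE employees (department_id INTEGER, last_name TEXT);
    CREATE TABLE departments (department_id INTEGER, department_name TEXT);
    INSERT INTO employees VALUES (1, 'Smith');
    INSERT INTO departments VALUES (1, 'Sales');
""")

try:
    conn.execute(
        "SELECT department_id FROM employees e "
        "JOIN departments d ON d.department_id = e.department_id"
    )
    ambiguous = False
except sqlite3.OperationalError:
    ambiguous = True   # sqlite: 'ambiguous column name'; Oracle: ORA-00918

rows = conn.execute(
    "SELECT e.department_id, d.department_name FROM employees e "
    "JOIN departments d ON d.department_id = e.department_id"
).fetchall()
```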
Join query in oracle 10g
[ "", "sql", "oracle", "oracle10g", "" ]
I have trouble getting `pyodbc` to work. I have the `unixodbc`, `unixodbc-dev`, `odbc-postgresql`, and `pyodbc` packages installed on my Linux Mint 14. I am losing hope of finding a solution on my own; any help appreciated. See details below:

**Running:**

```
>>> import pyodbc
>>> conn = pyodbc.connect("DRIVER={PostgreSQL};SERVER=localhost;DATABASE=test;USER=openerp;OPTION=3;")
```

**Gives me:**

```
>>> pyodbc.Error: ('IM002', '[IM002] [unixODBC][Driver Manager]Data source name not found, and no default driver specified (0) (SQLDriverConnect)')
```

**# odbcinst -j gives**:

```
unixODBC 2.2.14
DRIVERS............: /etc/odbcinst.ini
SYSTEM DATA SOURCES: /etc/odbc.ini
FILE DATA SOURCES..: /etc/ODBCDataSources
USER DATA SOURCES..: /home/atman/.odbc.ini
SQLULEN Size.......: 4
SQLLEN Size........: 4
SQLSETPOSIROW Size.: 2
```

Which makes me think there is a `unixodbc` configuration problem. Here are my `unixodbc` config file contents:

**File** `/etc/odbcinst.ini`**:**

```
[PostgreSQL ANSI]
Description = PostgreSQL ODBC driver (ANSI version)
Driver = psqlodbca.so
Setup = libodbcpsqlS.so
Debug = 0
CommLog = 1
UsageCount = 2

[PostgreSQL Unicode]
Description = PostgreSQL ODBC driver (Unicode version)
Driver = psqlodbcw.so
Setup = libodbcpsqlS.so
Debug = 0
CommLog = 1
UsageCount = 2
```

**File** `/etc/odbc.ini`**:**

```
[PostgreSQL test]
Description = PostgreSQL
Driver = PostgreSQL ANSI
Trace = No
TraceFile = /tmp/psqlodbc.log
Database = template1
Servername = localhost
UserName =
Password =
Port =
ReadOnly = Yes
RowVersioning = No
ShowSystemTables = No
ShowOidColumn = No
FakeOidIndex = No
ConnSettings =
```

**File** `~/.odbc.ini`**:**

```
[DEFAULT]
Driver = PostgreSQL

[PostgreSQL]
Description = Test to Postgres
Driver = PostgreSQL
Trace = Yes
TraceFile = sql.log
Database = nick
Servername = localhost
UserName =
Password =
Port = 5432
Protocol = 6.4
ReadOnly = No
RowVersioning = No
ShowSystemTables = No
ShowOidColumn = No
FakeOidIndex = No
ConnSettings =
```
I believe the answer to your problem is that in your ~/.odbc.ini file you are saying to use driver `PostgreSQL` - but you have not defined that driver in your /etc/odbcinst.ini file. Try changing `PostgreSQL` to `PostgreSQL ANSI` or `PostgreSQL Unicode` (both of which are defined in /etc/odbcinst.ini).
For me, the issue was the actual location of my odbc.ini and odbcinst.ini files. On many systems, the install location of these files is under /etc/. However, in my case, these files were located under /usr/local/etc/.

This can be determined by typing `odbcinst -j`, which yielded:

```
unixODBC 2.3.0
DRIVERS............: /usr/local/etc/odbcinst.ini
SYSTEM DATA SOURCES: /usr/local/etc/odbc.ini
FILE DATA SOURCES..: /usr/local/etc/ODBCDataSources
USER DATA SOURCES..: /usr/local/etc/odbc.ini
SQLULEN Size.......: 8
SQLLEN Size........: 8
SQLSETPOSIROW Size.: 8
```

My odbc.ini files already existed in /etc, so the solution was to copy them over from /etc/ to /usr/local/etc/:

`cp /etc/odbc.ini /etc/odbcinst.ini /usr/local/etc/`

Edit: It's also worth noting that the path outputted by the `odbcinst -j` command can change depending on whether you use `sudo` or not.
Pyodbc - "Data source name not found, and no default driver specified"
[ "", "python", "postgresql", "python-2.7", "pyodbc", "unixodbc", "" ]
So this question is similar to one I've asked before, but slightly different. I'm looking at data for clients who are admitted to and discharged from a program. For each admit and discharge they have an assessment done and are scored on it, and sometimes they are admitted and discharged multiple times during a time period. I need to be able to pair each client's admit score with their following discharge score, so I can look at all clients who improved a certain amount from admit to discharge, for each of their admits and discharges.

This is a dummy sample of how my data results are formatted right now:

![Before](https://i.stack.imgur.com/9Lei3.png)

And this is how I'd ideally like it formatted:

![enter image description here](https://i.stack.imgur.com/s2U0V.png)

But I'd take any pointer in the right direction, or similar formatting help that would allow me to compare all of the instances of admit and discharge scores for all the clients. Thanks!
In order to get the result, you can apply both the UNPIVOT and the PIVOT functions. The UNPIVOT will convert your multiple columns of `date` and `score` into rows; then you can pivot those rows back into columns. The unpivot syntax will be similar to this:

```
select person, casenumber,
  ScoreType+'_'+col col,
  value, rn
from
(
  select person, casenumber,
    convert(varchar(10), date, 101) date,
    cast(score as varchar(10)) score,
    scoreType,
    row_number() over(partition by casenumber, scoretype
                      order by case scoretype when 'Admit' then 1 end, date) rn
  from yourtable
) d
unpivot
(
  value
  for col in (date, score)
) unpiv
```

See [SQL Fiddle with Demo](http://www.sqlfiddle.com/#!3/1d9c0/3). This gives a result:

```
| PERSON | CASENUMBER | COL             | VALUE      | RN |
-----------------------------------------------------------
| Jon    | 3412       | Discharge_date  | 01/03/2013 | 1  |
| Jon    | 3412       | Discharge_score | 12         | 1  |
| Al     | 3452       | Admit_date      | 05/16/2013 | 1  |
| Al     | 3452       | Admit_score     | 15         | 1  |
| Al     | 3452       | Discharge_date  | 08/01/2013 | 1  |
| Al     | 3452       | Discharge_score | 13         | 1  |
```

As you can see, this query also creates the new columns to then pivot. So the final code will be:

```
select person, casenumber,
  Admit_Date, Admit_Score,
  Discharge_Date, Discharge_Score
from
(
  select person, casenumber,
    ScoreType+'_'+col col,
    value, rn
  from
  (
    select person, casenumber,
      convert(varchar(10), date, 101) date,
      cast(score as varchar(10)) score,
      scoreType,
      row_number() over(partition by casenumber, scoretype
                        order by case scoretype when 'Admit' then 1 end, date) rn
    from yourtable
  ) d
  unpivot
  (
    value
    for col in (date, score)
  ) unpiv
) src
pivot
(
  max(value)
  for col in (Admit_Date, Admit_Score, Discharge_Date, Discharge_Score)
) piv;
```

See [SQL Fiddle with Demo](http://www.sqlfiddle.com/#!3/1d9c0/2).
This gives a result:

```
| PERSON | CASENUMBER | ADMIT_DATE | ADMIT_SCORE | DISCHARGE_DATE | DISCHARGE_SCORE |
-------------------------------------------------------------------------------------
| Al     | 3452       | 05/16/2013 | 15          | 08/01/2013     | 13              |
| Cindy  | 6578       | 01/02/2013 | 17          | 03/04/2013     | 14              |
| Cindy  | 6578       | 03/04/2013 | 14          | 03/18/2013     | 12              |
| Jon    | 3412       | (null)     | (null)      | 01/03/2013     | 12              |
| Kevin  | 9868       | 01/18/2013 | 19          | 03/02/2013     | 15              |
| Kevin  | 9868       | 03/02/2013 | 15          | (null)         | (null)          |
| Pete   | 4765       | 02/06/2013 | 15          | (null)         | (null)          |
| Susan  | 5421       | 04/06/2013 | 19          | 05/07/2013     | 15              |
```
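If the reshaping ever has to happen in application code rather than SQL, the same admit/discharge pairing can be sketched in Python 3 (the row layout here is assumed from the screenshots, and only a few sample rows are shown):

```python
from collections import defaultdict
from itertools import zip_longest

# (person, case_number, score_type, date, score) -- assumed layout
rows = [
    ('Jon', 3412, 'Discharge', '2013-01-03', 12),
    ('Al',  3452, 'Admit',     '2013-05-16', 15),
    ('Al',  3452, 'Discharge', '2013-08-01', 13),
]

# Group the (date, score) events per case and per score type.
by_case = defaultdict(lambda: {'Admit': [], 'Discharge': []})
for person, case, kind, day, score in rows:
    by_case[(person, case)][kind].append((day, score))

# Pair the n-th admit with the n-th discharge; missing partners become None.
paired = []
for (person, case), kinds in sorted(by_case.items()):
    admits = sorted(kinds['Admit'])
    discharges = sorted(kinds['Discharge'])
    for admit, discharge in zip_longest(admits, discharges):
        paired.append((person, case, admit, discharge))
```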
```
SELECT ad.person,
       ad.CaseNumber,
       ad.Date as AdmitScoreDate,
       ad.Score as AdmitScore,
       dis.date as DischargeScoreDate,
       dis.Score as DischargeScore
From yourTable ad, yourTable dis
WHERE ad.person=dis.person
  and ad.ScoreType='Admit'
  and dis.ScoreType='Discharge';
```
Making Row Entries Pair Horizontally in SQL
[ "", "sql", "sql-server-2008", "" ]
I am building a crawler in python and I have the list of `href` values from the page. Now I have a list of file extensions to download, like `list = ['zip','rar','pdf','mp3']`. How can I save the files from those URLs to a local directory using python?

EDIT:

```
import urllib2
from bs4 import BeautifulSoup

url = "http://www.example.com/downlaod"
site = urllib2.urlopen(url)
html = site.read()
soup = BeautifulSoup(html)
list_urls = soup.find_all('a')
print list_urls[6]
```
Going by your posted example:

```
import urllib2
from bs4 import BeautifulSoup

url = "http://www.example.com/downlaod"
site = urllib2.urlopen(url)
html = site.read()
soup = BeautifulSoup(html)
list_urls = soup.find_all('a')
print list_urls[6]
```

So, the URL you want to fetch next is presumably `list_urls[6]['href']`.

The first trick is that this might be a relative URL rather than absolute. So:

```
import urlparse

newurl = list_urls[6]['href']
absurl = urlparse.urljoin(site.url, newurl)
```

Also, you want to only fetch the file if it has the right extension, so:

```
if not absurl.endswith(extensions):
    return # or break or whatever
```

But once you've decided what URL you want to download, it's no harder than your initial fetch:

```
import os

page = urllib2.urlopen(absurl)
html = page.read()
path = urlparse.urlparse(absurl).path
name = os.path.basename(path)
with open(name, 'wb') as f:
    f.write(html)
```

That's mostly it. There are a few things you might want to add, but if so, you have to add them all manually. For example:

* Look for a Content-disposition header with a suggested filename to use in place of the URL's basename.
* `copyfile` from `page` to `f` instead of `read`ing the whole thing into memory and then `write`ing it out.
* Deal with existing files with the same name.
* …

But that's the basics.
You can use the Python requests library, as you asked in the question: <http://www.python-requests.org> You can save a file from a URL like this: ``` import requests url='https://i.stack.imgur.com/0LJdh.jpg' data=requests.get(url).content filename="image.jpg" with open(filename, 'wb') as f: f.write(data) ```
How can I download all types of files in Python with the requests library
[ "", "python", "request", "" ]
I'm supposed to write a program that ends up such as this: ``` * * * * * * * ``` I have the code written for a regular one, but I'm not sure how to incorporate spaces into it. ``` def triangle(i, t = 0): if i == 0: return 0 else: print ' ' * (t + 1) + '*' * (i * 2 - 1) return triangle(i - 1, t + 1) ``` Advice?
Try: ``` def triangle(i, t = 0): if i == 0: print (t+1) *' '+ '*' else: print ' ' * (t + 1)+ '*' + ' ' * (i * 2 - 1) + '*' triangle(i - 1, t + 1) triangle(5) ``` This code prints: ``` * * * * * * * * * * * ```
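A Python 3 adaptation of the same recursion that collects the rows as strings instead of printing them, so the output can be inspected programmatically (the helper name `triangle_rows` is made up):

```python
def triangle_rows(i, t=0, rows=None):
    """Build the hollow-triangle rows recursively, mirroring the answer above."""
    if rows is None:
        rows = []
    if i == 0:
        rows.append(' ' * (t + 1) + '*')                          # the apex
    else:
        rows.append(' ' * (t + 1) + '*' + ' ' * (i * 2 - 1) + '*')  # two edges
        triangle_rows(i - 1, t + 1, rows)
    return rows

for row in triangle_rows(3):
    print(row)
```

For `triangle_rows(3)` this yields the four rows ` *     *`, `  *   *`, `   * *`, `    *` — the same narrowing hollow shape as the printed version.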
Building on @kharazi's answer (because this reminds me of my early GWBasic programming which is what got me excited about programming as a kid): ``` def triangle(i, leftShape='*', rightShape='*', bottomShape='*', spaceShape=' ', t = 0): if i <= 0: print ((t+1)*spaceShape)+bottomShape+((t+1)*spaceShape) else: print (spaceShape*(t + 1))+leftShape+(spaceShape*(i*2-1))+rightShape+(spaceShape*(t + 1)) triangle(i-1, leftShape, rightShape, bottomShape, spaceShape, t+1) if __name__== '__main__': triangle(3) triangle(3, '\\', '/') triangle(3, '\\', '/', '~') triangle(5, 'β•šβ•—', '╔╝', 'β•šβ•¦β•') triangle(5, 'β•šβ•—', '╔╝', 'β•šβ•¦β•', '|') triangle(-2) ``` Produces the following output: ``` triangle(3) * * * * * * * triangle(3, '\\', '/') \ / \ / \ / * triangle(3, '\\', '/', '~') \ / \ / \ / ~ triangle(5, 'β•šβ•—', '╔╝', 'β•šβ•¦β•') β•šβ•— ╔╝ β•šβ•— ╔╝ β•šβ•— ╔╝ β•šβ•— ╔╝ β•šβ•— ╔╝ β•šβ•¦β• triangle(5, 'β•šβ•—', '╔╝', 'β•šβ•¦β•', '|') |β•šβ•—|||||||||╔╝| ||β•šβ•—|||||||╔╝|| |||β•šβ•—|||||╔╝||| ||||β•šβ•—|||╔╝|||| |||||β•šβ•—|╔╝||||| ||||||β•šβ•¦β•|||||| triangle(-2) * ```
Python Spaced Triangle
[ "", "python", "algorithm", "" ]
I'm running into a bit of an issue with using oracle object types. My member function is compiling with errors. Here's the type: ``` CREATE OR REPLACE TYPE t_Customer as OBJECT (custID NUMBER ,fname varchar2(50) ,lname varchar2(50) ,MEMBER FUNCTION getHighest RETURN INTEGER ); CREATE OR REPLACE TYPE t_Order AS OBJECT (OrderID NUMBER, custID REF t_Customer, quantity INTEGER); CREATE TABLE Order_Tbl of t_Order; CREATE TABLE Customer_Tbl of t_Customer; CREATE OR REPLACE TYPE BODY t_Customer AS MEMBER FUNCTION getHighest RETURN INTEGER IS v_max integer; BEGIN SELECT Max(Order.quantity) INTO v_max FROM Order WHERE Order.CustID = self.custID; return v_max; end; ``` My SELECT INTO is not working. It's telling me I have invalid identifiers. If I want to write a member function to return the highest quantity order for a customer, do I query the table of Order, or can I use the object reference? I also tried creating views to no avail. This is the easiest I could simplify it, I also am going to need to write some other methods, but none of the ones where I need to `SELECT INTO` work as of now. Errors are `SELF.custid INVALID IDENTIFIER` and `Component 'custid' must be declared.` Thanks EDIT: **`SELECT INTO` pl/sql queries to access object types and their views must be aliased. After adding aliases, my problems were solved. Thanks for the help though - I posted the solution and a couple of examples.**
You must use an alias to access object fields in Oracle (11g). Examples: **Wrong** `SELECT Order.quantity FROM Customers` **Wrong** `SELECT Customers.Order.quantity FROM Customers` **Right** `SELECT cc.Order.quantity FROM Customers cc` I assumed that the second option in the list would work, but it doesn't. You **have** to use an alias. Working example as per request: ``` DROP Table tblCustomer; DROP Table tblOrders; CREATE OR REPLACE TYPE t_Customer as OBJECT (custID varchar2(20) ,lastname varchar2(50) ,firstname varchar2(50) ,member function getHighest RETURN NUMBER ); / CREATE OR REPLACE TYPE t_Orders AS OBJECT (OrderID Number ,Customer REF t_Customer ,quantity NUMBER ); / CREATE TABLE tblOrders of t_orders; CREATE TABLE tblCustomer of t_Customer; CREATE OR REPLACE VIEW OrderOV(ord) AS SELECT t_orders(OrderID, Customer, quantity) FROM tblOrders; / CREATE OR REPLACE VIEW CustomerOV(cust) AS SELECT t_customer(custID, lastname, firstname) FROM tblCustomer; / CREATE OR REPLACE TYPE BODY t_Customer AS MEMBER Function getHighest RETURN NUMBER IS v_maxval NUMBER; BEGIN SELECT max(orderOV.ord.quantity) INTO v_maxval FROM OrderOV WHERE OrderOV.ord.custID = self.CustID; RETURN v_maxval; END; end; / ``` The line inside the function body can be switched out for the correct aliased version. `SELECT Max(e.ord.quantity) INTO v_maxval FROM OrderOV e WHERE e.ord.customer.custID = self.custID;` You can paste this whole script to test; it compiles when you switch the line in question with the correct line I listed.
First thing is that you shouldn't name a table 'Order' - it's an Oracle reserved word. Secondly, your member function is named getHighestOrder in the type spec and getHighest in the type body. Thirdly, you're missing an 'end;' at the end of the t\_Customer type body. Fourthly, in your sql you are joining on custID, which is inconsistently typed. What you should do is use DEREF, then compare the result to self. The code below shows these fixes. ``` CREATE OR REPLACE TYPE t_Customer as OBJECT (custID NUMBER ,fname varchar2(50) ,lname varchar2(50) ,MEMBER FUNCTION getHighest RETURN INTEGER ); CREATE OR REPLACE TYPE t_Order AS OBJECT (OrderID NUMBER, cust REF t_Customer, quantity INTEGER); CREATE TABLE OrderA of t_Order; CREATE TABLE Customer of t_Customer; CREATE OR REPLACE TYPE BODY t_Customer AS MEMBER FUNCTION getHighest RETURN INTEGER IS v_max integer; BEGIN SELECT Max(OrderA.quantity) INTO v_max FROM OrderA WHERE DEREF(OrderA.cust) = self; return v_max; end; end; ``` This should be close to what you need. Use the code below to test. ``` declare cust t_customer := t_customer (1, 'John','Smith'); begin insert into customer values (cust); insert into orderA (OrderID, cust, quantity) select 10, ref(c) , 7 from customer c where custID = 1; insert into orderA (OrderID, cust, quantity) select 11, ref(c) , 15 from customer c where custID = 1; dbms_output.put_line(cust.getHighest); end; / ```
Trouble with SELECT INTO In Oracle OBJECT TYPE Member Function
[ "", "sql", "oracle", "plsql", "object-type", "" ]
Let's say there is a file that has lines of numbers and comments like: ``` #comments 12 #this is number 2.4 #this is float ``` Read the file and append the digits to the list. I'm trying to get just the digits, but somehow it appends the #this is number and #this is float.
You could use `split`: ``` >>> 'foo #comment'.split('#', 1)[0] 'foo ' >>> 'foo comment'.split('#', 1)[0] 'foo comment' ```
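A runnable Python 3 sketch of this split-based approach applied to the question's file format (sample lines are inlined here instead of being read from a real file):

```python
lines = [
    "#comments",
    "12 #this is number",
    "2.4 #this is float",
]

numbers = []
for line in lines:
    head = line.split('#', 1)[0].strip()  # drop the comment, keep what precedes it
    if head:                              # skip pure-comment and blank lines
        numbers.append(float(head))

print(numbers)  # -> [12.0, 2.4]
```

Reading from an actual file only changes the source of `lines` (e.g. iterating over the open file object); the per-line logic stays the same.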
With such a simple case, **you do not have to use the more complex and slower machinery of regular expressions** (`re` module). **`str.split()` is your friend**: ``` output = [] with open('somefile.txt') as f: for line in f: parts = line.split('#', 1) # Maximum 1 split, on comments try: output.append(float(parts[0])) # The single, or pre-comment part is added except ValueError: # Beginning is not float-like: happens for "# comment", " # comment", etc. pass # No number found ``` This automatically handles **all the possible syntaxes for floats** (`1.1e2`, `nan`, `-inf`, `3`, etc.). It works because `float()` is quite powerful: it **handles trailing spaces and newlines** (by ignoring them). This is also **quite efficient**, because a `try` that does not fail is fast (faster than an explicit test, usually). This also handles comments found in the middle of the file. **If you only have a pure comment at the *beginning* of the file**, we can simplify the code and use the fact that each line is guaranteed to have a number: ``` output = [] with open('somefile.txt') as f: next(f) # Skips the first, comment line for line in f: output.append(float(line.split('#', 1)[0])) # The single or pre-comment part is guaranteed to be a float representation ``` I don't think that there is any explicit approach which is much simpler than this (beyond calculating possibly too many line parts with `split('#')` instead). That said, an [implicit approach](https://stackoverflow.com/a/16270413/42973) can be considered, like that of abathur, where `eval(line)` replaces the whole `float(…)` part; however, in this case, the code does not show that floats are expected, and as the [Zen of Python](http://docs.python.org/2/glossary.html#term-zen-of-python) says, "Explicit is better than implicit.", so I do not recommend to use the `eval()` approach, unless it is for a one-shot, quick and dirty script.
Ignoring a comment line in file
[ "", "python", "" ]
I have a string vector `data` containing items that I want to insert into a table named `foos`. It's possible that some of the elements in `data` already exist in the table, so I must watch out for those. The solution I'm using starts by transforming the `data` vector into virtual table `old_and_new`; it then builds virtual table `old` which contains the elements which are already present in `foos`; then, it constructs virtual table `new` with the elements which are really new. Finally, it inserts the new elements in table `foos`. ``` WITH old_and_new AS (SELECT unnest ($data :: text[]) AS foo), old AS (SELECT foo FROM foos INNER JOIN old_and_new USING (foo)), new AS (SELECT * FROM old_and_new EXCEPT SELECT * FROM old) INSERT INTO foos (foo) SELECT foo FROM new ``` This works fine in a non-concurrent setting, but fails if concurrent threads try to insert the same new element at the same time. I know I can solve this by setting the isolation level to `serializable`, but that's very heavy-handed. Is there some other way I can solve this problem? If only there was a way to tell PostgreSQL that it was safe to ignore `INSERT` errors...
Whatever your course of action is ([@Denis](https://stackoverflow.com/a/16360750/939860) gave you quite a few options), this rewritten `INSERT` command will be *much* faster: ``` INSERT INTO foos (foo) SELECT n.foo FROM unnest ($data::text[]) AS n(foo) LEFT JOIN foos o USING (foo) WHERE o.foo IS NULL ``` It also leaves a *much* smaller time frame for a possible race condition. In fact, the time frame should be so small that unique violations should only be popping up under heavy concurrent load or with huge arrays. ### Dupes in the array? Except if your problem is *built-in*. Do you have duplicates in the input array itself? In this case, transaction isolation is not going to help you. The enemy is within! Consider this example / solution: ``` INSERT INTO foos (foo) SELECT n.foo FROM (SELECT DISTINCT foo FROM unnest('{foo,bar,foo,baz}'::text[]) AS foo) n LEFT JOIN foos o USING (foo) WHERE o.foo IS NULL ``` I use `DISTINCT` in the subquery to eliminate the "sleeper agents", a.k.a. duplicates. People tend to forget that the dupes may come *within* the import data. ### Full automation This function is one way to deal with concurrency for good. If a `UNIQUE_VIOLATION` occurs, the `INSERT` is just retried. The newly present rows are excluded from the new attempt automatically. It does **not** take care of the opposite problem, that a row might have been deleted concurrently - this would not get re-inserted. One might argue that this outcome is OK, since such a `DELETE` happened concurrently. If you want to prevent this, make use of `SELECT ... FOR SHARE` to protect rows from concurrent `DELETE`. ``` CREATE OR REPLACE FUNCTION f_insert_array(_data text[], OUT ins_ct int) AS $func$ BEGIN LOOP BEGIN INSERT INTO foos (foo) SELECT n.foo FROM (SELECT DISTINCT foo FROM unnest(_data) AS foo) n LEFT JOIN foos o USING (foo) WHERE o.foo IS NULL; GET DIAGNOSTICS ins_ct = ROW_COUNT; RETURN; EXCEPTION WHEN UNIQUE_VIOLATION THEN -- foos.foo has a UNIQUE constraint RAISE NOTICE 'It actually happened!'; -- hardly ever happens END; END LOOP; END $func$ LANGUAGE plpgsql; ``` I made the function return the count of inserted rows, which is completely optional. [**-> SQLfiddle demo**](http://www.sqlfiddle.com/#!12/0abc8/2)
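The core `LEFT JOIN`-filtered insert (minus the PostgreSQL-specific `unnest` and exception handling) can be illustrated with SQLite through Python's `sqlite3`; the staging table `incoming` and the sample data are made up for the demo:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE foos (foo TEXT UNIQUE)")
conn.execute("INSERT INTO foos VALUES ('bar')")  # one value already present

data = ['foo', 'bar', 'foo', 'baz']  # note the in-array duplicate

# Stage the incoming values, then insert only those not already in foos,
# deduplicating the input with DISTINCT -- the same shape as the query above.
conn.execute("CREATE TABLE incoming (foo TEXT)")
conn.executemany("INSERT INTO incoming VALUES (?)", [(v,) for v in data])
conn.execute("""
    INSERT INTO foos (foo)
    SELECT DISTINCT n.foo
    FROM incoming n
    LEFT JOIN foos o ON o.foo = n.foo
    WHERE o.foo IS NULL
""")

rows = sorted(r[0] for r in conn.execute("SELECT foo FROM foos"))
print(rows)  # -> ['bar', 'baz', 'foo']
```

The pre-existing `'bar'` and the duplicate `'foo'` are both filtered out before insertion, so the `UNIQUE` constraint is never violated in the single-threaded case; the retry loop in the function above exists only for the concurrent race.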
> Is there some other way I can solve this problem? There are plenty, but none are a panacea... You can't lock for inserts like you can do a `select for update`, since the rows don't exist yet. You *can* lock the entire table, but that's even more heavy-handed than serializing your transactions. You can use advisory locks, but be super wary about deadlocks. Sort new keys so as to obtain the locks in a consistent, predictable order. (Someone more knowledgeable with PG's source code will hopefully chime in, but I'm guessing that the predicate locks used in the serializable isolation level amount to doing precisely that.) In pure SQL you could also use a DO statement to loop through the rows one by one, and trap the errors as they occur: * <http://www.postgresql.org/docs/9.2/static/sql-do.html> * <http://www.postgresql.org/docs/9.2/static/plpgsql-control-structures.html#PLPGSQL-ERROR-TRAPPING> Similarly, you could create a convoluted upsert function and call it once per piece of data... If you're building $data at the app level, you could run the inserts one by one and ignore errors. And I'm sure I forgot some additional options...
Ignoring errors in concurrent insertions
[ "", "sql", "postgresql", "concurrency", "duplicates", "sql-insert", "" ]
I want to take a screenshot of this page: <http://books.google.de/books?id=gikDAAAAMBAJ&pg=PA1&img=1&w=2500> or save the image that it outputs, but I can't find a way. With wget/curl I get an "unavailable" error, and the same with other tools like webkit2png/wkhtmltoimage/wkhtmltopng. Is there a clean way to do it with Python or from the command line? Best regards!
Sometimes you need extra HTTP headers, such as User-Agent, to get downloads to work. In Python 2.7, you can: ``` import urllib2 request = urllib2.Request( r'http://books.google.de/books?id=gikDAAAAMBAJ&pg=PA1&img=1&w=2500', headers={'User-Agent':'Mozilla/5.0 (X11; U; Linux i686) Gecko/20071127 firefox/2.0.0.11'}) page = urllib2.urlopen(request) with open('somefile.png','wb') as f: f.write(page.read()) ``` Or you can look at the params for adding HTTP headers in wget or curl.
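The same header-setting idea in Python 3's `urllib` — here we only build the request object and check the header, without making any network call:

```python
from urllib.request import Request

url = 'http://books.google.de/books?id=gikDAAAAMBAJ&pg=PA1&img=1&w=2500'
ua = 'Mozilla/5.0 (X11; U; Linux i686) Gecko/20071127 firefox/2.0.0.11'
req = Request(url, headers={'User-Agent': ua})

# urllib stores header names capitalized internally, e.g. 'User-agent'.
print(req.get_header('User-agent'))
```

Passing `req` to `urllib.request.urlopen` then performs the download with the spoofed User-Agent, just like the Python 2.7 snippet above.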
You can use ghost.py if you like. <https://github.com/jeanphix/Ghost.py> Here is an example of how to use it. ``` from ghost import Ghost ghost = Ghost(wait_timeout=4) ghost.open('http://www.google.com') ghost.capture_to('screen_shot.png') ``` The last line saves the image in your current directory. Hope this helps
Take a screenshot of a website from the command line or with Python
[ "", "python", "command-line", "web", "screenshot", "" ]