| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
Folks - I have two lists
```
list1=['a','b']
list2=['y','z']
```
I would like to send the variables to a function like below:
```
associate_address(list1[0],list2[0])
associate_address(list1[1],list2[1])
```
my script:
```
for l in list1:
    for i in list2:
        conn.associate_address(i,l)
```
I receive the below output:
```
conn.associate_address(a,y)
conn.associate_address(a,z)
```
I would like it to look like this:
```
conn.associate_address(a,y)
conn.associate_address(b,z)
``` | Use the [`zip`](http://docs.python.org/2/library/functions.html#zip) function, like this:
```
list1=['a','b']
list2=['y','z']
for i, j in zip(list1, list2):
    print(i, j)
```
Output:
```
('a', 'y')
('b', 'z')
``` | Why do you suppose this is?
```
>>> for x in [1,2]:
... for y in ['a','b']:
... print x,y
...
1 a
1 b
2 a
2 b
```
Nested loops will be performed for each iteration in their parent loop. Think about truth tables:
```
p q
0 0
0 1
1 0
1 1
```
Or combinations:
```
Choose an element from a set of two elements.
2 C 1 = 2
Choose one element from each set, where each set contains two elements.
(2 C 1) * (2 C 1) = 4
```
Let's say you have a list of 10 elements. Iterating over it with a `for` loop will take 10 iterations. If you have another list of 5 elements, iterating over it with a `for` loop will take 5 iterations. Now, if you nest these two loops, you will have to perform 50 iterations to cover every possible combination of the elements of each list.
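As a quick sketch of that count (the list contents here are arbitrary placeholders):

```python
# Contrast nested iteration (every combination) with pairwise iteration.
list_a = list(range(10))
list_b = list(range(5))

# Nested loops visit the full cross product: 10 * 5 combinations.
nested = [(x, y) for x in list_a for y in list_b]
print(len(nested))  # 50

# zip pairs elements positionally and stops at the shorter list.
pairwise = list(zip(list_a, list_b))
print(len(pairwise))  # 5
```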
You have many options to solve this.
```
# use tuples to describe your pairs
lst = [('a','y'), ('b','z')]
for pair in lst:
    conn.associate_address(pair[0], pair[1])

# use a dictionary to create a key-value relationship
dct = {'a':'y', 'b':'z'}
for key in dct:
    conn.associate_address(key, dct[key])

# use zip to combine pairwise elements in your lists
lst1, lst2 = ['a', 'b'], ['y', 'z']
for p, q in zip(lst1, lst2):
    conn.associate_address(p, q)

# use an index instead, and sub-index your lists
lst1, lst2 = ['a', 'b'], ['y', 'z']
for i in range(len(lst1)):
    conn.associate_address(lst1[i], lst2[i])
``` | nested for loops in python with lists | [
"",
"python",
"loops",
"nested",
""
] |
I've hit a wall and needed some help or advice to get me through.
I want certain strings to always be appended to the end of a list.
So we have
```
List1 = ["sony","cant recall","samsung","dont know","apple","no answer", "toshiba"]
```
Next we have another list
```
List2 = ["dont know", "cant recall","no answer"]
```
Here is what I've developed so far. The script basically checks whether each word in List 1 is in List 2; if found, that particular string should be moved from its current location in the list to the end. But all I can do for now is find the string and its index. I don't know how to move and append the found string to the end of the list.
```
for p, item in enumerate(list1):
    for i, element in enumerate(list2):
        if item == element:
            print item, p
```
Thanks! | You can do your algorithm like:
```
list1 = [x for x in list1 if not x in list2] + [x for x in list1 if x in list2]
```
which will result in:
```
['sony', 'samsung', 'apple', 'toshiba', 'cant recall', 'dont know', 'no answer']
``` | ```
lst_result = set(lst1) | set(lst2)
```
Removes duplicates from the lists as a side effect. Not sure if you want it, but looks like it's assumed.
To keep the ordering:
```
lst1 = ["sony","cant recall","samsung","dont know","apple", "toshiba"]
lst2 = ["dont know", "cant recall","no answer"]
stripped = list(filter(lambda x: x not in lst2,lst1))
lst_result = stripped + lst2
``` | Python - Always append certain strings to the end of a list | [
"",
"python",
"list",
""
] |
I want to create a RSS Feed generator. The generator is supposed to read data from my database and generate RSS feeds for them. I want to write a script that would automatically generate the `xml` file for validation.
So if a typical `xml` file for RSS feed looks like this:
```
<item>
    <title>Entry Title</title>
    <link>Link to the entry</link>
    <guid>http://example.com/item/123</guid>
    <pubDate>Sat, 9 Jan 2010 16:23:41 GMT</pubDate>
    <description><![CDATA[ This is the description. ]]></description>
</item>
```
I want the script to replace automatically the field between the `<item>` and the `</item>` tag.
Similarly for all the tags. The value for the tags will be fetched from the database. So I will query for it. My app has been designed in Django.
I am looking for suggestions as to how to do this in Python. I am also open to any other alternatives if my idea is vague. | Since this is Python, it's good to use `PyRSS2Gen`. It's easy to use and generates the XML really nicely. <https://pypi.python.org/pypi/PyRSS2Gen> | What have you tried so far?
A very basic approach would be to use the [Python DB API](http://docs.python.org/2/library/sqlite3.html#module-sqlite3) to query the database and perform some simple formatting using [`str.format`](http://docs.python.org/2/library/stdtypes.html#str.format) (I type directly in SO -- so beware of typos):
```
>>> RSS_HEADER = """
<whatever><you><need>
"""
>>> RSS_FOOTER = """
</need></you></whatever>
"""
>>> RSS_ITEM_TEMPLATE = """
<item>
    <title>{0}</title>
    <link>{1}</link>
</item>
"""
>>> print RSS_HEADER
>>> for row in c.execute('SELECT title, link FROM MyThings ORDER BY TheDate'):
...     print RSS_ITEM_TEMPLATE.format(*row)
>>> print RSS_FOOTER
``` | How to dynamically generate XML file for RSS Feed? | [
"",
"python",
"rss",
""
] |
I'm new to mysql so I don't know yet how to make queries joining multiple tables so I need some help making this query.
So I have these tables:
Products
```
| id | category | description | brand |
|:-----------|------------:|:------------:|:------------:|
| 1          |    desktops | Hp example   | HP           |
| 2          |     laptops | asus example | ASUS         |
```
Stores
```
| id | location | Brands(physical stores) |
|:-----------|------------:|:-----------------------:|
| 1 | liverpool| currys |
| 2 | london | novatech |
| 3 | bristol | novatech |
```
products\_stores
```
| id_product | id_store | price |
|:-----------|------------:|:-------:|
| 1 | 2 | 700 |
| 2 | 3 | 400 |
| 2 | 1 | 300 |
```
So, I want a query to get data and organize it like this (supposing I want all data):
```
| category | description | brand | store | location | price |
|:------------|-------------:|:------------:|:------------:|:----------:|:---------:|
| desktops     | Hp example   | HP           | novatech     | london     | 700       |
| laptops      | asus example | ASUS         | novatech     | bristol    | 400       |
| laptops      | asus example | ASUS         | currys       | liverpool  | 300       |
``` | ```
select products.category, products.description, products.brand,
       stores.brands, stores.location, products_stores.price
from products_stores
inner join stores on stores.id = products_stores.id_store
inner join products on products.id = products_stores.id_product
```
[sql fiddle](http://sqlfiddle.com/#!2/a1ae2/2) | Simply use [joins](http://dev.mysql.com/doc/refman/5.0/en/join.html) (more particularly `INNER JOIN`), and make sure that the `store` column isn't called `Brands(physical stores)` though. If it's called `brands`, then simply change it in the query:
```
SELECT
    pr.category,
    pr.description,
    pr.brand,
    st.store,
    st.location,
    ps.price
FROM
    `products_stores` as `ps`
INNER JOIN
    `products` as `pr`
ON
    pr.id = ps.id_product
INNER JOIN
    `stores` as `st`
ON
    st.id = ps.id_store
``` | Multiple table query - SQL | [
"",
"mysql",
"sql",
""
] |
In a list of list of dicts:
```
A = [
[{'x': 1, 'y': 0}, {'x': 2, 'y': 3}, {'x': 3, 'y': 4}, {'x': 4, 'y': 7}],
[{'x': 1, 'y': 0}, {'x': 2, 'y': 2}, {'x': 3, 'y': 13}, {'x': 4, 'y': 0}],
[{'x': 1, 'y': 20}, {'x': 2, 'y': 4}, {'x': 3, 'y': 0}, {'x': 4, 'y': 8}]
]
```
I need to retrieve the highest 'y' values from each of the list of dicts...so the resulting list would contain:
```
Z = [(4, 7), (3,13), (1,20)]
```
In A, the 'x' is the key of each dict while 'y' is the value of each dict.
Any ideas? Thank you. | `max` accepts an optional `key` parameter.
```
A = [
[{'x': 1, 'y': 0}, {'x': 2, 'y': 3}, {'x': 3, 'y': 4}, {'x': 4, 'y': 7}],
[{'x': 1, 'y': 0}, {'x': 2, 'y': 2}, {'x': 3, 'y': 13}, {'x': 4, 'y': 0}],
[{'x': 1, 'y': 20}, {'x': 2, 'y': 4}, {'x': 3, 'y': 0}, {'x': 4, 'y': 8}]
]
Z = []
for a in A:
    d = max(a, key=lambda d: d['y'])
    Z.append((d['x'], d['y']))
print Z
```
**UPDATE**
suggested by – J.F. Sebastian:
```
from operator import itemgetter
Z = [itemgetter(*'xy')(max(lst, key=itemgetter('y'))) for lst in A]
``` | I'd use `itemgetter` and `max`'s `key` argument:
```
from operator import itemgetter
pair_getter = itemgetter('x', 'y')
[pair_getter(max(d, key=itemgetter('y'))) for d in A]
``` | Select highest value from python list of dicts | [
"",
"python",
"list",
"numpy",
""
] |
I'm trying to insert a python variable into a MySQL table within a python script but it is not working. Here is my code
```
add_results = ("INSERT INTO account_cancel_predictions"
               "(account_id,21_day_probability,flagged)"
               "Values(%(account_id)s,%(21_day_probability)s,%(flagged)s)")
data_result = {
    'account_id': result[1,0],
    '21_day_probability': result[1,1],
    'flagged': result[1,2]
}
cursor.execute(add_results,data_result)
cnx.commit()
cursor.close()
cnx.close()
```
This gets the error
```
ProgrammingError: Failed processing pyformat-parameters; 'MySQLConverter' object has no attribute '_float64_to_mysql'
```
However, when I replace the variable names `result[1,0]`, `result[1,1]`, and `result[1,2]` with their actual numerical values it does work. I suspect python is passing the actual variable names rather than the values they hold. How do I fix this? | Assuming you are using `mysql.connector` (I think you are), define your own converter class:
```
class NumpyMySQLConverter(mysql.connector.conversion.MySQLConverter):
    """ A mysql.connector Converter that handles Numpy types """

    def _float32_to_mysql(self, value):
        return float(value)

    def _float64_to_mysql(self, value):
        return float(value)

    def _int32_to_mysql(self, value):
        return int(value)

    def _int64_to_mysql(self, value):
        return int(value)

config = {
    'user'    : 'user',
    'host'    : 'localhost',
    'password': 'xxx',
    'database': 'db1'
}
conn = mysql.connector.connect(**config)
conn.set_converter_class(NumpyMySQLConverter)
``` | One of your passed values could be of type `numpy.float64` which is not recognized by the MySQL connector. Cast it to a genuine python `float` on populating the dict. | MySQL Connector/Python - insert python variable to MySQL table | [
"",
"python",
"mysql",
"insert",
"mysql-connector-python",
""
] |
I have a table which contains, apart from other, such fields: `id integer, status_id integer, add_date date`.
I would like to execute a query similar to this:
`update table set status_id = new_status_id where status_id = old_status_id`
but one that would only update a given percent of values, say 50%. Moreover, the distribution of the updated rows for each date should be similar; I want half rows with `date = 23.06.2013` updated and half not. | ```
update table
set status_id = new_status_id
where
    status_id = old_status_id
    and random() < 0.5
``` | This query will give you `id` of the rows, you want to update:
```
SELECT *
FROM
  (SELECT id,
          count(id) OVER (PARTITION BY add_date) cnt,
          row_number() OVER (PARTITION BY add_date ORDER BY id) rn
   FROM table
   WHERE status_id = old_status_id) sub
WHERE rn <= cnt * 0.5 -- your percentage
-- WHERE rn <= cnt * 0.5 + random() -- another (better) version.
-- Will update at random if there is only one row
``` | How to update only a certain percent of matching values | [
"",
"sql",
"postgresql",
"sql-update",
""
] |
Is there any efficient way, using a Python module like `PyWin32`, to interact with already existing native OS dialog boxes like 'Save As' boxes?
I tried searching on Google but no help.
**EDIT:**
1: The Save As dialog box is triggered when user clicks on a Save As dialog box on a web application.
2: Any suggestions are welcome for handling any native OS dialog boxes which are already triggered, using Python. (Need not be specific to Selenium WebDriver; I am looking for a generic suggestion.)
(When I was posting the question I thought that 'interacting with a dialog box' would implicitly mean an existing one, since if I am able to create one then surely I can interact with it, as it is under my program's control. After reading the first 2 answers I realized I was not explicitly clear. That's why the EDIT.)
Thanks | While looking for a possible solution for this I came across several solutions on SO and otherwise.
Some of them were using `AutoIT`, or editing the browser's profile to make it store the file directly without a prompt.
I found all these solutions too specific: you can overcome the issue for the Save As dialog by editing the browser profile, but if you later need to handle some other window then you are stuck.
Using `AutoIT` is overkill; it directly collides with the fact that I chose `Python` to do this task. (I mean, `Python` is itself so powerful that depending on some other tool is a strict NO NO for any Pythonist.)
So, after a long search, here is a generic solution to this problem. It serves not only anyone looking to handle native OS dialog boxes like 'Save As' or 'File Upload' while automating a web application with Selenium WebDriver, but also anyone who wants to interact with a specific window using only `Python` APIs.
This solution makes use of the `Win32gui` and `SendKeys` modules of `Python`.
I will explain first a generic method to get hold of any window desired then a small code addition which will also make this usable while automating a web application using Selenium Webdriver.
**Generic Solution**::
```
import win32gui
import re
import SendKeys

class WindowFinder:
    """Class to find and make focus on a particular Native OS dialog/Window """

    def __init__(self):
        self._handle = None

    def find_window(self, class_name, window_name=None):
        """Pass a window class name & window name directly if known to get the window """
        self._handle = win32gui.FindWindow(class_name, window_name)

    def _window_enum_callback(self, hwnd, wildcard):
        """Callback func which checks each open window and matches the name of window using reg ex"""
        if re.match(wildcard, str(win32gui.GetWindowText(hwnd))) is not None:
            self._handle = hwnd

    def find_window_wildcard(self, wildcard):
        """This function takes a string as input and calls EnumWindows to enumerate through all open windows """
        self._handle = None
        win32gui.EnumWindows(self._window_enum_callback, wildcard)

    def set_foreground(self):
        """Get the focus on the desired open window"""
        win32gui.SetForegroundWindow(self._handle)

win = WindowFinder()
win.find_window_wildcard(".*Save As.*")
win.set_foreground()

path = "D:\\File.txt"  # Path of the file you want to Save
ent = "{ENTER}"        # Enter key stroke

SendKeys.SendKeys(path)  # Use SendKeys to send path string to Save As dialog
SendKeys.SendKeys(ent)   # Use SendKeys to send ENTER key stroke to Save As dialog
```
To use this code you provide a string with the name of the window you want, which in this case is 'Save As'. Similarly, you can provide any name and get that window focused.
Once you have the focus of the desired window then you can use `SendKeys` module to send key strokes to the window which in this case includes sending file path where you want to save the file and `ENTER`.
**Specific to Selenium Webdriver::**
The above specified code segment can be used to handle native OS dialog boxes which are triggered through a web application during the automation using `Selenium Webdriver` with the addition of little bit of code.
The issue you will face, which I faced while using this code, is that once your automation code clicks on any `Web Element` which triggers a native OS dialog window, control will get stuck at that point, waiting for an action on the native OS dialog. So basically you are stuck at this point.
The work around is to generate a new `thread` using `Python` `threading` module and use it to click on the `Web Element` to trigger the native OS dialog box and your parent thread will be moving on normally to find the window using the code I showed above.
```
# Assume that at this point you are on the page where you need to click on a
# Web Element to trigger the native OS window/dialog box

def _action_on_trigger_element(_element):
    _element.click()

trigger_element = driver.find_element_by_id('ID of the Web Element which triggers the window')

# Thread is created here to call the private func to click on the trigger element
th = threading.Thread(target=_action_on_trigger_element, args=[trigger_element])
th.start()  # Thread starts execution here
time.sleep(1)  # Simple thread synchronization handles this case

# Call the WindowFinder class
win = WindowFinder()
win.find_window_wildcard(".*Save As.*")
win.set_foreground()

path = "D:\\File.txt"  # Path of the file you want to Save
ent = "{ENTER}"        # Enter key stroke

SendKeys.SendKeys(path)  # Use SendKeys to send path string to Save As dialog
SendKeys.SendKeys(ent)   # Use SendKeys to send ENTER key stroke to Save As dialog

# At this point the native OS window has been interacted with and closed after sending ENTER.
# Go forward with whatever your automation code is doing after this point
```
**NOTE::**
When using the above code in automating a web application, check the name of the window you want to find and pass that to `find_window_wildcard()`. The names of windows are browser dependent. E.g. the window which is triggered when you click on an element to upload a file is called 'File Upload' in `Firefox` and 'Open' in `Chrome`.
Uses `Python 2.7`.
I hope this will help any one who is looking for a similar solution whether to use it in any generic form or in automating a web application.
**EDIT:**
If you are trying to run your code through command line arguments, then use the thread to find the window with `Win32gui`, and use the original program thread to click on the element (which is clicked here using the thread). The reason is that the urllib library will throw an error when creating a new connection from the thread.
References::
[SO Question](https://stackoverflow.com/questions/2090464/python-window-activation)
[SendKeys](https://pypi.python.org/pypi/SendKeys)
[Win32gui](http://docs.activestate.com/activepython/2.4/pywin32/win32gui.html) | There is a Python module called win32ui. It's found in the [Python for Windows extensions](http://sourceforge.net/projects/pywin32/ "Python for Windows extensions") package. You want the CreateFileDialog function.
[Documentation](http://timgolden.me.uk/pywin32-docs/win32ui__CreateFileDialog_meth.html "Documentation")
Edit:
This is a save dialog example. Check the documentation for the other settings.
```
import win32ui

if __name__ == "__main__":
    select_dlg = win32ui.CreateFileDialog(0, ".txt", "default_name", 0, "TXT Files (*.txt)|*.txt|All Files (*.*)|*.*|")
    select_dlg.DoModal()
    selected_file = select_dlg.GetPathName()
    print selected_file
``` | Which is the best way to interact with already open native OS dialog boxes like (Save AS) using Python? | [
"",
"python",
"windows",
"winapi",
"selenium",
"pywin32",
""
] |
I have some experimental data which looks like this - <http://paste2.org/YzJL4e1b> (too long to post here). The blocks which are separated by field name lines are different trials of the same experiment - I would like to read everything in a pandas dataframe but have it bin together certain trials (for instance 0,1,6,7 taken together - and 2,3,4,5 taken together in another group). This is because different trials have slightly different conditions and I would like to analyze the results difference between these conditions. I have a list of numbers for different conditions from another file.
Currently I am doing this:
```
tracker_data = pd.DataFrame
tracker_data = tracker_data.from_csv(bhpath+i+'_wmet.tsv', sep='\t', header=4)
tracker_data['GazePointXLeft'] = tracker_data['GazePointXLeft'].astype(np.float64)
```
but this of course just reads everything in one go (including the field name lines) - it would be great if I could nest the blocks somehow which allows me to easily access them via numeric indices...
Do you have any ideas how I could best do this? | You should use [`read_csv`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.io.parsers.read_csv.html) rather than `from_csv`\*:
```
tracker_data = pd.read_csv(bhpath+i+'_wmet.tsv', sep='\t', header=4)
```
If you want to join a list of DataFrames like this you could use concat:
```
trackers = (pd.read_csv(bhpath+i+'_wmet.tsv', sep='\t', header=4) for i in range(?))
df = pd.concat(trackers)
```
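For illustration, a hedged sketch of the same `concat` idea with small in-memory frames instead of the TSV files (the column name is a placeholder):

```python
import pandas as pd

# Two stand-in "trials" replacing the files read in the loop above
frames = [pd.DataFrame({'GazePointXLeft': [0.1, 0.2]}),
          pd.DataFrame({'GazePointXLeft': [0.3, 0.4]})]

df = pd.concat(frames)                     # rows stacked; original indices repeat
df_keyed = pd.concat(frames, keys=[0, 1])  # MultiIndex distinguishes each trial

print(len(df))                     # 4
print(df_keyed.loc[1].iloc[0, 0])  # 0.3
```

The `keys` argument is what makes individual trials addressable later by numeric index.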
\* *which I think is deprecated.* | I haven't quite got it working, but I think that's because of how I copy/pasted the data. Try this, let me know if it doesn't work.
Using some inspiration from [this question](https://stackoverflow.com/questions/11020935/splitting-one-file-into-two-files-with-python)
```
import pandas as pd
from StringIO import StringIO  # on Python 3: from io import StringIO

pat = "TimeStamp\tGazePointXLeft\tGazePointYLeft\tValidityLeft\tGazePointXRight\tGazePointYRight\tValidityRight\tGazePointX\tGazePointY\tEvent\n"
with open('rec.txt') as infile:
    header, names, tail = infile.read().partition(pat)

names = names.split()  # get rid of the tabs here
all_data = tail.split(pat)
res = [pd.read_csv(StringIO(x), sep='\t', names=names) for x in all_data]
```
We read in the whole file so this won't work for huge files, and then partition it based on the known line giving the column names. `tail` is just a string with the rest of the data so we can split that, again based on the names. There may be a better way than using StringIO, but this should work.
I'm not sure how you want to join the separate blocks together, but this leaves them as a list. You can concat from there however you desire.
For larger files you might want to write a generator to read until you hit the column names and write a new file until you hit them again. Then read those in separately using something like Andy's answer.
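A rough sketch of such a generator (the header marker and sample lines here are assumptions, not the real file contents):

```python
def split_blocks(lines, header_prefix):
    """Yield each block of data lines, starting a new block at every header line."""
    block = []
    for line in lines:
        if line.startswith(header_prefix):
            if block:          # emit the block collected so far
                yield block
            block = []
        else:
            block.append(line)
    if block:                  # emit the trailing block
        yield block

# Hypothetical two-trial input
sample = ["TimeStamp\tGazePointXLeft",
          "1\t0.5",
          "2\t0.6",
          "TimeStamp\tGazePointXLeft",
          "3\t0.7"]
blocks = list(split_blocks(sample, "TimeStamp"))
print(len(blocks))  # 2
```

Each yielded block could then be fed to `pd.read_csv` separately, as in the answer above.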
A separate question from how to work with the multiple blocks. Assuming you've got the list of `Dataframe`s, which I've called `res`, you can use pandas' [concat](http://pandas.pydata.org/pandas-docs/dev/merging.html#concatenating-objects) to join them together into a single DataFrame with a MultiIndex (also see the link Andy posted).
```
In [122]: df = pd.concat(res, axis=1, keys=['a', 'b', 'c']) # Use whatever makes sense for the keys
In [123]: df.xs('TimeStamp', level=1, axis=1)
Out[123]:
     a    b    c
0  NaN  NaN  NaN
1  0.0  0.0  0.0
2  3.3  3.3  3.3
3  6.6  6.6  6.6
``` | iteratively read (tsv) file for Pandas DataFrame | [
"",
"python",
"pandas",
"dataframe",
"eye-tracking",
""
] |
I am trying to run the following:
```
CREATE TABLE IF NOT EXISTS table_name (
    user_id int(11) NOT NULL,
    other_id int(11) NOT NULL,
    FOREIGN KEY (user_id) REFERENCES users(id),
    FOREIGN KEY (other_id) REFERENCES other_table(id),
    PRIMARY KEY (user_id, other_id)
);
```
and getting the following error:
```
#1005 - Can't create table 'database_name.table_name' (errno: 150)
```
am I doing something wrong? This works fine just running it in another environment rather than phpmyadmin sql environment. | Take a look at this [SO question](https://stackoverflow.com/questions/1085001/mysql-creating-table-with-fk-error-errno-150).
Note the correct answer: *check column types*. They need to match; that may be your problem.
In general, Here is the authoritative guide to [FK in Mysql](http://dev.mysql.com/doc/refman/5.1/en/innodb-foreign-key-constraints.html).
> In addition to SHOW ERRORS, in the event of a foreign key error
> involving InnoDB tables (usually Error 150 in the MySQL Server), you
> can obtain a detailed explanation of the most recent InnoDB foreign
> key error by checking the output of SHOW ENGINE INNODB STATUS.
EDIT: Incorporating comments
Tables on phpMyAdmin were defaulting to **MyISAM**. On local they were defaulting to **InnoDB**. **MyISAM** does not support FKs. This does not fully explain the difference, as based on the [MySql Documentation](http://dev.mysql.com/doc/refman/5.0/en/ansi-diff-foreign-keys.html), it should just work, without creating the FKs. (Perhaps a settings issue or an older-version issue.) | Do `users` and `other_table` exist?
You can't have foreign key references to non-existent tables.
You can add the references afterwards with `alter table`. | I am getting an error trying to create table in phpmyadmin | [
"",
"mysql",
"sql",
"database",
"phpmyadmin",
""
] |
I am learning Python, step by step. Today is about object-oriented programming. I know how to create and use simple classes, but something bugs me: most of the objects I use in Python do not require me to call a constructor.
How can this works? Or is the constructor called implicitly? Example:
```
>>> import xml.etree.ElementTree as etree
>>> tree = etree.parse('examples/feed.xml')
>>> root = tree.getroot()
>>> root
<Element {http://www.w3.org/2005/Atom}feed at cd1eb0>
```
(from <http://www.diveinto.org/python3/xml.html#xml-parse>)
I would have gone this way (which actually works):
```
>>> import xml.etree.ElementTree as etree
>>> tree = etree.ElementTree() # instantiate object
>>> tree.parse('examples/feed.xml')
```
I'd like to use this way of programming (do not call constructor, or at least call it implicitly) for my own project, but I can't get how it really works.
Thanks | In this case, what's happening is that the `etree.parse()` function is creating the `ElementTree` object for you, and returning it. That's why you don't have to call a constructor yourself; it's wrapped up in the `parse` function. That function creates an `ElementTree` instance, parses the data and modifies the new object to correctly represent the parsed information. Then it `returns` the object, so you can use it (in fact, if you look at the [source](http://hg.python.org/cpython/file/2.7/Lib/xml/etree/ElementTree.py#l1180), it does essentially what you wrote in your second example).
This is a pretty common idiom in object-oriented programming. Broadly speaking, it's called a [**factory function**](http://en.wikipedia.org/wiki/Factory_method_pattern). Basically, especially for complex objects, a lot of work is required to create a useful instance of the object. So, rather than pack a lot of logic into the object's constructor, it's cleaner to make one or more factory functions to create the object and configure it as needed. This means that someone developing with the library may have several clean, simple ways to instantiate the class, even if "under the hood" that instantiation may be complex. | `etree.parse` is a factory function. Factory functions are mainly convenient ways of constructing objects (instances). As you can easily verify by [looking at the source](http://hg.python.org/cpython/file/2.7/Lib/xml/etree/ElementTree.py#l1180), the `parse` function does almost exactly what you do in your second example, except it omits a line of code or two. | Using a python class without calling __init__ method? | [
"",
"python",
"class",
"object",
""
] |
So I built a sentence tokenizer that splits paragraphs into sentences, words, and characters, each of these being a data type. But the sentence system is a two-stage system, because things like '. . .' throw it off, since it goes one letter at a time; it works fine if it's '...' with no spaces.
So the output is a bit spliced up, but if I can do some secondary processing on it, it will work perfectly. So that's where my question comes in... I'm not sure how to write a system that allows me to append each sentence that has no end sentence punctuation to the previous sentence without losing something along the way.
**here's examples of what the output looks like and what I need it to look like:**
> Some sentence that is spliced...
>
> and has a continuation
>
> this cannot be confused by U.S.A.
>
> In that
>
> last sentence...
>
> an abbreviation ended the sentence!
So sentence objects that do not end with a normal end of sentence delimiter i.e. '.', '?','!' need to be appended to the next sentence... until there is a sentence with a real end of sentence delimiter. and the other thing that makes this tough is '. . .' counts as a continuation, not the end of a sentence. So that will also need to be appended as well.
**this is how it needs to be:**
> Some sentence that is spliced... and has a continuation.
>
> this cannot be confused by U.S.A.
>
> In that last sentence... an abbreviation ended the sentence!
here's the code I was working with:
```
last = []
merge = []
for s in stream:
    if last:
        old = last.pop()
        if '.' not in old.as_utf8 and '?' not in old.as_utf8 and '!' not in old.as_utf8:
            new = old + s
            merge.append(new)
        else:
            merge.append(s)
    last.append(s)
```
so there are a few problems with this method...
1. it only appends 1 sentence to another, but it does not keep appending if there are 2 or 3 that need to be added.
2. it drops the first sentence if it does not have any punctuation in it.
3. it does not deal with '. . .' as continuations. I know I did not write anything for that here, and that's because I'm not totally sure how to approach that problem with sentences ending in an abbreviation: I could count how many '.' are in the sentence, but it would be really thrown off by 'U.S.A.' because that counts as 3 periods.
so I've written an `__add__` method to the sentence class, so you can do `sentence + sentence` and that works as a way to append one to the other.
Any help would be greatly appreciated on this. Let me know if any of this is unclear, and I will do my best to explain it. | OK, here's some working code. Is this roughly what you need?
I'm not too happy with it yet, it looks a bit ugly imho but I want to know if it's the right direction.
```
words = '''Some sentence that is spliced...
and has a continuation.
this cannot be confused by U.S.A.
In that
last sentence...
an abbreviation ended the sentence!'''.split()
def format_sentence(words):
    output = []
    for word in words:
        if word.endswith('...') or not word.endswith('.'):
            output.append(word)
            output.append(' ')
        elif word.endswith('.'):
            output.append(word)
            output.append('\n')
        else:
            raise ValueError('Unexpected result from word: %r' % word)
    return ''.join(output)
print format_sentence(words)
```
Output:
```
Some sentence that is spliced... and has a continuation.
this cannot be confused by U.S.A.
In that last sentence... an abbreviation ended the sentence!
This 'algorithm' tries to make sense of the input without relying on line endings, so that it *should* work correctly with some input like
```
born in the U.
S.A.
```
The code lends itself to being integrated into a state machine - the loop only remembers its current phrase and "pushes" finished phrases off onto a list, and gobbles one word at a time. Splitting on whitespaces is good.
Notice the ambiguity in case #5: that cannot be reliably solved (and it is possible to have such an ambiguity also with line endings. Maybe combining *both*...)
```
# Sample decoded data
decoded = ['Some', 'sentence', 'that', 'is', 'spliced.', '.', '.',
           'and', 'has', 'a', 'continuation.',
           'this', 'cannot', 'be', 'confused', 'by', 'U.', 'S.', 'A.', 'or', 'U.S.A.',
           'In', 'that', 'last', 'sentence...',
           'an', 'abbreviation', 'ended', 'the', 'sentence!']
# List of phrases
phrases = []
# Current phrase
phrase = ''
while decoded:
    word = decoded.pop(0)

    # Possibilities:
    # 1. phrase has no terminator. Then we surely add word to phrase.
    if not phrase[-1:] in ('.', '?', '!'):
        phrase += ('' if '' == phrase else ' ') + word
        continue

    # 2. There was a terminator. Which?
    # Say phrase is dot-terminated...
    if '.' == phrase[-1:]:
        # BUT it is terminated by several dots.
        if '..' == phrase[-2:]:
            if '.' == word:
                phrase += '.'
            else:
                phrase += ' ' + word
            continue
        # ...and word is dot-terminated. "by U." and "S.", or "the." and ".".
        if '.' == word[-1:]:
            phrase += word
            continue
        # Do we have an abbreviation?
        if len(phrase) > 3:
            if '.' == phrase[-3:-2]:
                # 5. We have an ambiguity, we solve using capitals.
                if word[:1].upper() == word[:1]:
                    phrases.append(phrase)
                    phrase = word
                    continue
                phrase += ' ' + word
                continue
        # Something else. Then phrase is completed and restarted.
        phrases.append(phrase)
        phrase = word
        continue

    # 3. Another terminator.
    phrases.append(phrase)
    phrase = word
    continue

phrases.append(phrase)

for p in phrases:
    print ">> " + p
```
Output:
```
>> Some sentence that is spliced... and has a continuation.
>> this cannot be confused by U.S.A. or U.S.A.
>> In that last sentence... an abbreviation ended the sentence!
``` | How can I merge together sentence objects? | [
"",
"python",
"algorithm",
"object",
"append",
""
] |
Suppose I have a dataframe with columns `a`, `b` and `c`. I want to sort the dataframe by column `b` in ascending order, and by column `c` in descending order. How do I do this? | As of the 0.17.0 release, the [`sort`](http://pandas.pydata.org/pandas-docs/version/0.17.0/generated/pandas.DataFrame.sort.html) method was deprecated in favor of [`sort_values`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sort_values.html). `sort` was completely removed in the 0.20.0 release. The arguments (and results) remain the same:
```
df.sort_values(['a', 'b'], ascending=[True, False])
```
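Applied to the columns from the question (`b` ascending, `c` descending), a minimal sketch with made-up data:
```python
import pandas as pd

df = pd.DataFrame({'b': [1, 1, 2], 'c': [5, 9, 3]})
out = df.sort_values(['b', 'c'], ascending=[True, False])
print(out['c'].tolist())  # -> [9, 5, 3]: ties on b are broken by c descending
```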
---
You can use the ascending argument of [`sort`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sort.html):
```
df.sort(['a', 'b'], ascending=[True, False])
```
For example:
```
In [11]: df1 = pd.DataFrame(np.random.randint(1, 5, (10,2)), columns=['a','b'])
In [12]: df1.sort(['a', 'b'], ascending=[True, False])
Out[12]:
a b
2 1 4
7 1 3
1 1 2
3 1 2
4 3 2
6 4 4
0 4 3
9 4 3
5 4 1
8 4 1
```
---
As commented by @renadeen
> Sort isn't in place by default! So you should assign result of the sort method to a variable or add inplace=True to method call.
that is, if you want to reuse df1 as a sorted DataFrame:
```
df1 = df1.sort(['a', 'b'], ascending=[True, False])
```
or
```
df1.sort(['a', 'b'], ascending=[True, False], inplace=True)
``` | As of pandas 0.17.0, `DataFrame.sort()` is deprecated, and set to be removed in a future version of pandas. The way to sort a dataframe by its values is now is `DataFrame.sort_values`
As such, the answer to your question would now be
```
df.sort_values(['b', 'c'], ascending=[True, False], inplace=True)
``` | How to sort a pandas dataFrame by two or more columns? | [
"",
"python",
"pandas",
"python-2.7",
"sorting",
"data-analysis",
""
] |
I need a little help with T-SQL :)
```
SELECT TOP (100) PERCENT HSW, NAZWA, COUNT(HSW) AS TEST
FROM _Katalogi.dbo.ZBIOR_NAZW
WHERE hsw = '3768917680'
GROUP BY HSW, NAZWA
```
Now I get this results
```
3768917680 PODKŁADKA UTWARDŹ. 1
3768917680 ŚRUBA SAMOZABEZPIECZJĄCA 1
3768917680 PODKŁADKA 82
3768917680 PODKŁADKA 3/8" 1
3768917680 PODKŁADKA UTWARDZONA 2883
3768917680 ŚRUBA 2
```
Now in this subquery I need to take only
```
3768917680 PODKŁADKA UTWARDZONA 2883
```
## With max COUNT(HSW) AS TEST
I think I asked this a little bit wrong:
```
SELECT TOP (100) PERCENT HSW, NAZWA, COUNT(HSW) AS TEST
FROM _Katalogi.dbo.ZBIOR_NAZW
GROUP BY HSW, NAZWA
```
The goal is to take only the top NAZWA from the table, grouping by the HSW column. As you can see above, I have one HSW number with many names, and I need to get only the most popular name :) but within one HSW group. There are many HSW numbers which have many different names, and I need only a list of HSW numbers with the TOP name :) Any ideas?
A different approach:
Data in table
```
3768917680 PODKŁADKA UTWARDŹ.
3768917680 ŚRUBA SAMOZABEZPIECZJĄCA
3768917680 PODKŁADKA
3768917680 PODKŁADKA
3768917680 PODKŁADKA
3768917680 PODKŁADKA 3/8"
3768917680 PODKŁADKA UTWARDZONA
3768917680 ŚRUBA
3768917681 PODKŁADKA UTWARDŹ.
3768917681 PODKŁADKA UTWARDŹ.
3768917681 ŚRUBA SAMOZABEZPIECZJĄCA
3768917682 PODKŁADKA
3768917683 PODKŁADKA 3/8"
3768917684 PODKŁADKA UTWARDZONA
3768917684 ŚRUBA
3768917684 ŚRUBA
```
As you can see, this is a mess, so I need to keep only the good part: the most popular name in each group
```
3768917680 PODKŁADKA
3768917681 PODKŁADKA UTWARDŹ.
3768917682 PODKŁADKA
3768917683 PODKŁADKA 3/8"
3768917684 ŚRUBA
```
So the goal is to take each HSW number with its top (most popular) name. | ```
SELECT *
FROM (
SELECT ROW_NUMBER() OVER (PARTITION BY HSW ORDER BY CNT DESC) rn
, *
FROM (
SELECT COUNT(*) OVER (PARTITION BY HSW, NAZWA) as CNT
, *
        FROM _Katalogi.dbo.ZBIOR_NAZW
) as SubQuery1
) as SubQuery2
WHERE rn = 1 -- Only top CNT per HSW
``` | Something like this?
```
SELECT HSW, NAZWA, COUNT(HSW) AS TEST
FROM _Katalogi.dbo.ZBIOR_NAZW
WHERE hsw = '3768917680'
GROUP BY HSW, NAZWA
HAVING COUNT(HSW) =
(
SELECT TOP 1 COUNT(HSW) AS TEST
FROM _Katalogi.dbo.ZBIOR_NAZW
WHERE hsw = '3768917680'
GROUP BY HSW, NAZWA
ORDER BY COUNT(HSW) DESC
)
``` | Take max from SubQuery | [
"",
"sql",
"t-sql",
""
] |
I am trying to edit some text files I have in order to add a value to one of the columns. I would like to add two new digits to the second column of my files, whose columns are separated by spaces. The first column ends at the 13th character, followed by two spaces; the two new digits should be inserted there, and the other columns should remain unchanged.
I have written the following script but unfortunately it does not work. I would be thankful if somebody could help me find my mistake.
```
%********function************
def add_num(infile,outfile):
output = ["%s %s%s" %(item.strip()[:13] ,32,item.strip()[16:]) for item in infile]
outfile.write("\n".join(output))
outfile.close()
return outfile
%*********************************
%**********main code for calling the function*******
import os, Add32
folder = 'E:/MLS_HFT/TEST/Stuttgart_2009_pointclouds/'
for filename in os.listdir(folder):
infilename = os.path.join(folder,filename)
if not os.path.isfile(infilename): continue
base,extension = os.path.splitext(filename)
infile= open(infilename, 'r')
outfile = open(os.path.join(folder, '{}_32{}'.format(base,extension)),'w')
add32.add_num(infile,outfile)
```
and this is a sample of a data:
```
399299.855212 512682.330 5403021.950 303.471 64 1 1 2 75
399299.855212 512681.470 5403020.790 302.685 1 1 2 2 75
399299.855222 512682.360 5403021.970 303.526 79 1 1 2 76
``` | ```
with open('infile.txt', 'rb') as infile, open('outfile.txt', 'wb') as outfile:
outfile.writelines(line[:15] + '32' + line[15:] for line in infile)
``` | Use `str.split`:
```
col = 2
#just pass filenames to your function and use `with` statement for handling files.
with open(infile) as f, open(outfile, 'w') as out:
for line in f:
spl = line.split(None, col)
spl[col -1] = '32' + spl[col -1]
out.write(" ".join(spl))
...
399299.855212 32512682.330 5403021.950 303.471 64 1 1 2 75
399299.855212 32512681.470 5403020.790 302.685 1 1 2 2 75
399299.855222 32512682.360 5403021.970 303.526 79 1 1 2 76
```
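A self-contained sketch of the same split-based idea, with the data inline instead of read from files:
```python
lines = [
    "399299.855212 512682.330 5403021.950 303.471",
    "399299.855222 512682.360 5403021.970 303.526",
]
out = []
for line in lines:
    parts = line.split(None, 2)   # first two columns, then the rest
    parts[1] = '32' + parts[1]    # prepend the two new digits to column 2
    out.append(' '.join(parts))
print(out[0])  # -> 399299.855212 32512682.330 5403021.950 303.471
```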
Working version of your code:
```
def add_num(infile,outfile):
with open(infile) as f, open(outfile, 'w') as out:
output = ["%s %s%s\n" %(item.strip()[:13] ,32,item.strip()[16:]) for item in f]
out.writelines(output)
outfile = os.path.join(folder, '{}_32{}'.format(base,extension))
add_num(infilename,outfile)
``` | add new digits to a column of a text file in Python | [
"",
"python",
"text-files",
"edit",
""
] |
I have a question about command line methods of searching. Here is my situation.
1. I have a data file which contains a bunch (~36,000) of objects' identification numbers, which looks like this:
```
# ID
85000213
85000234
85000246
85000247
85000249
85000250
...
```
2. I also have a directory of datafiles which contain data about each object with names like:
```
datafileID85000213.csv
datafileID85000234.csv
datafileID85000246.csv
datafileID85000247.csv
datafileID85000249.csv
datafileID85000250.csv
...
```
I would actually like to check that all objects in my original ID data file have an associated CSV file containing their actual data. My first idea is to use the diff command to compare these lists, but to do that I need to make a single-column data file from the ~36,000 CSV filenames I have. One complication is that for most commands I have tried so far (mv, cp, ls), I get this:
> Argument list too long
Would anybody have a workaround (or a better idea) to this task?
For searching for the number in each filename string, there is one of two things I have found so far:
1. python:
```
>>> import re
>>> idlist
['abcdefg0.dat', 'abcdefg1.dat', 'abcdefg2.dat', 'abcdefg3.dat', 'abcdefg4.dat', 'abcdefg5.dat', 'abcdefg6.dat', 'abcdefg7.dat', 'abcdefg8.dat', 'abcdefg9.dat']
>>> numbers = []
>>> for i in range(len(idlist)):
... numbers.append(re.search(r'\d+',idlist[i]).group())
...
>>> numbers
['0', '1', '2', '3', '4', '5', '6', '7', '8', '9']
```
2. bash: [UNIX BASH: extracting number from string](https://stackoverflow.com/questions/3569754/unix-bash-extracting-number-from-string)
Any help would be greatly appreciated! | This is pretty easy in python:
```
import os.path
with open('idfile') as fin:
header = next(fin) #read the `#ID` comment header line. Don't use it.
for line in fin:
fname = 'datafileID{0}.csv'.format(line.strip())
if not os.path.exists(fname):
print fname
``` | From `bash`:
```
{
# Read the header into a variable, but ignore it
read -r hdrLine
while read -r fileID; do
[[ -f $DIRECTORY/datafileID$fileID.csv ]] || echo "$fileID not found"
done
} < id_list.txt
```
Everything in `{...}` reads from the file `id_list.txt`. The first `read` statement reads and discards the header. The `while` loop then reads the remaining lines from the file, one at a time. With `DIRECTORY` set to the name of the directory where the files will live (or replaced with the hardcoded path, your choice), it checks if a file by the name of `datafileID$fileID.csv` exists in that directory. If so, nothing happens. Otherwise, a message to that effect is printed to standard output.
You can perform a more general check using a file glob.
```
[[ -f $DIRECTORY/*$fileID* ]]
```
would check if a file that contains `$fileID` anywhere in the name exists. | Checking That All Files Are Present | [
"",
"python",
"bash",
"csv",
"diff",
""
] |
I have the following bottleneck and am wondering if anyone can suggest ways to speed it up.
I have three lists `x,y,z` of length `N`, and I apply the following [summation](http://en.wikipedia.org/wiki/Correlation_sum).
```
def abs_val_diff(x1, x2, x3, y1, y2, y3):
""" Find the absolute value of the difference between x and y """
return py.sqrt((x1 - y1) ** 2.0 + (x2 - y2) ** 2.0 + (x3 - y3) ** 2.0)
R = 0.1
sumV = 0.0
for i in xrange(N):
for j in xrange(i + 1, N):
if R > abs_val_diff(x[i], y[i], z[i],
x[j], y[j], z[j]):
sumV += 1.0
```
I have tried using numpy arrays, but either I am doing something wrong or there is a reduction in speed of about a factor of 2.
Any ideas would be highly appreciated. | I believe you can utilize numpy a little more efficiently by doing something like the following. Make a small modification to your function to use the numpy.sqrt:
```
import numpy as np
def abs_val_diff(x1, x2, x3, y1, y2, y3):
""" Find the absolute value of the difference between x and y """
return np.sqrt((x1 - y1) ** 2.0 + (x2 - y2) ** 2.0 + (x3 - y3) ** 2.0)
```
Then call with the full arrays:
```
res = abs_val_diff(x[:-1],y[:-1],z[:-1],x[1:],y[1:],z[1:])
```
Then, because you're adding 1 for each match, you can simply take the length of the array resulting from a query against the result:
```
sumV = len(res[R>res])
```
This lets numpy handle the iteration. Hopefully that works for you | Is there any reason you actually need to take the square root in your function? If all you do with the result is to compare it against a limit why not just square both sides of the comparison?
```
def abs_val_diff_squared(x1, x2, x3, y1, y2, y3):
""" Find the square of the absolute value of the difference between x and y """
return (x1 - y1) ** 2.0 + (x2 - y2) ** 2.0 + (x3 - y3) ** 2.0
R = 0.1
R_squared = R * R
sumV = 0.0
for i in xrange(N):
for j in xrange(i + 1, N):
if R_squared > abs_val_diff_squared(x[i], y[i], z[i],
x[j], y[j], z[j]):
sumV += 1.0
```
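Combining the no-square-root idea with numpy broadcasting gives a fully vectorized all-pairs count (a sketch; note the intermediate array uses O(N²) memory):
```python
import numpy as np

N, R = 50, 0.5
rng = np.random.RandomState(0)
pts = rng.rand(N, 3)                       # columns play the roles of x, y, z
diff = pts[:, None, :] - pts[None, :, :]   # all pairwise differences
d2 = (diff ** 2).sum(axis=-1)              # squared distances, no sqrt needed
iu = np.triu_indices(N, k=1)               # each unordered pair exactly once
sumV = float(np.count_nonzero(d2[iu] < R ** 2))
print(sumV)
```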
I also feel there ought to be much bigger savings gained from sorting the data into something like an octtree so you only have to look at nearby points rather than comparing everything against everything, but that's outside my knowledge. | Speeding up summation for loop in python | [
"",
"python",
"numpy",
"scipy",
""
] |
I have a string which can contain any number of words separated by a space. I am sending this string as a parameter from VB to SQL via SqlCommand. How can I split this into an array in SQL, or send it as an array from VB altogether, to search the table?
The sql search must return all those rows in which each row contains all the words from the string that i have passed. | ```
Using connection As New SqlConnection(connectionString)
Dim command As New SqlCommand("sp_GetCustomerByIDS", connection)
command.CommandType = System.Data.CommandType.StoredProcedure
'Here is how you pass in YourString seperated with comma's
command.Parameters.AddWithValue("@CustomerIDS", YourString.Replace(" ", ","))
...
End Using
```
The comma separated string of values is passed into the Stored Procedure (tip: please change the *ntext*, as SQL 2005/SQL 2008+ have **varchar(MAX)**):
```
CREATE PROCEDURE [dbo].[sp_GetCustomerByIDS]
@CustomerIDS ntext
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;
DECLARE @err int
declare @TempTbl as TABLE(mystrings nvarchar(32), i1 int, i2 int, i3 int)
INSERT @TempTbl exec @err = sp_SplitTextList @CustomerIDS, ','
SELECT *
FROM dbo.Customers
WHERE
dbo.Customers.ID in (select mystrings from @TempTbl)
END
```
I use this SplitTextList stored procedure to process the comma separated values, again you might want to change @list\_text from text to varchar(MAX), you can see from the history its quite old:
```
/* ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ */
-- uspSplitTextList
--
-- Description:
-- splits a separated list of text items and returns the text items
--
-- Arguments:
-- @list_text - list of text items
-- @Delimiter - delimiter
--
-- Notes:
-- 02/22/2006 - WSR : use DATALENGTH instead of LEN throughout because LEN doesn't count trailing blanks
--
-- History:
-- 02/22/2006 - WSR : revised algorithm to account for items crossing 8000 character boundary
--
CREATE PROCEDURE [dbo].[sp_SplitTextList]
@list_text text,
@Delimiter varchar(3)
AS
SET NOCOUNT ON
DECLARE @InputLen integer -- input text length
DECLARE @TextPos integer -- current position within input text
DECLARE @Chunk varchar(8000) -- chunk within input text
DECLARE @ChunkPos integer -- current position within chunk
DECLARE @DelimPos integer -- position of delimiter
DECLARE @ChunkLen integer -- chunk length
DECLARE @DelimLen integer -- delimiter length
DECLARE @ItemBegPos integer -- item starting position in text
DECLARE @ItemOrder integer -- item order in list
DECLARE @DelimChar varchar(1) -- first character of delimiter (simple delimiter)
-- create table to hold list items
-- actually their positions because we may want to scrub this list eliminating bad entries before substring is applied
CREATE TABLE #list_items ( item_order integer, item_begpos integer, item_endpos integer )
-- process list
IF @list_text IS NOT NULL
BEGIN
-- initialize
SET @InputLen = DATALENGTH(@list_text)
SET @TextPos = 1
SET @DelimChar = SUBSTRING(@Delimiter, 1, 1)
SET @DelimLen = DATALENGTH(@Delimiter)
SET @ItemBegPos = 1
SET @ItemOrder = 1
SET @ChunkLen = 1
-- cycle through input processing chunks
WHILE @TextPos <= @InputLen AND @ChunkLen <> 0
BEGIN
-- get current chunk
SET @Chunk = SUBSTRING(@list_text, @TextPos, 8000)
-- setup initial variable values
SET @ChunkPos = 1
SET @ChunkLen = DATALENGTH(@Chunk)
SET @DelimPos = CHARINDEX(@DelimChar, @Chunk, @ChunkPos)
-- loop over the chunk, until the last delimiter
WHILE @ChunkPos <= @ChunkLen AND @DelimPos <> 0
BEGIN
-- see if this is a full delimiter
IF SUBSTRING(@list_text, (@TextPos + @DelimPos - 1), @DelimLen) = @Delimiter
BEGIN
-- insert position
INSERT INTO #list_items (item_order, item_begpos, item_endpos)
VALUES (@ItemOrder, @ItemBegPos, (@TextPos + @DelimPos - 1) - 1)
-- adjust positions
SET @ItemOrder = @ItemOrder + 1
SET @ItemBegPos = (@TextPos + @DelimPos - 1) + @DelimLen
SET @ChunkPos = @DelimPos + @DelimLen
END
ELSE
BEGIN
-- adjust positions
SET @ChunkPos = @DelimPos + 1
END
-- find next delimiter
SET @DelimPos = CHARINDEX(@DelimChar, @Chunk, @ChunkPos)
END
-- adjust positions
SET @TextPos = @TextPos + @ChunkLen
END
-- handle last item
IF @ItemBegPos <= @InputLen
BEGIN
-- insert position
INSERT INTO #list_items (item_order, item_begpos, item_endpos)
VALUES (@ItemOrder, @ItemBegPos, @InputLen)
END
-- delete the bad items
DELETE FROM #list_items
WHERE item_endpos < item_begpos
-- return list items
SELECT SUBSTRING(@list_text, item_begpos, (item_endpos - item_begpos + 1)) AS item_text, item_order, item_begpos, item_endpos
FROM #list_items
ORDER BY item_order
END
DROP TABLE #list_items
RETURN
``` | Besides using table parameters, you can parse the string into a table variable. This is what the pseudocode would be like:
```
Declare a table variable with one VARCHAR column
While the string is not empty
Grab everything up to the first space and insert it into the table variable
Remove the first word from the string
```
Once you have the temporary table, you can join it using LIKE with your source table. For each row in your source table, the join creates a row for every word that matched in the array. You can then use GROUP/HAVING to limit that down to only the results that returned a row for each entry in your table variable (meaning those that match every string in the table variable).
For example:
```
DECLARE @TempTable TABLE (Word VARCHAR(100))
...
-- Put logic from the pseudocode above to populate @TempTable from your string
...
SELECT Y.PrimaryKey, MAX(Y.MaybeYouNeedThisColumnToo) from YourTable Y
INNER JOIN @TempTable temp
ON Y.ColumnToMatch LIKE '%' + temp.Word + '%'
GROUP BY Y.PrimaryKey
HAVING COUNT(*) = (SELECT COUNT(*) FROM @TempTable)
``` | How to search by an array in a stored procedure? | [
"",
"sql",
"vb.net",
""
] |
For debugging purposes I want to create pseudo "result sets" in order to join them, like:
```
with tmp_tbl as ( select v from dual where v in ('cat', 'dog', 'fish') )
select read_tbl.* from tmp_tbl
left outer join read_tbl on real_tbl.id = tmp_tbl.id;
```
I understand that the above expression is invalid and can be transformed into another one which works. But my real example is too complicated to show here.
My question how to make this expression:
```
select v from dual where v in ('cat', 'dog', 'fish')
```
a valid result set so I can use it with **joins** and **from** keywords?
**dual** doesn't have a v column. I am looking for a way to bend SQL syntax to avoid **create table** calls.
```
with tmp_tbl as (
select 'cat' as id from dual
union all select 'dog' from dual
union all select 'fish' from dual
)
select tmp_tbl.id, read_tbl.*
from tmp_tbl
left outer join real_tbl
on real_tbl.id = tmp_tbl.id;
```
You referred to a `v` column in the text, but you're joining on `id`, so I've aliased the fixed value as `id` inside the CTE (it only needs to be named in the first row). You can just change that to something else if you prefer. And you can of course select several fixed values (with different aliases) in each select from `dual` to make it look more like a real table. | For this purpose you can use [subquery factoring](http://docs.oracle.com/cd/E11882_01/server.112/e26088/statements_10002.htm#i2077142), also known as “the with clause”
```
with t as
( select v from dual where v in ('cat','dog','fish') )
Select * from t
```
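The same fixed-rows-plus-join pattern can be tried quickly outside Oracle; here is a sketch using SQLite from Python (the VALUES-based CTE syntax is SQLite's, not Oracle's, and the table and columns are made up):
```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE real_tbl (id TEXT, legs INTEGER)")
conn.executemany("INSERT INTO real_tbl VALUES (?, ?)",
                 [('cat', 4), ('fish', 0)])
rows = conn.execute("""
    WITH tmp_tbl(id) AS (VALUES ('cat'), ('dog'), ('fish'))
    SELECT tmp_tbl.id, real_tbl.legs
    FROM tmp_tbl LEFT JOIN real_tbl ON real_tbl.id = tmp_tbl.id
    ORDER BY tmp_tbl.id
""").fetchall()
print(rows)  # -> [('cat', 4), ('dog', None), ('fish', 0)]
```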
Oracle may decide to materialize this result set internally or not. If you want to control this behavior, you can use the optimizer hints “materialize” and “inline”.
Hope this helps.
Regards,
Rob. | How to create temporary result set from selected set of data in SQL? | [
"",
"sql",
"oracle",
""
] |
I've been working with Tornado and have gotten very used to it's overall style and syntax. Currently, I would like to write a website (in Python) that does not require the asynchronous, non-blocking capabilities of Tornado. Obviously, I can use frameworks like Django or Pylons, but I was wondering if there was a particular Python setup that would most closely resemble Tornado but be blocking, rather than non-blocking.
Note: I've been experimenting with Django, but I feel as if it is a bit heavyweight for my current needs. What I liked about Tornado was its relative simplicity and flexibility while still being capable of scaling nicely should that become a concern. (Maybe I just need to keep plugging away with Django until I've become totally accustomed to it, but that's outside the realm of this question.)
Also note: I am not looking for a subjective opinion, rather, I am looking for someone to suggest, based on experience with the different Python frameworks (which I am lacking), similar design setups to Tornado.
Thanks. | My favorite "minimal" frameworks are:
Flask <http://flask.pocoo.org/docs/> and Pyramid <http://www.pylonsproject.org/>.
The big difference from Tornado is that they are WSGI (hence blocking) and have lots of website-oriented plugins for the things you need when deploying websites.
Flask has lots of examples around and is really easy to start with.
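For reference, a minimal Flask app (a sketch; the route and strings are illustrative) to compare against Tornado's handler classes:
```python
from flask import Flask

app = Flask(__name__)

@app.route('/')
def index():
    # Plain blocking WSGI view -- no IOLoop, no async callbacks.
    return 'Hello, world!'

# app.run() would start Flask's blocking development server here.
```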
wish you fun | Turbo gears 2 looks promising as it has been built on top of the experience of several next generation web frameworks including TurboGears 1, Django, and Rails
[Turbo gears 2](http://turbogears.org/#the-next-generation-web-framework-that-scales-with-you) | A blocking equivalent to Tornado? | [
"",
"python",
"django",
"tornado",
""
] |
I have this code:
```
def save_to_gcs(self, img, img_obj):
'''
Image data, Image metadata object -> Blob Key
Given an image and image metadata, stores it in a GCS bucket
'''
bucket = '/foo'
filename = bucket + '/' + str(img_obj['filename'])
self.tmp_filenames_to_clean_up = []
logging.info('Creating file %s\n' % img_obj['filename'])
write_retry_params = gcs.RetryParams(backoff_factor=1.1)
gcs_file = gcs.open(filename,
'w',
content_type=img_obj['mimetype'],
retry_params=write_retry_params)
gcs_file.write(img)
gcs_file.close()
self.tmp_filenames_to_clean_up.append(filename)
return blobstore.create_gs_key('/gs/' + filename)
```
But it fails with this error:
```
Expect status [201] from Google Storage. But got status 403. Response headers: {'content-length': '145', 'via': 'HTTP/1.1 GWA', 'x-google-cache-control': 'remote-fetch', 'expires': 'Fri, 01 Jan 1990 00:00:00 GMT', 'server': 'HTTP Upload Server Built on Jun 7 2013 11:30:13 (1370629813)', 'pragma': 'no-cache', 'cache-control': 'no-cache, no-store, must-revalidate', 'date': 'Thu, 20 Jun 2013 23:13:55 GMT', 'content-type': 'application/xml; charset=UTF-8'}
Traceback (most recent call last):
File "/python27_runtime/python27_lib/versions/third_party/webapp2-2.5.1/webapp2.py", line 1536, in __call__
rv = self.handle_exception(request, response, e)
File "/python27_runtime/python27_lib/versions/third_party/webapp2-2.5.1/webapp2.py", line 1530, in __call__
rv = self.router.dispatch(request, response)
File "/python27_runtime/python27_lib/versions/third_party/webapp2-2.5.1/webapp2.py", line 1278, in default_dispatcher
return route.handler_adapter(request, response)
File "/python27_runtime/python27_lib/versions/third_party/webapp2-2.5.1/webapp2.py", line 1102, in __call__
return handler.dispatch()
File "/python27_runtime/python27_lib/versions/third_party/webapp2-2.5.1/webapp2.py", line 572, in dispatch
return self.handle_exception(e, self.app.debug)
File "/python27_runtime/python27_lib/versions/third_party/webapp2-2.5.1/webapp2.py", line 570, in dispatch
return method(*args, **kwargs)
File "/base/data/home/apps/s~foo/5.368231578716365248/main.py", line 409, in post
blob_key = self.save_to_gcs(img, img_obj) # Save the image to a GCS bucket. returns a blob_key
File "/base/data/home/apps/s~foo/5.368231578716365248/main.py", line 448, in save_to_gcs
retry_params=write_retry_params)
File "/base/data/home/apps/s~foo/5.368231578716365248/external/cloudstorage/cloudstorage_api.py", line 69, in open
return storage_api.StreamingBuffer(api, filename, content_type, options)
File "/base/data/home/apps/s~foo/5.368231578716365248/external/cloudstorage/storage_api.py", line 527, in __init__
errors.check_status(status, [201], headers)
File "/base/data/home/apps/s~foo/5.368231578716365248/external/cloudstorage/errors.py", line 99, in check_status
raise ForbiddenError(msg)
ForbiddenError: Expect status [201] from Google Storage. But got status 403. Response headers: {'content-length': '145', 'via': 'HTTP/1.1 GWA', 'x-google-cache-control': 'remote-fetch', 'expires': 'Fri, 01 Jan 1990 00:00:00 GMT', 'server': 'HTTP Upload Server Built on Jun 7 2013 11:30:13 (1370629813)', 'pragma': 'no-cache', 'cache-control': 'no-cache, no-store, must-revalidate', 'date': 'Thu, 20 Jun 2013 23:13:55 GMT', 'content-type': 'application/xml; charset=UTF-8'}
```
Any help with deciphering that error and coming up with a solution would be much appreciated.
Thanks | Same thing happened to me and it baffled me. I got it working by following the steps on [this page](https://developers.google.com/appengine/docs/python/googlestorage/) under the Prerequisites section. A couple notes though:
* For number 2, make sure you go to the [APIs Console](https://code.google.com/apis/console) and turn on GCS under Services
* For number 5, go to the [Cloud Console](https://cloud.google.com/console), select your project, click the Settings wrench and click Teams. Add your gserviceaccount.com thing here.
* Also for number 5, I think you have to edit the ACL files with gsutil. Follow the alternate instructions provided.
That should work for you since it did for me. | The documentation is confusing for granting access to your app engine app (which is most likely your problem). Here is what worked for me in the latest [Google Cloud Console](https://cloud.google.com/console):
1. In the Google Cloud Console, click your project, then "APIs & auth". Turn on "Google Cloud Storage" and "Google Cloud Storage JSON API".
2. Click back to the "Overview" screen, and click "Cloud Storage" in the left menu.
3. Click the check box next to your bucket, and click the "Bucket Permissions" button.
4. Add a new "User" permission and specify your app engine service account name, in the format of application-id@appspot.gserviceaccount.com. This is found in the Application Settings of the AppEngine Console. Better instructions to find this account name are [here](https://developers.google.com/appengine/docs/python/googlestorage/#Give_permissions_to_your_bucket_or_objects).
5. Save your changes. | ForbiddenError when attempting to write file to GCS from GAE python App | [
"",
"python",
"google-app-engine",
"google-cloud-storage",
""
] |
I have a **table** which looks like this
```
col1, col2
a,C
a,D
a,C
a,D
```
I want to find out, for **a** in `col1`, the (number of rows with **C** minus number of rows with **D**).
If I were to find the two numbers it will simply be
```
SELECT COUNT(1) FROM mytable where COL1='a' and COL2='C'
SELECT COUNT(1) FROM mytable where COL1='a' and COL2='D'
```
And then I could just find the difference.
However, I wanted to do it with a single query. So I went for this
```
SELECT COUNT(CASEWHEN(COL2)='D', 1, -1)
FROM mytable
```
But that does not seem to work. Any suggestions? | Thanks folks for the quick responses. For me, the following code snippet worked.
```
SUM( CASEWHEN (COL2='D', 1, -1))
```
And a slightly fuller version of this is
```
SUM( CASE WHEN COL2='C' THEN -1 WHEN COL2='D' THEN 1 ELSE 0 END)
```
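The fuller version can be checked with an in-memory SQLite table; note the signs are flipped here so that it computes C minus D, as the question originally asked:
```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE mytable (col1 TEXT, col2 TEXT)")
conn.executemany("INSERT INTO mytable VALUES (?, ?)",
                 [('a', 'C'), ('a', 'D'), ('a', 'C'), ('a', 'D'), ('a', 'D')])
(diff,) = conn.execute("""
    SELECT SUM(CASE WHEN col2='C' THEN 1 WHEN col2='D' THEN -1 ELSE 0 END)
    FROM mytable WHERE col1='a'
""").fetchone()
print(diff)  # -> -1: two C rows minus three D rows
```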
I would recommend the fuller version. | Here is one way:
```
SELECT SUM(CASE WHEN col2 = 'C' THEN 1 ELSE 0 END) Col2C,
SUM(CASE WHEN col2 = 'D' THEN 1 ELSE 0 END) Col2D,
SUM(CASE WHEN col2 = 'C' THEN 1 WHEN col2 = 'D' THEN -1 ELSE 0 END) [Col2C-Col2D]
FROM myTable
WHERE col1 = 'a'
``` | Case when to find the difference in the number of counts of particular kind of rows. | [
"",
"sql",
""
] |
I am trying to use bash functions inside my python script to allow me to locate a specific directory and then grep a given file inside the directory. The catch is that I only have part of the directory name, so I need to use the bash function find to get the rest of the directory name (names are unique and will only ever return one folder)
The code I have so far is as follows:
```
def get_tag(part_of_foldername):
import subprocess
import os
p1 = subprocess.Popen(["find", "/path/to/directory", "-maxdepth", "1", "-name", "%s.*" % part_of_foldername, "-type", "d"], stdout=subprocess.PIPE)
directory = p1.communicate()[0].strip('\n')
os.chdir(directory)
p2 = subprocess.Popen(["grep", "STUFF_", ".hgtags"], stdout=subprocess.PIPE)
    tag = p2.communicate()[0].strip('\n')
return tag
```
Here is what's really strange. This code works when you enter it line by line into interactive, but not when it's run through a script. It also works when you import the script file into interactive and call the function, but not when it's called by the main function. The traceback I get from running the script straight is as follows:
```
Traceback (most recent call last):
File "./integration.py", line 64, in <module>
main()
File "./integration.py", line 48, in main
tag = get_tag(folder)
File "./integration.py", line 9, in get_date
os.chdir(directory)
OSError: [Errno 2] No such file or directory: ''
```
And it's called in the main function like this:
```
if block_dict[block][0]=='0':
tag = get_tag(folder)
```
with "folder" being previously defined as a string.
Please note we use python 2.6 so I can't use the module check\_output unfortunately. | Have you tried using the [glob](http://pymotw.com/2/glob/) module as opposed to find?
```
import glob
glob.glob("/path/to/directory/*/SomeDir/path/*")
```
You can look past multiple directories using \*\*:
```
glob.glob("/path/**/SomeDir/path/*")
```
and that would match /path/to/your/SomeDir/path/file. | evidently `p1.communicate()[0].strip('\n')` is returning an empty string. are you really using the hardcoded value `"/path/to/directory"` as in your example? | Different results when running in python interpreter vs. script file | [
"",
"python",
"bash",
"grep",
"subprocess",
""
] |
How can I get the full name of the current view (my\_app.views.index) in a template in Django 1.5?
With forms, I have an object called "view" in the template which I read using a template tag.
But with DetailViews I don't see anything similar.
Is there a way using a custom template processor?
Thanks
**EDIT**
Situation 1:
1. I retrieve a page, for example '/foo/bar/5/edit'.
2. Django will call 'foo.views.editbar' with pk=5.
3. This view renders an template 'foo/bar\_form.html'
Situation 2:
1. If I retrieve '/foo/bar/new'
2. Django will call 'foo.views.newbar'
3. This view renders the same template as above ('foo/bar\_form.html')
How can I check in this template 'foo/bar\_form.html', from which view it has been rendered?
The result should be one of
* 'foo.views.editbar'
* 'foo.views.newbar' | Just [set attribute](https://docs.python.org/2/library/functions.html#setattr) to request object in view:
```
setattr(request, 'view', 'app.views.func')
```
and check this in template:
```
{% if request.view == 'app.views.func' %}
do something
{% endif %}
```
It worked for me. | Just put this in your template:
```
{% with request.resolver_match.view_name as view_name %}
...
{{ view_name }}
...
{% endwith %}
``` | Django get current view in template | [
"",
"python",
"django",
"django-templates",
"django-views",
""
] |
Suppose I have a list of functions
```
f1(x,y),f2(x,y),....
```
How can I define the function below:
```
z(x,y)=f1(x,y)+f2(x,y)+...
```
I want to be able to specify x later. So if somebody puts in the y=c value, I want the z function to become a function of x! In the simple case when you have one function, f, and you want to make f(x,y) a function of x only by specifying y=c, you can use:
```
z=lambda x:f(x,c)
```
but the thing is such a method won't work for more than one function! | Tweaking the previous answers, we can throw in `functools.partial` to create the desired behavior.
We can do a partial function call (similar to the Haskell construct) to produce new functions from incompletely applied functions. Check it out:
```
from functools import partial
funcs = [lambda x, y: x + y] * 10 # a list of ten placeholder functions.
def z(functions, x, y):
return sum(f(x, y) for f in functions)
a = partial(z, funcs, 1)
# 'a' is basically the same as: lambda y: z(funcs, 1, y)
print a # <functools.partial object at 0x7fd0c90246d8>
print a(1) # 20
print a(2) # 30
print a(3) # 40
```
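The same idea in Python 3 syntax, with two distinct functions mirroring the question's `f1`/`f2` (their bodies here are invented placeholders):

```python
from functools import partial

def f1(x, y):
    return x * y  # placeholder body

def f2(x, y):
    return x + y  # placeholder body

def z(functions, x, y):
    # Sum f(x, y) over every function in the list
    return sum(f(x, y) for f in functions)

z_at_x2 = partial(z, [f1, f2], 2)  # fixes the function list and x=2; y stays free
print(z_at_x2(3))   # f1(2, 3) + f2(2, 3) = 6 + 5 = 11
print(z_at_x2(10))  # 20 + 12 = 32
```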
More here: <http://docs.python.org/2/library/functools.html#functools.partial> | Can be useful for 10s of functions:
```
def z(x,y,functions):
return sum(F(x,y) for F in functions)
```
So you can use it like:
```
z(x,y,(f,g,h))
```
Or if the number of functions comes from a list:
```
z(x,y,list_of_functions)
``` | How to define sum of multiple arbitrary functions? | [
"",
"python",
"function",
""
] |
I've done some searching and can't figure out how to filter a dataframe by
```
df["col"].str.contains(word)
```
however I'm wondering if there is a way to do the reverse: filter a dataframe by that set's complement, e.g. to the effect of
```
!(df["col"].str.contains(word))
```
Can this be done through a `DataFrame` method? | You can use the invert (~) operator (which acts like a not for boolean data):
```
new_df = df[~df["col"].str.contains(word)]
```
where `new_df` is the copy returned by the RHS.
*contains also accepts a regular expression...*
---
If the above throws a ValueError or TypeError, the reason is likely because you have mixed datatypes, so use `na=False`:
```
new_df = df[~df["col"].str.contains(word, na=False)]
```
Or,
```
new_df = df[df["col"].str.contains(word) == False]
``` | I was having trouble with the not (~) symbol as well, so here's another way from another [StackOverflow thread](https://stackoverflow.com/questions/11350770/pandas-dataframe-select-by-partial-string):
```
df[df["col"].str.contains('this|that')==False]
``` | Search for "does-not-contain" on a DataFrame in pandas | [
"",
"python",
"pandas",
"contains",
""
] |
I want to put all my `link` tags in `<head>`.
However, I don't know how to render all the `link` tags in the `head` of my DOM when I include shared templates via the built in `include` tag. So my `link` tags are rendered wherever I happen to include my shared templates. I've added code below to better illustrate my problem.
Layout:
```
<html>
<head>
{% block references %}{% endblock %}
</head>
<body>
{% block content %}{% endblock %}
</body>
</html>
```
Extending the layout with a template:
```
{% extends "layout.html" %}
{% load staticfiles %}
{% block references %}
<link rel="stylesheet" href="{% static "myStylesheet.css" %}" type="text/css">
{% endblock %}
...
{% include "mySharedTemplate.html" %}
...
```
Shared template. Note, this template is shared among a few but not all of my templates:
```
{% load staticfiles %}
<link rel="stylesheet" href="{% static "mySharedTemplateStylesheet.css" %}" type="text/css">
...
```
Is there a way to put all my `link` tags in the `head` of my DOM while using shared templates? Is there a completely different or better way to do this? I'm a week into my first django project, so even suggestions of basic features may help me! | I found a hacky way to do this. I'm not super pleased with it. I found that I can use simple `if` blocks to toggle which sections of my template I want to render with the `include` tag. This allows me to include my references and content separately. (Note, I could solve this problem by separating my references and content into separate files. But that seems more tedious than this solution.)
I like this solution better than the current answers because it allows my shared template to be isolated from other templates. Keeping this modular design is important when working with functionality that you can mix and match (which is what I'd like to do with my shared templates).
Template:
```
{% extends "layout.html" %}
{% load staticfiles %}
{% block references %}
<link rel="stylesheet" href="{% static "myStylesheet.css" %}" type="text/css">
{% include "mySharedTemplate.html" with references="True" %}
{% endblock %}
...
{% include "mySharedTemplate.html" with content="True" %}
...
```
Shared Template:
```
{% if references %}
{% load staticfiles %}
<link rel="stylesheet" href="{% static "mySharedTemplateStylesheet.css" %}" type="text/css">
{% endif %}
{% if content %}
...
{% endif %}
```
To illustrate why I think my modular design is important:
Imagine I have many shared templates and many regular templates that each use the shared templates in different ways. My modular method makes it easy for regular templates to work with shared templates in flexible ways that best suit them.
Template 2:
```
{% extends "layout.html" %}
{% load staticfiles %}
{% block references %}
<link rel="stylesheet" href="{% static "myStylesheet.css" %}" type="text/css">
{% include "mySharedTemplate.html" with references="True" %}
{% include "mySharedTemplate2.html" with references="True" %}
{% endblock %}
...
{% include "mySharedTemplate.html" with content="True" %}
{% include "mySharedTemplate2.html" with content="True" %}
...
```
Template 3:
```
{% extends "layout.html" %}
{% load staticfiles %}
{% block references %}
<link rel="stylesheet" href="{% static "myStylesheet.css" %}" type="text/css">
{% include "mySharedTemplate2.html" with references="True" %}
{% include "mySharedTemplate3.html" with references="True" %}
{% include "mySharedTemplate4.html" with references="True" %}
{% endblock %}
...
{% include "mySharedTemplate4.html" with content="True" %}
{% include "mySharedTemplate3.html" with content="True" %}
{% include "mySharedTemplate2.html" with content="True" %}
...
```
Notice that Template 2 and Template 3 can use the shared templates in ways that suit them without much boilerplate code. | I think you are looking for `{{block.super}}`
for example Layout.html:
```
<html>
<head>
{% load staticfiles %}
{% block references %}
<link rel="stylesheet" href="{% static "mySharedTemplateStylesheet.css" %}" type="text/css">
{% endblock %}
</head>
<body>
{% block content %}{% endblock %}
</body>
</html>
```
and in Template.html:
```
{% extends "layout.html" %}
{% load staticfiles %}
{% block references %}
{{block.super}}
<link rel="stylesheet" href="{% static "myStylesheet.css" %}" type="text/css">
{% endblock %}
```
If you do not want to use `mySharedTemplateStylesheet.css` for all your pages, simply leave out `{{block.super}}`, as in Template2.html:
```
{% extends "layout.html" %}
{% load staticfiles %}
{% block references %}
<link rel="stylesheet" href="{% static "myStylesheet.css" %}" type="text/css">
{% endblock %}
``` | static files and django templates | [
"",
"python",
"css",
"django",
"dom",
"django-templates",
""
] |
I have a requirement in which i have to force the sql not to use a particular index which exists on a table.
for example,
```
create table t1(id varchar2(10),data1 varchar2(3000));
create table t2(id varchar2(10),data2 varchar2(3000));
create index id1 on t1(id);
select * from t1,t2 where t1.id=t2.id;
```
I cannot drop the index id1, nor disable it, as I don't have rights on it. Therefore I want to add some kind of hint to avoid using it.
Is there any such hint, or is there any workaround for this.
Thanks in advance | Try using [NO\_INDEX](https://docs.oracle.com/cd/B19306_01/server.102/b14200/sql_elements006.htm#BABHJBIB) hint
For instance
```
SELECT /*+ NO_INDEX(t1 id1) */ *
FROM t1,
t2
WHERE t1.id = t2.id;
``` | There's a general principle that for every query for which you want to specify the execution plan, you need something like two or three hints per table.
In this case, you're probably looking for a hash join resulting from two full table scans, which is fairly simple so the hint block would be something like:
```
select /*+ full(t1) full(t2) use_hash(t1 t2) */
...
``` | How to force oracle to not to use an index | [
"",
"sql",
"oracle",
"query-performance",
"hints",
""
] |
I am creating an adaptor to custom data from a client. I cannot change their schema or modify the values in their tables, though I can suggest new indices. The approach is to use CTEs to join and reformat the custom data to use our column names, enumerated values, etc. Once the data is reformatted, our standard CTEs can be appended, and a query forged from it that can perform our standard analyses.
Some values resulting from the reformatting are NULL due to LEFT JOINs that had no match, or due to values in their data that are actually NULL.
My task is to substitute default values for NULLs in many fields, and also allow WHERE clauses to be inserted into the query. Currently, ISNULL calls or CASE statements are used to handle the default values. And currently, by the time the WHERE condition is hit, this substitution has already been performed, so that the end user, who has access to our query builder, can filter on a value which might be the default value. If the filter value is the default value, then records with NULL values that were replaced with the default should be selected.
The problem is that if I have myField = ISNULL(myField, 'MyDefault') as my reformatting formula, and later have WHERE myField = 'MyDefault' in an outer layer of the onion (a later CTE), that this where clause is not sargable: the query optimizer does not choose my index on myField.
A partial solution that occurs to me is to not do any NULL replacement in my inner CTEs, then have a CTE that gets the WHERE clauses inserted, then have an outer CTE that performs all the NULL replacements. Such a query could use the indices. (I have verified this.) However, the where clauses could no longer expect that a test of the value against the default value will also pick up the records with NULL values, since that substitution would not yet have occurred.
Is there a way to perform null substitution, allow SARGABLE where filters, and filter on NULL values as though they held the default value?
NOTE on problem size: A typical example involves JOINing a 6 million record table to a 7 million record table with a many-to-many relationship that creates 12 million records. When the filter is SARGABLE, the query takes about 10 seconds. When it is not SARGABLE, it takes over 10 minutes on one machine, and over three minutes on a faster machine.
COMMENT ON CHOSEN SOLUTION:
The clever use of intersection to allow comparison of a field to either a NULL or a non-NULL without ISNULL or other non-sargable functions can be instrumented into our code with the fewest changes to our legacy queries.
COMMENT 2: Missing case
There are these six cases:
1. Selected value is not null and does not equal the default and does not match the filter value. Should exclude.
2. Selected value is not null and does not equal the default and DOES match the filter value. Should include.
3. Selected value is not null and DOES equal the default value and does not match the filter value. Should exclude.
4. Selected value is not null and DOES equal the default value and DOES match the filter value. Should include.
5. Selected value is null and the filter value is not the default. Should exclude.
6. Selected value is null and the filter value is the default. Should include.
Case 4 does not work using the offered solution. The selected field is not null, so the first half of the intersection has a record with a non-null value. But in the second half of the intersection, the NULLIF statement has created a record with a null value. The intersection produces zero records. The record is rejected. I am still looking for a solution that handles this case. So close...
Update Solution:
I have a fix. Say that I am fitering on [County Name] and my default value is 'Unknown'...
```
where EXISTS (
select [County Name]
intersect
(select NULLIF('User selected county name', 'Unknown') union select 'User selected county name')
)
``` | It looks like you already are building your query dynamically so when you get a value from your tool that needs to be filtered on you could build a query with a where clause that looks something like this.
[SQL Fiddle](http://sqlfiddle.com/#!3/d4a82/2)
**MS SQL Server 2008 Schema Setup**:
```
create table YourTable
(
ID int identity primary key,
Name varchar(20)
)
create index IX_YourTable_Name on YourTable(Name)
insert into YourTable values
('Name1'),
('Name2'),
(null)
```
**Query 1**:
```
declare @Param varchar(20)
set @Param = 'DefaultName'
select ID,
coalesce(Name, 'DefaultName') as Name
from YourTable
where exists(select Name intersect select nullif(@Param, 'DefaultName'))
```
**[Results](http://sqlfiddle.com/#!3/d4a82/2/0)**:
```
| ID | NAME |
--------------------
| 3 | DefaultName |
```
**Query 2**:
```
declare @Param varchar(20)
set @Param = 'Name1'
select ID,
coalesce(Name, 'DefaultName') as Name
from YourTable
where exists(select Name intersect select nullif(@Param, 'DefaultName'))
```
**[Results](http://sqlfiddle.com/#!3/d4a82/2/1)**:
```
| ID | NAME |
--------------
| 1 | Name1 |
```
The query plan for the query above will use IX\_YourTable\_Name for a seek.

Ref: [Undocumented Query Plans: Equality Comparisons](http://web.archive.org/web/20180422151947/http://sqlblog.com:80/blogs/paul_white/archive/2011/06/22/undocumented-query-plans-equality-comparisons.aspx) | You said you can't change the schema, but I'm thinking outside the box here. You could add a new database that has views that look into the existing database. For example:
```
use NewViewDb
GO
CREATE VIEW dbo.[T1T2View]
AS
SELECT field1, field2, COALESCE(field3, 'default value'), ...
FROM RealDb.dbo.Table1 t1 LEFT JOIN RealDb.dbo.Table2 t2
ON t1.Id = t2.Id
GO
``` | Need SARGABLE way to filter records and also specify a default value for NULLs | [
"",
"sql",
"sql-server",
"performance",
""
] |
I have the following code:
```
scores = [matrix[i][i] / sum(matrix[i]) for (i, scores) in enumerate(matrix)]
```
My problem is that `sum(matrix[i])` could be 0 in some cases, resulting in a `ZeroDivisionError`. But because `matrix[i][i]` is also 0 in that case, I solved this as follows:
```
scores = [divide(matrix[i][i], sum(matrix[i])) for (i, scores) in enumerate(matrix)]
```
The function `divide(x, y)` returns 1 if `y == 0` and `(x / y)` if `y > 0`. But I wonder if there is an easier way. Maybe I could use some ternary operator, but does that exist in Python? | Yes, in Python it's called the [conditional expression](http://docs.python.org/reference/expressions.html#conditional-expressions):
```
[matrix[i][i] / sum(matrix[i]) if sum(matrix[i]) != 0 else 0
for (i, scores) in enumerate(matrix)]
``` | ```
[(lambda x, y: 0 if y == 0 else x/y)(row[i], sum(row))
for i, row in enumerate(matrix)]
``` | Preventing dividing by zero in list comprehensions | [
"",
"python",
"list-comprehension",
"divide-by-zero",
""
] |
I have a Python script with a `while(1)` loop that takes input from a table in a database,
processes it, and writes something to `stdout`, but I am unable to redirect its output to a file. I tried all the standard methods and suspect that because the script never stops (I have to
stop it with `Ctrl-Z`), it is unable to append the output of `stdout` to the file.
Any clues?? | I'm guessing that it never writes a newline? If that's true, you need to `sys.stdout.flush()` occasionally. | You could also disable I/O buffering with the `-u` option: `python -u yourscript.py`. (This can diminish performance in some cases.) | Redirection of output in infinitely running python script | [
"",
"python",
""
] |
I am writing documentation for a project and I would like to make sure I did not miss any method. The code is written in Python and I am using PyCharm as an IDE.
Basically, I would need a REGEX to match something like:
```
def method_name(with, parameters):
someVar = something()
...
```
but it should NOT match:
```
def method_name(with, parameters):
""" The doc string """
...
```
I tried using PyCharm's search with REGEX feature with the pattern `):\s*[^"']` so it would match any line after `:` that doesn't start with `"` or `'` after whitespace, but it doesn't work. Any idea why? | I don't know python, but I do know my regex.
And your regex has issues. First of all, as comments have mentioned, you may have to escape the closing parenthesis. Secondly, you don't match the new line following the function declaration. Finally, you look for single or double quotations at the START of a line, yet the start of a line contains whitespace.
I was able to match your sample file with `\):\s*\n\s*["']`. This is a multiline regex. Not all programs are able to match multiline regex. With `grep`, for example, you'd have to use [this method](https://stackoverflow.com/a/7167115/1968462).
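As a quick sanity check from Python itself, here is a sketch running that pattern against two small snippets modeled on the question's examples:

```python
import re

# ')' then ':', optional whitespace, a newline, indentation, then a quote,
# i.e. a function header immediately followed by a docstring.
DOC_PATTERN = re.compile(r'\):\s*\n\s*["\']')

documented = 'def method_name(a, b):\n    """ The doc string """\n    pass\n'
undocumented = 'def method_name(a, b):\n    someVar = something()\n    pass\n'

print(bool(DOC_PATTERN.search(documented)))    # True  -> has a docstring
print(bool(DOC_PATTERN.search(undocumented)))  # False -> flag for documentation
```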
A quick explanation of what this regex matches: it looks for a closing parenthesis followed by a semicolon. Any number of optional whitespace may follow that. Then there should be a new line followed by any number of whitespace (indentation, in this case). Finally, there must be a single or double quote. Note that this matches functions that *do* have comments. You'd want to invert this to find those without. | You mentioned you were using PyCharm: there is an inspection "Missing, empty, or incorrect docstring" that you can enable and will do that for you.
Note that you can then change the severity for it to show up more or less prominently.
 | How to find undocumented methods in my code? | [
"",
"python",
"pycharm",
""
] |
I have an `ActiveRecord` model with the standard `created_at`, `updated_at` timestamps. I want to run a query to find all records created or updated in a time range.
Is it possible to do this in a single ActiveRecord query?
The following works great for a single clause (just querying on `created_at`, for example).
```
range = 2.weeks.ago .. 1.week.ago
=> Mon, 03 Jun 2013 14:49:54 UTC +00:00..Mon, 10 Jun 2013 14:49:54 UTC +00:00
Item.where(:created_at => range)
Item Load (0.4ms) SELECT `items`.* FROM `items` WHERE (`items`.`created_at` BETWEEN '2013-06-16 14:40:55' AND '2013-06-17 14:40:55')
=> [... results ...]
```
When I try to add the second clause `ActiveRecord` is producing an `AND` statement, which is not what I want:
```
Item.where(:created_at => range, :updated_at => range)
SELECT `items`.* FROM `items`
WHERE (`items`.`created_at` BETWEEN '2013-06-16 14:40:55' AND '2013-06-17 14:40:55')
AND (`items`.`updated_at` BETWEEN '2013-06-16 14:40:55' AND '2013-06-17 14:40:55')
```
I could do the query with a single statement and add the results to a set to remove duplicates, but it really feels like something that should be a possible on once query.
Any ideas? | You'll have to write your own statement if you want to `OR` two conditions together:
```
Item.where("(created_at BETWEEN ? AND ?) or (updated_at BETWEEN ? and ?)", 2.weeks.ago, 1.week.ago, 2.weeks.ago, 1.week.ago)
``` | You overcomplicate your code by searching for `created_at` AND `updated_at`. You only need to search for `updated_at` because that gets set at a create too. So the solution for your problem is:
```
range = 2.weeks.ago .. 1.week.ago
Item.where(:updated_at => range)
``` | Find ActiveRecord objects created or updated in a time range | [
"",
"sql",
"ruby-on-rails",
"activerecord",
""
] |
So I'm a bit curious about why this doesn't work.
How come code like:
```
for a in range(10) and b in range(10):
print a + b
```
generates an error that says 'b is not defined'?
Also, code like:
```
for a,b in range(10):
print a + b
```
generates an error: 'int objects are not iterable'.
Why? I haven't established their value beforehand, so how would Python know they are int objects? Also, I know you could use a while loop instead, but is there any way to carry out the sort of operation I'm doing using a for-loop alone? | ```
for a,b in zip(range(10),range(10)):
print a + b
```
should work great... assuming I understood your question properly
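Collecting the sums instead of printing them makes the pairing easy to see (Python 3 syntax):

```python
# zip pairs the two ranges index by index: (0, 0), (1, 1), ..., (9, 9)
sums = [a + b for a, b in zip(range(10), range(10))]
print(sums)  # [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
```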
if not then
```
for a in range(10):
for b in range(10):
print a+b
```
or even `[a+b for a in range(10) for b in range(10)]` | Other answers explained the right way to do it, but nobody explained what was wrong with what you did yet.
```
for a in range(10) and b in range(10):
print a + b
```
That's a cute idea of some intuitive syntax, but Python doesn't know that one. The precedence of what you programmed actually works out like this:
```
for a in ((range(10)) and (b in range(10))):
```
Python thinks you're trying to make a complex expression to generate a *single* iterable to iterate over. The first error occurs when it tries to evaluate b to build the value. If b was defined, then `b in range(10)` would result in `True` or `False`. The result of anding it with `range(10)` will also be a boolean. Then you'd hit another error trying to iterate over a boolean.
```
for a,b in range(10):
print a + b
```
This kind of syntax works, if the enumeration on the right contains elements that are 2-tuples. The first step in this for loop is the equivalent of trying `a,b = 0`. It tries to "unpack" the right hand side by iterating over it. But you can't iterate over a single integer. a and b are not defined yet, but the first element of range(10) is. That's the integer you can't iterate over. | Using the 'And' Operator in a For-Loop in Python | [
"",
"python",
"for-loop",
"typeerror",
""
] |
How to get the last Thursday of every month in the year 2013 in Oracle? I need to update this date into my table.
I need an output like
```
Last Thursday in a year 2013
----------------------
31.01.2013
28.02.2013
28.03.2013
24.04.2013
30.05.2013
27.06.2013
25.07.2013
29.08.2013
26.09.2013
31.10.2013
28.11.2013
26.12.2013
```
Thanks to do the needful. | This will do it:
```
select next_day (last_day (add_months(date '2013-01-01', rownum-1))-7, 'THU') as thurs
from dual
connect by level <= 12;
THURS
---------
31-JAN-13
28-FEB-13
28-MAR-13
25-APR-13
30-MAY-13
27-JUN-13
25-JUL-13
29-AUG-13
26-SEP-13
31-OCT-13
28-NOV-13
26-DEC-13
12 rows selected.
```
Explanation:
1) The following `select` is a way to generate a series of integers 1..12:
```
select rownum from dual connect by level <= 12;
```
2) This returns the 1st of each of the 12 months of 2012 by taking 1st January 2013 and adding 0 months, 1 month, ..., 11 months:
```
select add_months(date '2013-01-01', rownum-1)
from dual connect by level <= 12;
```
3) The `last_day` function returns the last day of the month for the given date, so that we now have 2013-01-31, 2013-02-28, ..., 2013-12-31.
4) `next_day (date, 'THU')` returns the next Thursday after the specified date. To get the last Thursday of the month we take the last day of the month, go back 7 days, then find the next Thursday. | I'd go with `dbms_scheduler`:
```
declare
start_dt date := date '2013-01-01';
months_last_thursday date;
begin
loop
dbms_scheduler.evaluate_calendar_string (
calendar_string => 'FREQ=MONTHLY;BYDAY=-1 THU',
start_date => start_dt,
return_date_after => start_dt,
next_run_date => months_last_thursday
);
exit when months_last_thursday > date '2013-12-31';
dbms_output.put_line(months_last_thursday);
start_dt := months_last_thursday;
end loop;
end;
/
``` | How to get last thursday of every month in a year 2013 in oracle? | [
"",
"sql",
"oracle",
""
] |
Suppose i have two tables.
```
table one:
| col1 |
- - - - -
| do |
| big |
| gone |
table two
| col1 | col2 | col3 | col4 |
- - - - - - - - - - - - - - -
| do | blah | blah | big |
| big | do | blah | gone |
| blah | blah | blah | blah |
```
How do I search `table two` such that the displayed rows contain all values of `col1` of `table one`?
For example, the result for the given situation should be
```
| col1 | col2 | col3 | col4 |
- - - - - - - - - - - - - - -
| big | do | blah | gone |
``` | Nasty problem...
```
SELECT two.*
FROM two
WHERE (SELECT COUNT(*) FROM one) =
(CASE WHEN col1 IN (SELECT * FROM one) THEN 1 ELSE 0 END +
CASE WHEN col2 IN (SELECT * FROM one) THEN 1 ELSE 0 END +
CASE WHEN col3 IN (SELECT * FROM one) THEN 1 ELSE 0 END +
CASE WHEN col4 IN (SELECT * FROM one) THEN 1 ELSE 0 END
)
```
The term 'efficiency' should not be mentioned in conjunction with this query. | Perhaps the trickiest part of this is guaranteeing that *all* the columns are covered in the second table. It is not enough just to count them, you also have be sure that all are the set:
```
select t.*
from two t left outer join
one o1
on o1.col1 = t.col1 left outer join
one o2
on o2.col1 = t.col2 and o2.col1 not in (coalesce(t.col1, '')) left outer join
one o3
on o3.col1 = t.col3 and o3.col1 not in (coalesce(t.col1, ''), coalesce(t.col2, '')) left outer join
one o4
on o4.col1 = t.col4 and o4.col1 not in (coalesce(t.col1, ''), coalesce(t.col2, ''), coalesce(t.col3, '')) cross join
(select count(*) as cnt from one) const
where const.cnt = ((case when o1.col1 is not null then 1 else 0 end) +
(case when o2.col1 is not null then 1 else 0 end) +
(case when o3.col1 is not null then 1 else 0 end) +
(case when o4.col1 is not null then 1 else 0 end)
)
```
This looks up each value in the `one` table, with the proviso that the value has not been seen before. If there are duplicates in the `one` table, there is question on how to handle them. Would that mean that the value has to appear that many times? | Search from a table using values of a column of another table | [
"",
"sql",
""
] |
I have 2 tables
```
TBL_HEADER
----------
HEADER_ID
COST_CENTER
TBL_RESULTS
-----------
WEEK_NO
COST_CENTER
HEADER_ID_FK
```
I have a requirement to copy all of the COST\_CENTER\_CODES from TBL\_HEADER into TBL\_RESULTS
joining on the HEADER\_ID > HEADER\_ID\_FK.
I tried this but the subquery is returning multiple rows
```
UPDATE
TBL_RESULTS R
SET
COST_CENTRE = (
SELECT
H.COST_CENTRE
FROM
TBL_HEADER H,
TBL_RESULTS R
WHERE
H.HEADER_ID = R.HEADER_ID_FK
)
```
Can someone point me in the right direction and explain why this is happening?
I'm using Oracle 10.2.0.4
many thanks
JC | You want a correlated subquery, not a subquery with a join:
```
UPDATE TBL_RESULTS
SET COST_CENTRE = (SELECT H.COST_CENTRE
FROM TBL_HEADER H
WHERE H.HEADER_ID = TBL_RESULTS.HEADER_ID_FK
)
```
The extra reference to `tbl_results` was causing problems. | ```
UPDATE
TBL_RESULTS R
SET
COST_CENTRE = (
SELECT
H.COST_CENTRE
FROM
TBL_HEADER H
WHERE
H.HEADER_ID = R.HEADER_ID_FK
)
``` | Update multiple rows on second table based on first table data | [
"",
"sql",
"oracle10g",
""
] |
I'm making a game in pygame and I'm trying to make the game progressively harder by speeding up the enemies every ten seconds.
Here is the method I'm trying:
```
def time_pass(self):
#timer for main game play
self.time_passed = time.clock()
if self.time_passed == (self.start_time + 10):
self.str_spd1 += 2
self.str_spd2 += 2
self.str_spd3 += 2
```
`self.start_time` is the time the gameplay started and `self.time_passed` is the time in seconds since the program started.
So if the game has been running for ten seconds, the speed should increase by 2, but it only happens when the time is exactly 10, then it goes back.
If I use `>` instead of `==`, the game basically crashes because the stars keep accelerating.
So I need a way to make it so that every ten seconds the speed of the stars will increase by 2.
`time.clock()` is very wrong—on any platform other than Windows, it's counting CPU time instead of wall clock time.
`time.time()` is closer, in that it's wall clock on every platform. It's not guaranteed to have sub-second precision, but when you're only checking every 10 seconds, that's no problem. What *is* a problem is that it doesn't handle changes to the system clock very nicely. For example, if you set your clock ahead an hour, the timer won't fire for 3610 seconds instead of just 10.
Fortunately, `PyGame` has its [`pygame.time`](http://www.pygame.org/docs/ref/time.html) module, with features designed specifically to deal with cases like this.
The easiest thing to do is to just use `pygame.game.set_timer`. Instead of checking the time every frame or every idle tick or whatever, just add an event handler and ask PyGame to fire that event every 10 seconds.
If that isn't appropriate, `pygame.time.get_ticks` gives you the number of milliseconds of wall clock time since the game started, which is probably what you'd want to use with Elazar's code.
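One deterministic way to use it, sketched as a pure function so it is easy to test (the +2-per-10-seconds rule comes from the question; the `base_speed` name is invented, and in the game loop `elapsed_ms` would be `pygame.time.get_ticks()`):

```python
def star_speed(base_speed, elapsed_ms):
    """Speed after elapsed_ms milliseconds: +2 for every full 10 seconds."""
    return base_speed + 2 * (elapsed_ms // 10000)

print(star_speed(5, 0))      # 5  -> no bonus yet
print(star_speed(5, 9999))   # 5  -> still inside the first 10 seconds
print(star_speed(5, 10000))  # 7  -> first +2 step
print(star_speed(5, 35000))  # 11 -> three full 10-second intervals
```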
But it's worth reading the linked page, and some of the examples, before deciding what you want to do. | in `__init__`: `self.i = 0`
```
if self.time_passed >= (self.start_time + 10 * self.i):
self.i += 1
...
``` | Changing variables after a certain amount of time | [
"",
"python",
"variables",
"time",
"pygame",
""
] |
I have a table called fields that looks like this:
```
Name | Table
---------------+----------------
DateFound | Table1
DateUpdate | Table2
DateCharge | Table3
DateLost | Table4
DateDismissed | Table5
```
And what I want to do is change the year for all of those fields in their specified table to 2013. They are all datetime fields in their respective table. So basically, I want to have DateFound to be changed in Table1 from 06/12/2009 16:14:23 to 06/12/2013 16:14:23.
Is there an easy way to do this by saying something like:
```
SELECT (SELECT [Name]
FROM fields)
FROM (SELECT [Table]
From fields)
``` | Try with cursor and dynamic sql, something like this.
```
DECLARE @changeYear datetime
SET @changeYear = '2013-01-01'
DECLARE @tableName varchar(50), @columnName varchar(50)
Declare updateTime_Cursor Cursor FOR
select [Name], [Table] from fields
OPEN updateTime_Cursor
FETCH NEXT FROM updateTime_Cursor
INTO @columnName, @tableName
WHILE @@FETCH_STATUS = 0
BEGIN
DECLARE @sql nvarchar(1000)
SELECT @sql = 'Update dbo.'+@tablename+' set '+@columnName+' = DATEADD(yy, DATEDIFF(yy, '+@columnName+', @changeYear), '+@columnName+')'
EXEC sp_executesql @sql, N'@changeYear datetime', @changeYear
FETCH NEXT FROM updateTime_Cursor
INTO @columnName, @tableName
END
CLOSE updateTime_Cursor
DEALLOCATE updateTime_Cursor
``` | I have not tested this, but I think you can run this for each table/field it will do the trick.
```
UPDATE Table1
SET DateFound = DATEADD(YY, DATEDIFF(YY, DateFound, getdate()), DateFound)
``` | Updating data from multiple tables based on a row in another table | [
"",
"sql",
"sql-server",
"select",
""
] |
I have a program that recursively goes through 2 directories and puts the filename:sha256hash into 2 dicts, folder1 and folder2.
What I want to do is a comparison of the hashes and if the hashes match but the key is different, put the key into a new list called "renamed". I have the logic in place to account for deleted files, new files, and files where the key is the same but the value(hash) is different (a modified file) but can't for the life of me get my head around doing the opposite.
```
# Put filename:hash into 2 dictionaries from the folders to compare
for root, dirs, files in os.walk(folder_1):
for file in files:
files1[file] = get_hash(os.path.join(root,file))
for root, dirs, files in os.walk(folder_2):
for file in files:
files2[file] = get_hash(os.path.join(root, file))
# Set up the operations to do for the comparison
set_files2, set_files1 = set(files2.keys()), set(files1.keys())
intersect = set_files2.intersection(set_files1)
# Compare and add to list for display
created.extend(set_files2 - intersect)
deleted.extend(set_files1 - intersect)
modified.extend(set(k for k in intersect if files1[k] != files2[k]))
unchanged.extend(set(k for k in intersect if files1[k] == files2[k]))
```
The issue with this is 1: it doesn't account for renamed files, 2: it puts renamed files into created, so once I have renamed files I have to created = created - renamed to filter those out of actual new files.
Any/all help is appreciated. I've come this far but for some reason my mind is on strike. | You can flip your `files1` and `files2` dicts:
```
name_from_hash1 = {v:k for k, v in files1.items()}
name_from_hash2 = {v:k for k, v in files2.items()}
```
(The flipping idiom I found on [this SO answer](https://stackoverflow.com/a/483833/400793).)
Then,
```
renamed = []
for h in name_from_hash1:
if h in name_from_hash2 and name_from_hash1[h] != name_from_hash2[h]:
renamed.append(name_from_hash2[h])
```
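For instance, a quick sanity check with made-up file names and stand-in hash strings (hypothetical data, not real SHA-256 digests):

```python
# Hypothetical data: 'b.txt' was renamed to 'c.txt' but its hash is unchanged.
files1 = {'a.txt': 'h1', 'b.txt': 'h2'}
files2 = {'a.txt': 'h1', 'c.txt': 'h2'}
name_from_hash1 = {v: k for k, v in files1.items()}
name_from_hash2 = {v: k for k, v in files2.items()}
renamed = []
for h in name_from_hash1:
    # Same hash on both sides, but a different name -> a rename.
    if h in name_from_hash2 and name_from_hash1[h] != name_from_hash2[h]:
        renamed.append(name_from_hash2[h])
print(renamed)  # ['c.txt']
```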
`renamed` is then the list of renamed filenames by their current names. You can get the list of the original names of the renamed files by changing `name_from_hash2` to `name_from_hash1` in the last line. | I've got a simple solution for you: rather than having the filenames as keys and hashes as values, have the hashes as keys and filenames as values (after all, you want the keys to be unique, not the values). You'd simply have to adjust the rest of your program to account for that. (Oops, looks like Bitwise already mentioned that in a comment. Oh well.)
If you don't want to change the rest of your code, here's a good one-liner method to create a set of renamed files, if you're using Python 2.7+:
```
renamedfiles = {k for k, v in hashes1.items() if v in hashes2.values()}
```
For slightly increased efficiency in Python 2.7, use `iteritems()` and `itervalues()` instead (Python 3 represents its key, item, and value views as iterators by default).
Addendum: You could also do `renamedfiles = filter(lambda item:item in hashes2.values(), hashes1.items())`, though that would result in an iterator over the qualifying key/value pairs rather than a set or dict. Also, I believe comprehensions are generally preferred in Python even though `filter()` is one of the built-in methods. | Comparing 2 dicts, if values are the same, but key is different, add key to a new list in python | [
"",
"python",
"dictionary",
"comparison",
""
] |
Why am I getting duplicates? Seems simple but it isn't getting through my thick skull.
```
SELECT MOPACTIVITY.MOPID STRICT,
TO_CHAR(MOPNOTES.MOPNOTEDATE,
'yyyy-mm-dd hh24:mi') "MOPNOTEDATE"
FROM MOPUSER.MOPACTIVITY
INNER JOIN MOPUSER.MOPNOTES
ON MOPACTIVITY.MOPID=MOPNOTES.MOPID
```
How do I get only one child record to the parent and the child record being the most recent one by MOPNOTES.MOPNOTEDATE? | Try grouping, and using a maximum:
```
SELECT MOPACTIVITY.MOPID STRICT,
TO_CHAR(max(MOPNOTES.MOPNOTEDATE), 'yyyy-mm-dd hh24:mi') "MOPNOTEDATE"
FROM MOPUSER.MOPACTIVITY
INNER JOIN MOPUSER.MOPNOTES ON MOPACTIVITY.MOPID=MOPNOTES.MOPID
GROUP BY MOPACTIVITY.MOPID
``` | If you only need those columns from `MOPNOTES`, then this should do:
```
SELECT MOPACTIVITY.MOPID STRICT,
TO_CHAR(MN.MOPNOTEDATE,
'yyyy-mm-dd hh24:mi') "MOPNOTEDATE"
FROM MOPUSER.MOPACTIVITY
INNER JOIN (SELECT MOPID, MAX(MOPNOTEDATE) AS "MOPNOTEDATE"
FROM MOPUSER.MOPNOTES
GROUP BY MOPID) MN
ON MOPACTIVITY.MOPID=MN.MOPID
``` | Why Duplicates in my join | [
"",
"sql",
"oracle",
"select",
"join",
""
] |
I am trying to get a list with a specific output from an index in another list,
for example:
```
L = [(0, 1, 2, 3, 4, 5), (6, 7, 8, 9, 10,...etc), (...etc)]
multiple_index = [entry[0, 3, 4] for entry in L]
#----> I know this specific code is wrong
```
I would love it if the above code could output:
```
[(0, 3, 4), (6, 9, 10), (...etc)]
```
I want the individual sub-indices from each index in the main list to be grouped as shown, if that is at all possible, and am wondering what code I could use to properly pull this off, thanks.
EDIT:
Also, how could I format it to display as rows cleanly? I am outputting them to a text file using .writelines and a separate output line. Thanks again! | Use [`operator.itemgetter()`](http://docs.python.org/2/library/operator.html#operator.itemgetter):
```
from operator import itemgetter
multiple_index = map(itemgetter(0, 3, 4), L)
```
or in a list comprehension:
```
multiple_index = [itemgetter(0, 3, 4)(i) for i in L]
``` | Here is one option:
```
L = [(0, 1, 2, 3, 4, 5), (6, 7, 8, 9, 10, 11), (11, 12, 13, 14, 15, 16)]
multiple_index = [(entry[0], entry[3], entry[4]) for entry in L]
```
Or using [`operator.itemgetter()`](http://docs.python.org/2/library/operator.html#operator.itemgetter):
```
from operator import itemgetter
indices = itemgetter(0, 3, 4)
multiple_index = [indices(entry) for entry in L]
``` | Python list comprehension with multiple variables | [
"",
"python",
"list",
"list-comprehension",
""
] |
I have the following query
```
SELECT dbo.tblRegion.RegionName,
dbo.tblDistributionLocation.DistributionLocationName,
dbo.tblTSA.TSAName,
TEmailInfo.EmailCM,
COUNT(*) AS EmailCount
FROM dbo.tblArea
INNER JOIN dbo.tblTerritory
ON dbo.tblArea.AreaID = dbo.tblTerritory.AreaID
INNER JOIN dbo.tblDistribution
ON dbo.tblTerritory.TerritoryID = dbo.tblDistribution.TerritoryID
INNER JOIN dbo.tblDistributionLocation
ON dbo.tblDistribution.DistributionID = dbo.tblDistributionLocation.DistributionID
INNER JOIN dbo.tblRegion
ON dbo.tblArea.RegionID = dbo.tblRegion.RegionID
INNER JOIN dbo.tblTSA
ON dbo.tblDistributionLocation.DistributionLocationID =
dbo.tblTSA.DistributionLocationID
INNER JOIN dbo.tblTSAEmail
ON dbo.tblTSA.TSAID = dbo.tblTSAEmail.TSAID
INNER JOIN (SELECT *
FROM dbo.tblCMEvalEmail
WHERE ( dbo.tblCMEvalEmail.EmailSentDate
BETWEEN '2013-05-19 00:00:00' AND '2013-06-16 23:59:59' )) AS TCMEvalEmail
ON dbo.tblTSAEmail.TSAEmail = TCMEvalEmail.EmailSenderEmail
INNER JOIN (SELECT *
FROM dbo.tblCMEvalEmailInfo
WHERE dbo.tblCMEvalEmailInfo.EmailCMFacingDate
BETWEEN '2013-05-19 00:00:00' AND '2013-06-16 23:59:59') AS TEmailInfo
ON TCMEvalEmail.EmailID = TEmailInfo.EmailID
WHERE ( dbo.tblTSA.TSAActive = 1 )
AND TCMEvalEmail.EmailStatus = 'Success'
GROUP BY dbo.tblRegion.RegionName,
dbo.tblDistributionLocation.DistributionLocationName,
dbo.tblTSA.TSAName,
TEmailInfo.EmailCM
```
What's wrong with this query that it takes so much time?
But if I shorten the range to '2013-05-20 00:00:00' and '2013-06-16 23:59:59' then it replies quickly. What's the problem with my query that makes it take so much time? | Performance tuning is not just flipping a magic switch - it's hard work.
So start with the most obvious: try to reduce your query to the absolute minimum.
E.g.
* why are you selecting `SELECT *` in your inner queries, when you're only ever using a single (or two) columns from that data? Only select what you **really need**!
In the first case, if I'm not mistaken, you only ever need the `EmailSenderEMail` column - so select only that!
```
INNER JOIN
(
select EmailSenderEmail
from dbo.tblCMEvalEmail
where (dbo.tblCMEvalEmail.EmailSentDate BETWEEN '2013-05-19 00:00:00'
AND '2013-06-16 23:59:59')
) as TCMEvalEmail ON dbo.tblTSAEmail.TSAEmail = TCMEvalEmail.EmailSenderEmail
```
In the second case, you need the `EmailID` for the JOIN, and the `EmailCM` in the output of the `SELECT` - so select only those two columns!
```
INNER JOIN
(
select EMailID, EMailCM
from dbo.tblCMEvalEmailInfo
where dbo.tblCMEvalEmailInfo.EmailCMFacingDate BETWEEN '2013-05-19 00:00:00'
and '2013-06-16 23:59:59'
) as TEmailInfo ON TCMEvalEmail.EmailID = TEmailInfo.EmailID
```
* next step: make sure you have the appropriate indexes in place. If you have subselects like these, it's extremely valuable to have an index that will *cover your query*, e.g. that will return exactly those columns you need. So do you have an index on `dbo.tblCMEvalEmail` with the `EmailSenderEMail` column? Do you have an index on `dbo.tblCMEvalEmailInfo` that contains the two columns `EMailID, EMailCM` ?
* another thing: all foreign key columns should be indexed, to improve the speed of JOIN operations, and to help speed up foreign key constraint checks. Are your foreign keys used here all indexed? | As marc\_s points out, optimization is not a quick thing to do, nor is it a one-trick-fixes-all solution. Your best bet is to read up on the topic (see <http://beginner-sql-tutorial.com/sql-query-tuning.htm> for some starter tips).
You should also read up on the EXPLAIN PLAN tool (or equivalent variation for your DB) which is a vital optimisation tool; it will highlight things that can be slowing down your query on your particular database, like full table scans - eliminating these typically gives you quick wins and often a noticable improvement.
Just off the bat though, the two things that jumps out at me are:
1. Do you have indices set up on all the IDs you're using to join? If not, this will have a negative hit on performance
2. TCMEvalEmail.EmailStatus='Success' is a string match which is typically a slow comparison to do; without seeing the results of your Explain Plan it's hard to say, but you might want to consider replacing this with a numeric status code (e.g. a Foreign Key to a STATUS table) - but since this could be a big task you should only do it if Explain Plan highlights it as an issue. | Query execution takes too much time | [
"",
"sql",
"sql-server-2008",
""
] |
I'm looking for a way to remove every line before the line which contains a specific string, in a multiline string like this:
```
string1
string2
string3
==== bump
string4
string5
string6
==== bump
```
But only the first matching one...
At the end I would like to have this as an output:
```
==== bump
string4
string5
string6
==== bump
``` | ```
import re
text = '''\
string1
string2
string3
==== bump
string4
string5
string6
==== bump'''
print(re.split(r'(==== bump)', text, maxsplit=1)[-1])
```
yields
```
string4
string5
string6
==== bump
``` | ## Alternative Language: Use Perl's Flip-Flop Operator
Assuming that you've stored your text in */tmp/corpus*, you could use the following Perl one-liner:
```
perl -ne 'print if /\A==== bump/ ... /\A==== bump/' /tmp/corpus
```
This leverages the power of Perl's [range operator](http://perldoc.perl.org/perlop.html#Range-Operators). If you want to capture the output from Perl within your Python program, you can use the [Python subprocess](http://docs.python.org/2/library/subprocess.html) module. For example:
```
import subprocess
result = subprocess.check_output(
"perl -ne 'print if /\A==== bump/ ... /\A==== bump/' /tmp/corpus",
shell=True)
print result
``` | How to find line with regex and remove any preceding lines | [
"",
"python",
"regex",
""
] |
I am using time.sleep(10) in my program. Can I display the countdown in the shell when I run my program?
```
>>>run_my_program()
tasks done, now sleeping for 10 seconds
```
and then I want it to do 10,9,8,7....
is this possible? | you could always do
```
#do some stuff
print 'tasks done, now sleeping for 10 seconds'
for i in xrange(10,0,-1):
time.sleep(1)
print i
```
This snippet has the slightly annoying feature that each number gets printed out on a newline. To avoid this, you can
```
import sys
import time
for i in xrange(10,0,-1):
sys.stdout.write(str(i)+' ')
sys.stdout.flush()
time.sleep(1)
``` | This is the best way to display a timer in the console for Python 3.x:
```
import time
import sys
for remaining in range(10, 0, -1):
sys.stdout.write("\r")
sys.stdout.write("{:2d} seconds remaining.".format(remaining))
sys.stdout.flush()
time.sleep(1)
sys.stdout.write("\rComplete! \n")
```
This writes over the previous line on each cycle. | Display a countdown for the python sleep function | [
"",
"python",
"time",
"sleep",
"stopwatch",
"countdowntimer",
""
] |
I have 3 lists-of-lists.
The sub-lists' field 1 is a name, field 2 is a number, and field 3 is a number. This format is always the same, and doesn't change. There are always the same names in the 3 lists; however, the **order may not be the same**.
```
a = [['jane', '1', '120'], ['bob', '3', '35'], ['joe', '5', '70']]
b = [['bob', '1', '12'], ['jane', '2', '240'], ['joe', '1', '100']]
c = [['joe', '2', '30'], ['jane', '5', '45'], ['bob', '0', '0']]
```
I would like a result (any object type) with the sum of fields 2 & 3 of the lists' sub-lists.
```
result = [['jane', '8', '405'], ['bob', '4', '47'], ['joe', '8', '200']]
```
---
In pseudo Python3 code, I'm guessing it'd look like this, but I cannot figure out the correct way to do it in Python3. Let alone doing it in a Pythonic way:
```
def sum_f2_f3(list_a, list_b)
where element[0] in list_a.sub_list == element[0] in list_b.sub_list:
x = element[0]
result[x:1] = list_a.sub_list[0:1] + list_b.sub_list[0:1]
result[x:2] = list_a.sub_list[0:2] + list_b.sub_list[0:2]
return result
result = sum_f2_f3(sum_f2_f3(a,b), c)
```
Any ideas? What built-in Python tools can help me with this? | To illustrate why using the right data structures makes things a lot easier…
Let's say that `a`, `b`, and `c` were actually `dict`s, and your numbers were actually `int`s instead of `str`s. After all, the whole point of a `dict` is to look things up by name, and the whole point of an `int` is to be able to do arithmetic. So:
```
a = {'jane': [1, 120], 'bob': [3, 35], 'joe': [5, 70]}
b = {'bob': [1, 12], 'jane': [2, 240], 'joe': [1, 100]}
c = {'joe': [2, 30], 'jane': [5, 45], 'bob': [0, 0]}
```
Now, all you have to do is this:
```
result = {}
for d in a, b, c:
for k, v in d.items():
if not k in result:
result[k] = [0, 0]
result[k][0] += v[0]
result[k][1] += v[1]
```
And the result is:
```
{'bob': [4, 47], 'jane': [8, 405], 'joe': [8, 200]}
```
There's still a bit of room for improvement—you can use a `defaultdict` to get rid of the `if not k in result:` bit—but even with just novice-level stuff this is pretty compact and simple.
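A hedged sketch of that `defaultdict` variant, using the same made-up data:

```python
from collections import defaultdict

a = {'jane': [1, 120], 'bob': [3, 35], 'joe': [5, 70]}
b = {'bob': [1, 12], 'jane': [2, 240], 'joe': [1, 100]}
c = {'joe': [2, 30], 'jane': [5, 45], 'bob': [0, 0]}

# Missing keys start at [0, 0], so no membership check is needed.
result = defaultdict(lambda: [0, 0])
for d in a, b, c:
    for k, (x, y) in d.items():
        result[k][0] += x
        result[k][1] += y

print(dict(result))  # {'jane': [8, 405], 'bob': [4, 47], 'joe': [8, 200]}
```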
---
But what if you got those lists as input—you'd like to have nice dicts in the end, but you don't start there?
You can write a function to convert them, like this:
```
def convert(list_of_lists):
result = {}
for element in list_of_lists:
key = element[0]
values = []
for value in element[1:]:
values.append(int(value))
result[key] = values
return result
```
And if you spot the familiar `values = []… for value in … values.append(…)` pattern, you can turn that into the simple list comprehension `[int(value) for value in element[1:]]`. And then the whole thing is the dict equivalent of the same pattern, so you can reduce all of it to:
```
return {element[0]: [int(value) for value in element[1:]] for element in list_of_lists}
```
Meanwhile, if you need to convert back to the original form, that's just:
```
def unconvert(dict_of_lists):
result = []
for key, values in dict_of_lists.items():
element = [key] + [str(value) for value in values]
result.append(element)
return result
``` | This seems to give what you want using more pythonic list comprehensions.
```
>>> [[e[0][0], sum(int(r[1]) for r in e), sum(int(r[2]) for r in e)]
for e in zip(a, b, c)]
[['jane', 8, 405], ['bob', 4, 47], ['joe', 8, 200]]
```
If you want it to work with out of order names, you could do something like this
```
>>> from itertools import groupby
>>> [[name] +
reduce(
lambda a, b: [int(c) + int(d) for (c,d) in zip(a, b)],
[r[1:] for r in records])
for name, records
in groupby(
sorted(r for l in [a, b, c] for r in l),
lambda r: r[0])
]
[['bob', 4, 47], ['jane', 8, 405], ['joe', 8, 200]]
```
Don't judge me. I don't really write code like that. | How do I sum multiple lists-of-lists on certain sub-list's fields? | [
"",
"python",
"python-3.x",
""
] |
I have some models and tables in EF; you can see one of them here:

Now when I want to generate the database from the model, it adds an 's' to the table names in the generated SQL:
```
CREATE TABLE [dbo].[Options] (
[Id] int IDENTITY(1,1) NOT NULL,
[Name] nvarchar(50) NOT NULL,
[Price] int NOT NULL
);
```
I also disabled pluralizing of names as shown here, but nothing changed:

This causes errors on deploying the web application. How can I prevent pluralizing? | Just override the OnModelCreating method and remove the “PluralizingTableNameConvention” convention, telling Entity Framework not to pluralize table names. Simply add:
Updated
```
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
modelBuilder.Conventions.Remove<PluralizingTableNameConvention>();
}
```
It will remove the pluralizing convention that is by default attached to all model builders.
Also you need to add a using directive for the namespace:
```
using System.Data.Entity.ModelConfiguration.Conventions;
```
Hope it will help | You should uncheck the pluralize tick when you are creating the EDMX file | EF pluralize table's name on generating database from model | [
"",
"sql",
"visual-studio-2010",
"entity-framework",
"ef-model-first",
"pluralize",
""
] |
I have two tables which have the same structure, let’s say TA and TB. Most records in TA and TB are the same. There are some records in TA that are not in TB and some in TB that are not in TA. I just want to get all the records from TA and TB without duplicates:
```
Select * from TA
Union
Select * from TB
```
This query does give me the results I want. But the performance is not good, as in production there are more than half a million rows in both tables. Is there a simple way to get all the records from both tables? Both tables have an id column which has a unique value and can be joined on. | You could use a `NOT EXISTS` + `UNION ALL`:
```
Select * from TA
UNION ALL
Select * from TB where not exists (select * from TA where TA.KEY_ID = TB.KEY_ID)
```
This gets you all data in `TA` and non-duplicates from `TB`. | Try:
```
SELECT COALESCE(A.ID,B.ID), COALESCE(A.field2,B.field2), etc.
FROM A
FULL JOIN B
ON A.Id = B.ID
``` | rewrite a union query | [
"",
"sql",
"oracle",
"oracle11g",
""
] |
I have a list of over 100000 values and I am iterating over these values and checking if each one is contained in another list of random values (of the same size).
I am doing this by using `if item[x] in randomList`.
How efficient is this? Does python do some sort of hashing for each container or is it internally doing a straight up search of the other container to find the element I am looking for?
Also, if it does this search linearly, then does it create a dictionary of the randomList and do the lookup with that? | `in` is implemented by the `__contains__` magic method of the object it applies to, so the efficiency is dependent upon that. For instance, `set`, `dict` and `frozenset` will be hash based lookups, while `list` will require a linear search. However, `xrange` (or `range` in Python 3.x) has a `__contains__` method that doesn't require a linear search, but instead can use the start/stop/step information to determine a truthy value. (eg: `7 in xrange(4, 1000000)` isn't done linearly).
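For instance, a minimal hypothetical class shows the hook: membership is whatever `__contains__` decides, with no iteration involved.

```python
# Hypothetical class: `in` delegates to __contains__, so the check
# here is O(1) arithmetic rather than a scan of any container.
class Evens(object):
    def __contains__(self, x):
        return isinstance(x, int) and x % 2 == 0

print(2 in Evens())   # True
print(3 in Evens())   # False
```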
Custom classes are free to implement `__contains__` however they see fit but ideally should provide some information about how it does so in documentation if "not obvious". | You will want to pre-convert your list to a set, where hashing can be used for O(1) lookup.
See <http://wiki.python.org/moin/TimeComplexity>
(Normally, you have to search every element in a 'classical' list to tell if something is in it (unless your data structure also keeps a set of all elements, but that would add a huge amount of time and space complexity, and the programmer can implement it themselves).) | How efficient is Python's 'in' or 'not in' operators, for large lists? | [
"",
"python",
"iteration",
"complexity-theory",
""
] |
SQL Server 2008 R2(SP2) 10.50.4263
I have an application number in col1 and an indicator (0 or 1) for first-time buyers (ftb) in col2. Each application can have 1 or 2 applicants. For applications with two applicants, I get two entries per application, like so:
```
application ftb
----------- ---
1234 0
12345 0
12345 1
2345 1
23456 0
23456 0
```
The desired result is each unique application and ftb. If an application has multiple ftb values...take the highest (its always a 0 or 1).
I would like to see this:
```
application ftb
----------- ---
1234 0
12345 1
2345 1
23456 0
```
I've been trying to use PARTITION but...not any luck. The table is provided to me and I'm unable to alter it. This needs to be done in sql and not excel. I've tried a million permutations and even self joins to solve this. Stuck.
Can some kind soul point the way? | ```
SELECT
application,
MAX(ftb) AS ftb
FROM
yourTable
GROUP BY application
```
I understood your question to mean that you want the table to contain only the rows presented in your desired output and to delete the others, right? The simplest way to achieve this is to use a temporary table.
```
SELECT
application,
MAX(ftb) AS ftb
INTO #yourTempTable
FROM
yourTable
GROUP BY application;
```
Then delete the existing rows for those applications:
```
DELETE FROM yourTable WHERE application IN (SELECT DISTINCT application FROM #yourTempTable)
```
And finally insert from the temporary table back into your table again.
```
INSERT INTO yourTable (application, ftb)
SELECT application, ftb FROM #yourTempTable;
```
And that's it. | To return the sequence number of a row within a partition of column "application", use the ranking function [ROW\_NUMBER](http://msdn.microsoft.com/en-us/library/ms186734.aspx) with the ORDER BY clause on column "ftb". So you can delete all the duplicate rows
```
;WITH cte AS
(
SELECT ROW_NUMBER() OVER(PARTITION BY application ORDER BY ftb DESC) AS rn
FROM dbo.YourTable
)
DELETE cte
WHERE rn > 1
```
See demo on [**SQLFiddle**](http://sqlfiddle.com/#!3/bafb7/1) | Remove duplicates from col1 when condition in col2 is met | [
"",
"sql",
"sql-server-2008-r2",
""
] |
I'm setting up Travis-CI for the first time. I install scipy in what I believe is the standard way:
```
language: python
python:
- "2.7"
# command to install dependencies
before_install:
- sudo apt-get -qq update
- sudo apt-get -qq install python-numpy python-scipy python-opencv
- sudo apt-get -qq install libhdf5-serial-dev hdf5-tools
install:
- "pip install numexpr"
- "pip install cython"
- "pip install -r requirements.txt --use-mirrors"
# command to run tests
script: nosetests
```
Everything builds. But when the nosetests begin, I get
```
ImportError: No module named scipy.ndimage
```
**Update:** Here is a more direct demonstration of the problem.
```
$ sudo apt-get install python-numpy python-scipy python-opencv
$ python -c 'import scipy'
Traceback (most recent call last):
File "<string>", line 1, in <module>
ImportError: No module named scipy
The command "python -c 'import scipy'" failed and exited with 1 during install.
```
I tried installing scipy using pip also. I tried installing gfortran first. [Here is one example of a failed build](https://travis-ci.org/danielballan/mr/builds/8291036). Any suggestions?
**Another Update**: Travis has since added official documentation on using conda with Travis. See ostrokach's answer. | I found two ways around this difficulty:
1. As @unutbu suggested, build your own virtual environment and install everything using pip inside that environment. I got the build to pass, but installing scipy from source this way is very slow.
2. Following the approach used by the pandas project in [this .travis.yml file and the shell scripts that it calls](https://github.com/pydata/pandas/blob/master/.travis.yml), force travis to use system-wide site-packages, and install numpy and scipy using apt-get. This is much faster. The key lines are
```
virtualenv:
system_site_packages: true
```
in travis.yml before the `before_install` group, followed by these shell commands
```
SITE_PKG_DIR=$VIRTUAL_ENV/lib/python$TRAVIS_PYTHON_VERSION/site-packages
rm -f $VIRTUAL_ENV/lib/python$TRAVIS_PYTHON_VERSION/no-global-site-packages.txt
```
and then finally
```
apt-get install python-numpy
apt-get install python-scipy
```
which will be found when nosetests tries to import them.
**Update**
I now prefer a conda-based build, which is faster than either of the strategies above. Here is [one example](https://github.com/soft-matter/trackpy/blob/master/.travis.yml) on a project I maintain. | This is covered in the official conda documentation: [Using conda with Travis CI](http://conda.pydata.org/docs/travis.html#using-conda-with-travis-ci).
---
> ## The `.travis.yml` file
>
> The following shows how to modify the `.travis.yml` file to use [Miniconda](http://conda.pydata.org/miniconda.html) for a project that supports Python 2.6, 2.7, 3.3, and 3.4.
>
> NOTE: Please see the Travis CI website for information about the [basic configuration for Travis](http://docs.travis-ci.com/user/languages/python/#Examples).
```
language: python
python:
# We don't actually use the Travis Python, but this keeps it organized.
- "2.6"
- "2.7"
- "3.3"
- "3.4"
install:
- sudo apt-get update
# We do this conditionally because it saves us some downloading if the
# version is the same.
- if [[ "$TRAVIS_PYTHON_VERSION" == "2.7" ]]; then
wget https://repo.continuum.io/miniconda/Miniconda-latest-Linux-x86_64.sh -O miniconda.sh;
else
wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh -O miniconda.sh;
fi
- bash miniconda.sh -b -p $HOME/miniconda
- export PATH="$HOME/miniconda/bin:$PATH"
- hash -r
- conda config --set always_yes yes --set changeps1 no
- conda update -q conda
# Useful for debugging any issues with conda
- conda info -a
# Replace dep1 dep2 ... with your dependencies
- conda create -q -n test-environment python=$TRAVIS_PYTHON_VERSION dep1 dep2 ...
- source activate test-environment
- python setup.py install
script:
# Your test script goes here
``` | scipy ImportError on travis-ci | [
"",
"python",
"scipy",
"travis-ci",
""
] |
I want to separate the code into different classes and put them in different files. However, the classes depend on each other.
**main.py:**
```
from lib import A, B
def main():
a = A()
b = B()
a.hello()
b.hello()
if __name__ == '__main__':
main()
```
**lib/\_\_init\_\_.py**:
```
from a import A
from b import B
```
**lib/a.py:**
```
import lib.B
class A():
def __init__(self):
print "A"
def hello(self):
print "hello A"
b = B()
```
**lib/b.py:**
```
import lib.A
class B():
def __init__(self):
print "B"
def hello(self):
print "hello B"
a = A()
```
**Is it possible to do that in Python?**
**EDIT:**
I get this error message:
```
pydev debugger: starting
Traceback (most recent call last):
File "eclipse-python/plugins/org.python.pydev_2.7.1.2012100913/pysrc/pydevd.py", line 1397, in <module>
debugger.run(setup['file'], None, None)
File "eclipse-python/plugins/org.python.pydev_2.7.1.2012100913/pysrc/pydevd.py", line 1090, in run
pydev_imports.execfile(file, globals, locals) #execute the script
File "main.py", line 2, in <module>
from lib import A, B
File "lib/__init__.py", line 1, in <module>
from a import A
File "lib/a.py", line 1, in <module>
import lib.B
ImportError: No module named B
``` | Instead of importing the modules on top, you could import the other module within the hello function.
```
class B():
def __init__(self):
print "B"
def hello(self):
from lib import A
print "hello B"
a = A()
``` | When you have two classes depending on each other, it usually means that either they really belong in the same module or that you have too tight a coupling, which should be resolved using dependency injection.
Now there are indeed a couple corner cases where importing from within the function is the "least worst" solution but that's still something you should avoid as much as possible. | Simple cross import in python | [
"",
"python",
"import",
"cross-reference",
""
] |
I'm trying to draw an arbitrary quadrilateral over an image using the polylines function in OpenCV. When I do, I get the following error:
> OpenCV Error: Assertion failed (p.checkVector(2, CV\_32S) >= 0) in
> polylines, file
> /tmp/buildd/ros-fuerte-opencv2-2.4.2-1precise-20130312-1306/modules/core/src/drawing.cpp, line 2065
I call the function as like so,
```
cv2.polylines(img, points, 1, (255,255,255))
```
Where points is as numpy array as shown below (The image size is 1280x960):
```
[[910 641]
[206 632]
[696 488]
[458 485]]
```
and img is just a normal image that I'm able to imshow. Currently I'm just drawing lines between these points myself, but I'm looking for a more elegant solution.
How should I correct this error? | The problem in my case was that `numpy.array` created 64-bit integers (`int64`) by default. So I had to explicitly convert it to `int32`:
```
points = np.array([[910, 641], [206, 632], [696, 488], [458, 485]])
# points.dtype => 'int64'
cv2.polylines(img, np.int32([points]), 1, (255,255,255))
```
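A hedged way to see the dtype issue without needing OpenCV at all (assuming a 64-bit build where numpy defaults to `int64`):

```python
import numpy as np

points = np.array([[910, 641], [206, 632], [696, 488], [458, 485]])
print(points.dtype)              # typically int64 on 64-bit Linux/macOS builds
print(np.int32([points]).dtype)  # int32, which is what polylines expects
```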
(Looks like a bug in the cv2 python binding; it should've verified `dtype`) | This function is not well documented and the errors are also not very useful. In any case, [`cv2.polylines`](http://docs.opencv.org/modules/core/doc/drawing_functions.html#polylines) expects a list of points, just change your line to this:
```
import cv2
import numpy as np
img = np.zeros((768, 1024, 3), dtype='uint8')
points = np.array([[910, 641], [206, 632], [696, 488], [458, 485]])
cv2.polylines(img, [points], 1, (255,255,255))
winname = 'example'
cv2.namedWindow(winname)
cv2.imshow(winname, img)
cv2.waitKey()
cv2.destroyWindow(winname)
```
The example above will print the following image (rescaled):
 | Opencv polylines function in python throws exception | [
"",
"python",
"opencv",
"numpy",
"points",
""
] |
Let's say I have a string that looks something like:
```
first_string = "(white cats || 'dogs) && ($1,000 || $500-$900' || 1,000+)"
```
And I replace each word with the text "replace" by doing:
```
new_string = re.sub(r'[\w$\-+,][\w$\-+,\t ]*[\w$\-+,]|[\w$\-+,],', "replace", first_string, flags=re.IGNORECASE)
```
And I get out:
```
new_string = "(replace || replace) && (replace || replace || replace)"
```
This works fine. But I'd like to validate that new\_string has a particular format.
For example, is there a way using a regex, to make sure that new\_string fits the above general format where:
* There are always sets of parens, separated by an `&&`
* Each paren set contains strings separated by `||`
* Where the number of strings in each paren set and the number of paren sets could vary? | You can check your string structure with this pattern:
```
^(?:(?:^|\s*[&|]{2}\s*)\([^|)]+(?:\s*\|\|\s*[^|)]+)*\))*$
```
if `&&` can be inside parenthesis too, you can use:
```
^(?:(?:^|\s*[&|]{2}\s*)\([^&|)]+(?:\s*[&|]{2}\s*[^&|)]+)*\))*$
```
If your replacement pattern is good you don't need to check if the parent and the "child" have the same structure.
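A quick hedged check of the first pattern in Python (assuming `&&` only joins parenthesised groups):

```python
import re

# First pattern from above, anchored end to end.
pattern = r'^(?:(?:^|\s*[&|]{2}\s*)\([^|)]+(?:\s*\|\|\s*[^|)]+)*\))*$'
good = "(replace || replace) && (replace || replace || replace)"
bad = "(replace || replace) (replace || replace)"  # missing && between groups
print(bool(re.match(pattern, good)))  # True
print(bool(re.match(pattern, bad)))   # False
```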
*Notice: if you want to allow empty parentheses, replace all the `+` quantifiers with `*`.* | No regex used:
```
def is_valid(s):
def surrounded_by_parens(s, next_validation):
s = s.strip()
return s.startswith('(') and s.endswith(')') and next_validation(s[1:-1])
def separated_by_bars(s):
return all(x.strip() == 'replace' for x in s.split('||'))
return all(surrounded_by_parens(x, separated_by_bars) for x in s.split('&&'))
assert is_valid("(replace || replace) && (replace || replace || replace)")
assert is_valid("(replace || replace)")
assert not is_valid("(replace replace) && (replace || replace || replace)")
assert not is_valid("(replace || replace) (replace || replace || replace)")
``` | How to Use a Regex to Validate the Format of a String | [
"",
"python",
"regex",
""
] |
I have a table I am trying to clean up so I just need the first address in the table for each `ClientId`
Table `Addresses` has these columns
```
Pk[Id]
[ClientId]
[AddressLine1]
[AddressLine2]
```
Query I use:
```
SELECT *
FROM Addresses
ORDER BY ClientId
```
result =
```
1 1 foo bar
2 1 foo2 bar2
3 1 foo3 bar3
4 1 foo4 bar4
5 2 foo bar2
95 2 foo bar5
97 2 foo bar6
8 3 foo2 bar7
```
wanted result =
```
1 1 foo bar <--is first match for clientid = 1
5 2 foo bar2 <-- is first match for clientid = 2
8 3 foo2 bar7 <-- is first match for clientid = 3
```
This need to work for n clientids
I tried
```
SELECT *
FROM Addresses
GROUP BY ClientId
```
The resulting error is (Column 'Id' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause.)
What am I missing here? | SQL Server requires that when using a GROUP BY you need to either use an aggregate function on the columns in the select list or add them to the GROUP BY.
Your original query could be changed to use the following:
```
select a.id, a.clientid, a.address1, a.address2
from addresses a
inner join
(
select clientid, MIN(id) id
from addresses
group by clientid
) d
on a.clientid = d.clientid
and a.id = d.id;
```
As you can see this uses a subquery that returns the `min(id)` for each `clientId`, this only groups by the `clientid`. You then join this to your table to return only those rows with the min id for each client. | You are getting multiple lines because you're selecting a field you don't want to be grouping by. If you just want the first entry for each clientID you can use an analytic function like ROW\_NUMBER
Try:
```
SELECT *
FROM (SELECT *, ROW_NUMBER() OVER(PARTITION BY clientid ORDER BY ID) as RowRank
FROM Addresses)sub
WHERE RowRank = 1
``` | MS SQL Server 2008 error on GROUP BY | [
"",
"sql",
"sql-server-2008",
"t-sql",
""
] |
In my stored procedure I need to select based on the user input
The query would be like this
```
SELECT A,B FROM MYTABLE WHERE B=array[0] OR B=array[1] OR B=array[2]
```
The number of items in array is unknown to me. The user selection will decide the number of elements in the array.
How can I achieve this? If I could do this, I could avoid calling the same procedure for each element in the array. | Instead of an array, you can create a [User Defined Table type](http://msdn.microsoft.com/en-us/library/bb522526%28v=sql.105%29.aspx)
```
CREATE TYPE dbo.type_name AS TABLE
(
column1 INT NOT NULL
)
```
Pass a single-column `DataTable` from the page as the parameter (its values are the same as those in the array).
And in procedure you can use it as follows
```
CREATE PROCEDURE proc_name
@array dbo.type_name READONLY
AS
SELECT A,B FROM MYTABLE WHERE B IN (select column1 from @array)
``` | You could have a look at [Use Table-Valued Parameters (Database Engine)](http://msdn.microsoft.com/en-us/library/bb510489.aspx)
> Table-valued parameters are declared by using user-defined table
> types. You can use table-valued parameters to send multiple rows of
> data to a Transact-SQL statement or a routine, such as a stored
> procedure or function, without creating a temporary table or many
> parameters.
Further to that, you might want to pass it as an XML parameter, and then convert that to a table in the SP. [Using XML in SQL Server](http://msdn.microsoft.com/en-us/library/ms190936%28v=sql.90%29.aspx)
You could even pass in a delimited string, and split that into a table [Split strings the right way – or the next best way](http://www.sqlperformance.com/2012/07/t-sql-queries/split-strings) | How can I pass and use array in a stored procedure? | [
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
"ado.net",
""
] |
I'm certain this has been asked a million times, but it's difficult to search for something when you don't know the correct terminology :(
I'm attempting (again... I've *never* understood OO, since I got taught it very badly 8 years ago, and avoid it as much as possible, to the horror of every other programmer I know - my mind doesn't seem wired to get it, at all) to teach myself OO and PyQt simultaneously.
I don't even know if this is logically possible, but I've got a PyQt *action*, which is referred to by 2 different *things*; one of the arguments of the action is an Icon. When the action is called by one of those *things*, I'd like to change the icon; code snippet:
```
self.menu()
self.toolbar()
self.actions()
def menu(self):
self.fileMenu = QtGui.QMenu("&File", self)
self.fileMenu.addAction(self.exitAct)
def toolbar(self):
self.toolbar = self.addToolBar("Exit")
self.toolbar.addAction(self.exitAct)
def actions(self):
self.exitIcon = QtGui.QIcon('exit.png')
self.exitAct = QtGui.QAction(self.exitIcon, "&Exit", self, triggered=self.close)
```
In the toolbar, I'd like a different icon (`exitdoor.png`). The documentation for `QtIcon` has an `addFile` method, so I tried:
```
self.toolbar.addAction(self.exitAct)
self.exitIcon.addFile("exitdoor.png")
```
but this didn't work, with the error ("'QToolBar' object has no attribute 'addFile'"), and
```
self.exitAct.addFile("exitdoor.png")
```
with the error 'QAction' object has no attribute 'addFile' (I do understand why this doesn't work and what the error means).
What's my stupid mistake?! (Apart from the mistake of putting myself through the pain of continuing to try and learn OO...) | After talking to some much cleverer people than me, the short answer is that what I was trying to do is seemingly impossible. Once an `action` has been defined, it is not possible to change any attribute of it.
At the moment, my knowledge of python isn't great enough to understand exactly why this is (or how I could have realised this from the docs), but it seems that when an action is defined, it is effectively a blackbox stored in memory, and other objects can only use, and not modify, the blackbox.
Further comments still welcome! | > 'QToolBar' object has no attribute 'addFile'
Hmm, since you called addFile on self.exitIcon, it looks like you have the wrong kind of object in the self.exitIcon variable. It seems you want it to be a QtGui.QIcon type, but instead it's a QToolBar type.
You should look at where you are making assignments to self.exitIcon.
In this case, trying to learn object-oriented programming through Python is not the easiest way. Python is a fine object-oriented language, but it does not catch your errors as immediately as other languages. When you get an error like the above, the mistake was not in that line of code, but rather in a line of code that ran a while ago. Other languages would catch the mistake even before you run your program, and point you directly at the line you need to fix. It might be worthwhile for you to practice a little basic Java to get trained in OOP by a strict teacher before you go off into the wilds of Python. | Very basic OO: changing attribute of a method | [
"",
"python",
"pyqt",
""
] |
If I have the following code:
```
conn = sqlite3.connect('abc.db')
```
.. it will create DB fle called `abc.db` if it doesn't already exist. What do I do if I don't want to create a new file and only want the connection to succeed if the file already exists? | You can check if the file already exists or not:
```
import os
if os.path.exists('abc.db'):
conn = sqlite3.connect('abc.db')
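# A hedged alternative (assumption: Python 3.4+ with SQLite URI
# support enabled): mode=rw opens the database read-write but
# refuses to create a missing file, so the connect() call itself
# fails instead of silently creating abc.db.
import sqlite3
try:
    conn = sqlite3.connect('file:abc.db?mode=rw', uri=True)
except sqlite3.OperationalError:
    print('abc.db does not exist')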
``` | Use sqlite3\_open\_v2() with either SQLITE\_OPEN\_READONLY or SQLITE\_OPEN\_READWRITE.
As described here:
<http://www.sqlite.org/c3ref/open.html> | Don't want to create a new database if it doesn't already exist | [
"",
"python",
"sqlite",
""
] |
I feel like I'm missing something obvious, but there it is... I would like to go from:
```
lst = [[0, 1, 3, 7, 8, 11, 12], [8, 0, 1, 2, 3, 14], 2]
```
to:
```
output = [0, 1, 3, 7, 8, 11, 12, 8, 0, 1, 2, 3, 14, 2]
```
I can do this with a for loop such as:
```
output = []
for l in lst:
if hasattr(l, '__iter__'):
output.extend(l)
else:
output.append(l)
```
Maybe the for-loop is fine, but it feels like there should be a more elegant way to do this... Trying to do this with numpy seems even more convoluted because ragged arrays aren't easily handled... so you can't (for example):
```
output = np.asanyarray(lst).flatten().tolist()
```
Thanks in advance.
**Update:**
Here's my comparison between the two methods provided by @T.J and @Ashwini - thanks to both!
```
In [5]: %paste
from itertools import chain
from collections import Iterable
lis = [[0, 1, 3, 7, 8, 11, 12], [8, 0, 1, 2, 3, 14], 2]
def solve(lis):
for x in lis:
if isinstance(x, Iterable) and not isinstance(x, basestring):
yield x
else:
yield [x]
%timeit list(chain.from_iterable(solve(lis)))
%timeit [a for x in lis for a in (x if isinstance(x, Iterable) and not isinstance(x,basestring) else [x])]
## -- End pasted text --
100000 loops, best of 3: 10.1 us per loop
100000 loops, best of 3: 8.12 us per loop
```
**Update2:**
```
...
lis = lis *10**5
%timeit list(chain.from_iterable(solve(lis)))
%timeit [a for x in lis for a in (x if isinstance(x, Iterable) and not isinstance(x,basestring) else [x])]
## -- End pasted text --
1 loops, best of 3: 699 ms per loop
1 loops, best of 3: 698 ms per loop
``` | Here is a pretty straightforward approach that uses a list comprehension:
```
>>> data = [[0, 1, 3, 7, 8, 11, 12], [8, 0, 1, 2, 3, 14], 2]
>>> [a for x in data for a in (x if isinstance(x, list) else [x])]
[0, 1, 3, 7, 8, 11, 12, 8, 0, 1, 2, 3, 14, 2]
```
Here are timing comparisons; it looks like my version is slightly faster (note that I modified my code to use `collections.Iterable` as well to make sure the comparison is fair):
```
In [9]: %timeit list(chain.from_iterable(solve(data)))
100000 loops, best of 3: 9.22 us per loop
In [10]: %timeit [a for x in data for a in (x if isinstance(x, Iterable) else [x])]
100000 loops, best of 3: 6.45 us per loop
``` | You can use `itertools.chain` like this:
```
>>> from itertools import chain
>>> from collections import Iterable
>>> lis = [[0, 1, 3, 7, 8, 11, 12], [8, 0, 1, 2, 3, 14], 2]
def solve(lis):
for x in lis:
if isinstance(x, Iterable) and not isinstance(x, basestring):
yield x
else:
yield [x]
...
>>> list(chain.from_iterable(solve(lis)))
[0, 1, 3, 7, 8, 11, 12, 8, 0, 1, 2, 3, 14, 2]
```
Works fine for strings too:
```
>>> lis = [[0, 1, 3, 7, 8, 11, 12], [8, 0, 1, 2, 3, 14], "234"]
>>> list(chain.from_iterable(solve(lis)))
[0, 1, 3, 7, 8, 11, 12, 8, 0, 1, 2, 3, 14, '234']
```
Timing comparisons:
```
>>> lis = lis *(10**4)
#modified version of FJ's answer that works for strings as well
>>> %timeit [a for x in lis for a in (x if isinstance(x, Iterable) and not isinstance(x,basestring) else [x])]
10 loops, best of 3: 110 ms per loop
>>> %timeit list(chain.from_iterable(solve(lis)))
1 loops, best of 3: 98.3 ms per loop
``` | Concatenation of inner lists or ints | [
"",
"python",
"numpy",
"concatenation",
"flatten",
""
] |
Here is the table structure:
```
ID
FIELD_A
FIELD_B
DATE_FIELD
VALUE_FIELD
```
The ID field is just a number that increments with each record. We get 2 "sets" of records in a day where FIELD\_A, FIELD\_B, DATE\_FIELD are exactly the same but the VALUE\_FIELD can be different. We get a morning set and an evening set. You can tell which is which because of the ID value being lower for that set of records for the morning set (because they were inserted first) and the ID being higher for the evening records (because they were inserted after) in that set.
The question is, if I want to have a view that looks at the highest ID value for FIELD\_A, FIELD\_B, DATE\_FIELD for every record how would I do that? Basically I don't care about the morning sets in my view for anything and only want to see the evening. However, on the current day if we are still in the morning the "highest" ID value will be the only ID value at that time and so a max(id) will give me what we have for current day in the morning at least. | You would use the analytic function `row_number()`:
```
create view vw_afternoon as
select id, field_a, field_b, date_field, value_field
from (select t.*,
row_number() over (partition by field_a, field_b, date_field
order by id desc
) as seqnum
from t
) t
where seqnum = 1;
```
The function `row_number()` assigns a sequential value to rows within a group (defined by the `partition` clause). The one with the highest id gets a value of `1`, followed by the rest (as defined by the `order by` clause). | If anyone needs an answer that works in SQL environments that don't support partition\_by, I think you can also use:
```
create view my_view as SELECT * FROM my_table WHERE id IN
(SELECT max(id) FROM my_table GROUP BY field_a, field_b, date_field)
``` | oracle sql 2 records per day per unique fields, how to get the latest | [
"",
"sql",
"oracle",
""
] |
I am trying to use the values of a 1-dim array to slice/return the rows and columns from a 2-dim array in Numpy. For example, say I have the following one dim array:
[1,3,5]
and the following 2 dim array:
```
array([[1, 0, 0, 0, 0, 0],
[0, 4, 0, 0, 0, 1],
[0, 0, 3, 0, 0, 0],
[0, 1, 0, 7, 0, 10],
[0, 0, 0, 0, 8, 0],
[0, 2, 0, 0, 0, 9]])
```
How do I return the following:
```
array([[4, 0, 1],
[1, 7, 10],
[2, 0, 9]])
```
I would also like to be able to produce a 6x6 mask using the same example. So that I would get this:
```
array([[True, True, True, True, True, True],
[True, False, True, False, True, False],
[True, True, True, True, True, True],
[True, False, True, False, True, False],
[True, True, True, True, True, True],
[True, False, True, False, True, False]],)
```
I have tried many different things and nothing seems to get exactly what I need. I know I could do it by writing a couple of loops, but I figured there must be an easier way. I have also done a number of searches and still no luck. Thanks in advance! | Is this what you want?
```
>>> a = array([[1, 0, 0, 0, 0, 0],
... [0, 4, 0, 0, 0, 1],
... [0, 0, 3, 0, 0, 0],
... [0, 1, 0, 7, 0, 10],
... [0, 0, 0, 0, 8, 0],
... [0, 2, 0, 0, 0, 9]])
>>>
>>> a[1::2,1::2]
array([[ 4, 0, 1],
[ 1, 7, 10],
[ 2, 0, 9]])
```
Since your stride access is so regular, you can accomplish this with basic slicing. As for the mask:
```
>>> a = np.ones(a.shape,dtype=bool)
>>> a[1::2,1::2] = False
>>> a
array([[ True, True, True, True, True, True],
[ True, False, True, False, True, False],
[ True, True, True, True, True, True],
[ True, False, True, False, True, False],
[ True, True, True, True, True, True],
[ True, False, True, False, True, False]], dtype=bool)
```
Of course, this answer is assuming you want every other element along the axis (starting with index 1). You could modify the slice to stop when the index is 6: `a[1:6:2,1:6:2]` or to take every 3rd element, `a[1::3,1::3]`, but if you need random access into the array, that becomes a bit harder...
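As an aside (my addition, not part of the original answer): NumPy's `np.ix_` is built for exactly this random-access case. It forms the rows-by-columns cross-product index in one step:

```python
import numpy as np

a = np.array([[1, 0, 0, 0, 0, 0],
              [0, 4, 0, 0, 0, 1],
              [0, 0, 3, 0, 0, 0],
              [0, 1, 0, 7, 0, 10],
              [0, 0, 0, 0, 8, 0],
              [0, 2, 0, 0, 0, 9]])
b = [1, 3, 5]

# np.ix_ turns the two 1-d index lists into an open mesh that
# selects the full rows-by-columns cross product in a single step
sub = a[np.ix_(b, b)]
print(sub)
# [[ 4  0  1]
#  [ 1  7 10]
#  [ 2  0  9]]
```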
You can do something like this:
```
>>> b = [1,3,5]
>>> a[:,b][b]
array([[ 4, 0, 1],
[ 1, 7, 10],
[ 2, 0, 9]])
>>> a[b][:,b] #I think the same thing, but depending on data layout, one may be faster than the other
array([[ 4, 0, 1],
[ 1, 7, 10],
[ 2, 0, 9]])
```
At this point though, you're probably making a copy of the array rather than just getting a view. This is less efficient, and I don't think you'll be able to use it to construct the boolean mask as we did previously. | Maybe the routine np.meshgrid() could be useful:
```
a = array([[1, 0, 0, 0, 0, 0],
[0, 4, 0, 0, 0, 1],
[0, 0, 3, 0, 0, 0],
[0, 1, 0, 7, 0, 10],
[0, 0, 0, 0, 8, 0],
[0, 2, 0, 0, 0, 9]])
b = np.array([1, 3, 5])
B = np.meshgrid(b,b)
print a[B].T
Out: [[ 4 0 1]
[ 1 7 10]
[ 2 0 9]]
```
I think that is the desired result. | Return rows and columns from a 2d array using values from a 1d array in Numpy | [
"",
"python",
"arrays",
"numpy",
""
] |
If I have a dictionary {key : [a, b, c, c, d]} and I want to print only the unique values corresponding to each key (in this case, (a, b, d)), what is the most efficient way to do this apart from just looping through each element and keeping a count of it? | If the elements are sorted as in your example, you could use [`itertools.groupby()`](http://docs.python.org/2/library/itertools.html#itertools.groupby):
```
from itertools import groupby
print " ".join([k for k, group in groupby(d['key']) if len(list(group)) == 1])
# -> a b d
``` | One option, use [collections.Counter](http://docs.python.org/2/library/collections.html#collections.Counter)
```
from collections import Counter
d = {'k': ['a', 'b', 'c', 'c', 'd']}
c = Counter(d['k'])
print [k for k in c if c[k] == 1]
['a', 'b', 'd']
``` | how to print elements which occur only once in a list, without counting | [
"",
"python",
""
] |
Alright here the question
**Table 1**:
```
Id1, Id2
```
**Table 2**
```
Id, Username
```
Now, the `Table 1` columns `Id1` and `Id2` are foreign keys referencing the unique key in `Table 2`
I want to select `Table 1` all values as this
```
Username1, Username2
```
So how can I write this query ?
Full example
**Table 1**
```
1,3
3,5
```
**Table 2**
```
1,Furkan
3,Obama
5,USA
```
Result
```
Furkan, Obama
Obama, USA
```
Thank you | ```
SELECT table2_1.Username AS Username1, table2_2.Username AS Username2
FROM table1
JOIN table2 AS table2_1 ON table1.Id1 = table2_1.Id
JOIN table2 AS table2_2 ON table1.Id2 = table2_2.Id
``` | Try this
here is <http://www.sqlfiddle.com/#!2/e10b7/2/0>
```
CREATE TABLE table1
(
Id1 int primary key,
Id2 int
);
INSERT INTO table1
VALUES (1, 3),
(3, 5);
CREATE TABLE table2
(
Id int primary key,
Username varchar(255)
);
INSERT INTO table2
VALUES (1, 'Furkan'),
(3, 'Obama'),
(5, 'USA');
```
Your SQL query
```
select
(select Username
from table2 as t2
where t2.Id = t1.Id1) as coloum1 ,
(select Username
from table2 as t2
where t2.Id = t1.Id2) as coloum2
from table1 as t1
``` | How to select 2 variables from another table SQL server query | [
"",
"sql",
"sql-server",
"t-sql",
"join",
"sql-server-2012",
""
] |
I'm opening files in multiple functions, and it seems a bit messy/superfluous to keep 'gracefully' handling potential IOErrors:
```
try:
fileHandle = open(file, 'r')
except:
print "Error: failed to open file %s" % (file)
sys.exit(2)
```
In what contexts is it acceptable to just:
```
fileHandle = open(file, 'r')
```
and expect the user to eyeball traceback messages in the case of exception? | This is the principle of exceptions as implemented in Python and other languages. The exception handling in not necessary *local* to the function that raise the exception.
If some local processing is meaningful, do it. If you can't do anything useful, just let the exception go up in the call stack until a proper exception handler is found.
<http://docs.python.org/2/tutorial/errors.html#handling-exceptions>
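To make the "handle it higher up" idea concrete, here is a minimal sketch (function and file names are mine, not from the answer): the low-level function opens the file with no try/except, and a single top-level handler deals with the failure:

```python
import sys

def load_config(path):
    # no try/except here: a missing file raises IOError (an alias of
    # OSError in Python 3), which simply propagates up the call stack
    with open(path) as f:
        return f.read()

def main():
    try:
        return load_config('settings.conf')
    except IOError as exc:
        # one handler at the top level, instead of one per open()
        print("Error: %s" % exc)
        sys.exit(2)
```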
---
If you *catch* an exception just in order to log it, you might want to re-raise it:
```
try:
fileHandle = open(file, 'r')
except IOError:
print "Error: failed to open file %s" % (file, )
raise
```
<http://docs.python.org/2/tutorial/errors.html#raising-exceptions> | Use the 'with' keyword.
```
with open(file, 'r') as fileHandle:
do_whatever()
```
This code is more-or-less equivalent to
```
try:
fileHandle = open(file, 'r')
except IOError:
pass
else:
do_whatever()
finally:
fileHandle.close()
```
Basically, it makes sure that you open and close your files correctly, and catches exceptions. | Python: Necessary to catch open() exceptions? | [
"",
"python",
"coding-style",
""
] |
I have these different lines with values in a text file
```
sample1:1
sample2:1
sample3:0
sample4:15
sample5:500
```
and I want the number after the ":" to be updated sometimes
I know I can split the name by ":" and get a list with 2 values.
```
f = open("test.txt","r")
lines = f.readlines()
lineSplit = lines[0].split(":",1)
lineSplit[1] #this is the value I want to change
```
im not quite sure how to update the lineSplit[1] value with the write functions | You can use the `fileinput` module, if you're trying to modify the same file:
```
>>> strs = "sample4:15"
```
Take advantage of sequence unpacking to store the results in variables after splitting.
```
>>> sample, value = strs.split(':')
>>> sample
'sample4'
>>> value
'15'
```
Code:
```
import fileinput
for line in fileinput.input(filename, inplace = True):
sample, value = line.split(':')
value = int(value) #convert value to int for calculation purpose
if some_condition:
# do some calculations on sample and value
# modify sample, value if required
#now write the data (either modified or still the old one) back to the file
print "{}:{}".format(sample, value)
``` | Strings are immutable, meaning, you can't assign new values inside them by index.
But you can split up the whole file into a list of lines, and change individual lines (strings) entirely. This is what you're doing in lineSplit[1] = A\_NEW\_INTEGER
```
with open(filename, 'r') as f:
lines = f.read().splitlines()
for i, line in enumerate(lines):
if condition:
lineSplit = line.split(':')
lineSplit[1] = new_integer
lines[i] = ':'.join(lineSplit)
with open(filename, 'w') as f:
f.write('\n'.join(lines))
``` | Replace character in line inside a file | [
"",
"python",
"file",
""
] |
Every time I enter 4, 6 or 12 it doesn't accept it. Why? The code looks fine to me. Please tell me how to correct or what to change.
```
import random
def roll_the_dice():
print("Roll The Dice")
print()
repeat = True
while repeat:
number_of_sides = input("Please select a dice with 4, 6 or 12 sides: ")
if (number_of_sides in [4,6,12] and len(number_of_sides) == 0 and
number_of_sides == int):
user_score = random.randint(1,number_of_sides)
print("{0} sided dice thrown, score {1}".format(
number_of_sides,user_score))
roll_again = input("Do you want to roll the dice again? ")
roll_again.lower()
if roll_again == "no":
print("Have a nice day")
repeat = False
elif len(roll_again) == 0:
print("Error please type 'yes' or 'no'")
roll_again = input("Do you want to roll the dice again? ")
else:
print("You have entered an incorrect value, please try again")
number_of_sides = input("Please select a dice with 4, 6 or 12 sides: ")
``` | Haidro gave you the reason, but here is a different way to approach your problem:
```
import random
def get_dice_size():
dice_size = input('Enter the number of sides on the dice: ')
while dice_size not in ['4','6','12']:
print('Sorry, please enter one of 4, 6 or 12')
dice_size = input('Enter the number of sides on the dice: ')
return int(dice_size)
def main():
dice_size = get_dice_size()
repeat = True
while repeat:
print('Rolling the dice...')
user_score = random.randint(1,dice_size)
print("{0} sided dice thrown, score {1}".format(dice_size,user_score))
roll_again = input("Do you want to roll the dice again? ")
if roll_again.lower() == 'yes':
dice_size = get_dice_size()
else:
repeat = False
print('Thank you for playing!')
``` | In Python 3, when using `input()`, it returns a *string*. Thus, you would have something like `"4"`. And `"4" is not 4`.
So in your script, specifically at the `if number_of_sides in [4,6,12]`, it will always be `False`, because you are really saying `if "4" in [4,6,12]` (I'm just doing 4 as an example).
Convert the string to an integer:
```
>>> int("4")
4
```
---
It also looks like you are trying to determine if an input was given. `len(...) == 0` is not needed. You can just say `if number_of_sides`. Because an empty string is `False`, and if one was entered, then the if-statement will not execute.
---
Also, `number_of_sides == int` is not the way to check if an object is an integer. Use `isinstance()`:
```
>>> isinstance("4", int)
False
>>> isinstance(4, int)
True
```
---
Some other tiny things:
* `.lower()` does not modify the string in place, as strings are immutable in Python. You might just want to attach `.lower()` onto the end of the `input()`.
* You might also want to use a `while` loop for your second input. Observe:
```
roll_again = ''
while True:
roll_again = input('Do you want to roll the dice again? ')
if roll_again in ('yes', 'no'):
break
print("You have entered an incorrect value, please try again")
if roll_again == "no":
print("Have a nice day")
repeat = False
else:
print("Let's go again!")
``` | What is wrong with this Python code? Refuses input | [
"",
"python",
""
] |
I have the following query which will return the number of users in table transactions who have earned between $100 and $200
```
SELECT COUNT(users.id)
FROM transactions
LEFT JOIN users ON users.id = transactions.user_id
WHERE transactions.amount > 100 AND transactions.amount < 200
```
The above query returns the correct result below:
```
COUNT(users.id)
559
```
I would like to extend it so that the query can return data in the following format:
```
COUNT(users.id) : amount
1678 : 0-100
559 : 100-200
13 : 200-300
```
How can I do this? | You can use a CASE expression inside of your aggregate function which will get the result in columns:
```
SELECT
COUNT(case when amount >= 0 and amount <= 100 then users.id end) Amt0_100,
COUNT(case when amount >= 101 and amount <= 200 then users.id end) Amt101_200,
COUNT(case when amount >= 201 and amount <= 300 then users.id end) Amt201_300
FROM transactions
LEFT JOIN users
ON users.id = transactions.user_id;
```
See [SQL Fiddle with Demo](http://www.sqlfiddle.com/#!2/22afb/3)
You will notice that I altered the ranges to 0-100, 101-200, 201-300; otherwise you would have user ids counted twice at the 100 and 200 boundary values.
If you want the values in rows, then you can use:
```
select count(u.id),
CASE
WHEN amount >=0 and amount <=100 THEN '0-100'
WHEN amount >=101 and amount <=200 THEN '101-200'
WHEN amount >=201 and amount <=300 THEN '201-300'
END Amount
from transactions t
left join users u
on u.id = t.user_id
group by
CASE
WHEN amount >=0 and amount <=100 THEN '0-100'
WHEN amount >=101 and amount <=200 THEN '101-200'
WHEN amount >=201 and amount <=300 THEN '201-300'
END
```
See [SQL Fiddle with Demo](http://www.sqlfiddle.com/#!2/22afb/4)
But if you have many ranges that you need to calculate the counts on, then you might want to consider creating a table with the ranges, similar to the following:
```
create table report_range
(
start_range int,
end_range int
);
insert into report_range values
(0, 100),
(101, 200),
(201, 300);
```
Then you can use this table to join to your current tables and group by the range values:
```
select count(u.id) Total, concat(start_range, '-', end_range) amount
from transactions t
left join users u
on u.id = t.user_id
left join report_range r
on t.amount >= r.start_range
and t.amount<= r.end_range
group by concat(start_range, '-', end_range);
```
See [SQL Fiddle with Demo](http://www.sqlfiddle.com/#!2/b8a22/4).
If you don't want to create a new table with the ranges, then you can always use a derived table to get the same result:
```
select count(u.id) Total, concat(start_range, '-', end_range) amount
from transactions t
left join users u
on u.id = t.user_id
left join
(
select 0 start_range, 100 end_range union all
select 101 start_range, 200 end_range union all
select 201 start_range, 300 end_range
) r
on t.amount >= r.start_range
and t.amount<= r.end_range
group by concat(start_range, '-', end_range);
```
See [SQL Fiddle with Demo](http://www.sqlfiddle.com/#!2/22afb/1) | One way to do this would be to use a case/when statement in your group by.
```
SELECT
-- NB this must match your group by statement exactly
-- otherwise you will get an error
CASE
WHEN amount <= 100
THEN '0-100'
WHEN amount <= 200
THEN '100-200'
ELSE '201+'
END Amount,
COUNT(*)
FROM
transactions
GROUP BY
CASE
WHEN amount <= 100
THEN '0-100'
WHEN amount <= 200
THEN '100-200'
ELSE '201+'
END
```
If you plan on using the grouping elsewhere, it probably makes sense to define it as a scalar function (it will also look cleaner)
e.g.
```
SELECT
AmountGrouping(amount),
COUNT(*)
FROM
transactions
GROUP BY
AmountGrouping(amount)
```
If you want to be fully generic:
```
SELECT
concat(((amount DIV 100) * 100),'-',(((amount DIV 100) + 1) * 100)) AmountGroup,
COUNT(*)
FROM
transactions
GROUP BY
AmountGroup
```
[Sql Fiddle](http://www.sqlfiddle.com/#!2/22afb/9) | SQL Group By Number Of Users Within Range | [
"",
"mysql",
"sql",
""
] |
I want to have a map from objects of a class called MyObject to integers. I read some stuff [here](https://stackoverflow.com/questions/4901815/object-as-a-dictionary-key), but didn't understand it, and that doesn't seem to be quite the thing I want; a hash function can only be used for collisions. What I want to do is retrieve the value (an integer) by giving the object as an argument. Basically, what I am looking for is a function (in mathematical terms) from MyObject to integers.
So suppose this is the definition of my class:
```
class MyObject:
def __init__(self, name):
self.name = name
self.marked=False
```
So for example
```
a=MyObject("object 1")
b=MyObject("object 2")
```
Now I want some mapping f with which I could assign 25 to a and 36 to b.
And be able to get:
```
f(a)=25
f(b)=36
``` | I don't completely understand your question. My interpretation is that you want to use objects to index some integers. If that is your intent, you can use a `dict`.
```
class MyClass:
# your code here
c1 = MyClass()
c2 = MyClass()
D = dict({c1:1,c2:2})
D[c1] # will return 1
D[c2] # will return 2
```
``` | You seem to be talking about the id() function. From the [manual](http://docs.python.org/2/library/functions.html#id): Return the “identity” of an object. This is an integer (or long integer) which is guaranteed to be unique and constant for this object during its lifetime. Two objects with non-overlapping lifetimes may have the same id() value. | How to map objects to integers in python? | [
"",
"python",
"oop",
"hash",
"dictionary",
""
] |
---
## Situation:
I am tentatively considering a situation to use Django to
1. serve HTML (by Django's template)
2. serve all the static files like CSS, JS from the Django project
and my intent to use Django stops here. After the JavaScript files are downloaded to the client side, they communicate with a Flask backend through a RESTful API (the Ajax way).
---
## Why two frameworks? And why in this way?
The frontend guy of this project I am working with knows Django very well, and I think I mainly want to use his CSS / HTML template / jquery skill.
I want to have an independent API server and I feel Flask is an ideal option for my need (from building API service perspective).
I guess people would suggest "why not ask the Django guy to use Jinja2 for templating?" (in that way, we can do away with Django) I guess my current answer would be: I don't want him to invest too much time (to learn)
I guess people would suggest "why not use Django to serve the Restful API call?" (in that way, we can do away with Flask) I guess my current answer would be: I (as the person implementing API logic) like Flask.
---
## My question
Short one: is this doable? or does this sound a crazy idea?
Long one: Can you kindly give some guidance?
Thanks, | If I were you I would take the Django templates from the designer and convert them to Jinja2, then build the application 100% in Flask. But since you asked...
> is this doable? or does this sound a crazy idea?
Yes to both :)
> Can you kindly give some guidance?
Here is a simple way:
You write the two applications, one in Flask and one in Django. Let's assume you solve all the problems you will have when trying to share database or other resources and now you have the two applications running, each with its own web server and each listening for requests on a different port.
You now put a proxy web server as your front web server to the outside world and proxy the requests that come from clients to one or the other application depending on the URL. You can make all the URLs for the Flask application have the format `http://hostname/api/...`, and then key off of the `api` in the URL to separate the requests in the proxy server and give them to the proper application.
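A toy, stdlib-only WSGI sketch of that URL split (my illustration; in production this would be the proxy server's job, with the stand-in apps replaced by the real Django and Flask WSGI callables):

```python
def make_dispatcher(pages_app, api_app, api_prefix="/api"):
    """Route paths under api_prefix to the API app, all others to the pages app."""
    def dispatcher(environ, start_response):
        if environ.get("PATH_INFO", "").startswith(api_prefix):
            return api_app(environ, start_response)
        return pages_app(environ, start_response)
    return dispatcher

# stand-ins for the two real applications
def pages_app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"pages"]

def api_app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"api"]

app = make_dispatcher(pages_app, api_app)
```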
Because from the outside all requests go to the same hostname and port (that of the proxy server) you will not have any trouble with cross-site scripting. | I am a little late to the party but [Application Dispatching](http://flask.pocoo.org/docs/latest/patterns/appdispatch/) should help with this. According to the intro in the documentation this is how it goes:
> Application dispatching is the process of combining multiple Flask
> applications on the WSGI level. You can not only combine Flask
> applications into something larger but any WSGI application. This
> would even allow you to run a Django and a Flask application in the
> same interpreter side by side if you want. | Using Django along with Flask | [
"",
"python",
"flask",
""
] |
I wish to use the Python `all()` function to help me compute something, but this something could take substantially longer if the `all()` does not evaluate as soon as it hits a `False`. I'm thinking it probably is short-circuit evaluated, but I just wanted to make sure. Also, is there a way to tell in Python how the function gets evaluated?
---
**Editor's note**: Because `any` and `all` are **functions**, their arguments must be evaluated before they are called. That often creates the impression of no short-circuiting - but they do still short-circuit.
To make sure that the short-circuiting can be effective, pass a *generator expression*, or other lazily evaluated expression, rather than a sequence. See [Lazy function evaluation in any() / all()](https://stackoverflow.com/questions/64090762) for details. Similarly, to force evaluation up-front, build a list or tuple explicitly; see [How to prevent short-circuit evaluation?](https://stackoverflow.com/questions/12281469) . | Yes, it short-circuits:
```
>>> def test():
... yield True
... print('one')
... yield False
... print('two')
... yield True
... print('three')
...
>>> all(test())
one
False
```
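Note that the laziness of the argument matters: a generator expression lets `all()` stop at the first falsy element, while a list comprehension is fully evaluated before `all()` even runs. A quick sketch:

```python
def check(x, seen):
    seen.append(x)            # record which elements actually get evaluated
    return x < 2

seen_gen = []
all(check(x, seen_gen) for x in [0, 1, 5, 7])    # generator: stops after the first False
seen_list = []
all([check(x, seen_list) for x in [0, 1, 5, 7]]) # list: fully built before all() runs
print(seen_gen, seen_list)   # [0, 1, 5] [0, 1, 5, 7]
```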
From the [docs](http://docs.python.org/2/library/functions.html#all):
> Return `True` if all elements of the *iterable* are true (or if the iterable is empty). Equivalent to:
>
> ```
> def all(iterable):
> for element in iterable:
> if not element:
> return False
> return True
> ```
So when it `return`s False, then the function immediately breaks. | Yes, `all` does use short-circuit evaluation. For example:
```
all(1.0/x < 0.5 for x in [4, 8, 1, 0])
=> False
```
The above stops when `x` reaches `1` in the list, when the condition becomes false. If `all` weren't short-circuiting, we'd get a division by zero when `x` reached `0`. | Does Python's `all` function use short circuit evaluation? | [
"",
"python",
"evaluation",
"short-circuiting",
""
] |
I've been asked to write a query to get all the automatic entries that were placed 1 day before the current day. However, the problem is that the date is in this format and I cannot alter the database.
E.g. 20130329.134405990
I'm not sure how I would do this. I was thinking I'd trim it to the first 8 digits and then subtract 1 day from the current date. I'm not sure how the code would look though; can anyone make any suggestions for my existing code? That would be greatly appreciated.
Existing Query:
select \* from JnlDB
where UserCode = 'Automation'; | You can use this to get one day before your current date (SQL SERVER):
```
SELECT DATEADD(day,-1,GETDATE())
```
I think it would be:
```
SELECT *
FROM Table
WHERE LEFT(CAST(datefield AS VARCHAR(25)),8) = CONVERT(VARCHAR(8), DATEADD(day,-1,GETDATE()), 112)
```
Update: the original version wasn't comparing against the previous day as you requested. | Just off the top of my head:
You could run the following query:
```
SELECT * FROM JnlDB WHERE UserCode = 'Automation' AND
SUBSTR(ENTEREDDATE,1,8)=TO_CHAR(SYSDATE-1,'YYYYMMDD');
```
This will work on Oracle.
In SQL Server though:
```
SELECT * FROM JnlDB WHERE UserCode = 'Automation' AND
CONVERT(VARCHAR,LEFT(ENTEREDDATE,8))=CONVERT(VARCHAR(8),GETDATE()-1,112)
```
I'm sure there's a better, more performant way of doing this in SQL Server but I've been away from it for long. | Query for Trimming the date in SQL | [
"",
"sql",
"date",
"trim",
""
] |
In a table that contains the Employee ID of every employee, the login time and the logout time for each employee, I need to extract the minimum from login time and maximum from logout time for each day. The problem is that every day an employee can login and logout multiple times. So, we have sample data that looks something like this,
```
NAME EMPID LOGIN TIME LOGOUT TIME
user1 37 16-JAN-12 03.07.37 16-JAN-12 03.07.44
user5 21 16-JAN-12 02.00.36 16-JAN-12 04.45.34
user3 12 16-JAN-12 05.35.35 16-JAN-12 06.39.57
user3 40 16-JAN-12 02.54.13 16-JAN-12 07.12.16
user4 33 16-JAN-12 07.29.43 16-JAN-12 07.59.42
user1 40 16-JAN-12 07.12.39 16-JAN-12 07.59.50
user3 30 16-JAN-12 11.30.50 16-JAN-12 08.02.42
user990 31 17-JAN-12 11.46.12 17-JAN-12 01.46.13
user29 23 17-JAN-12 10.39.18 17-JAN-12 05.00.02
user20 21 17-JAN-12 04.59.37 17-JAN-12 05.00.17
user990 40 17-JAN-12 10.55.48 17-JAN-12 05.00.50
user4 23 17-JAN-12 05.00.11 17-JAN-12 05.01.08
user4 21 17-JAN-12 04.59.37 17-JAN-12 05.01.21
```
I tried this, but it works for only one day:
```
SELECT username, MIN(login_time), MAX(logout_time)
FROM Table_Name
WHERE trunc(login_time) = '19-JAN-12'
GROUP BY username;
```
This gives me the following result for the date I've entered. This is what I want for every date,
```
NAME
user1 19-JAN-12 11.00.26 19-JAN-12 08.00.53
user3 19-JAN-12 11.05.53 19-JAN-12 11.36.02
user29 19-JAN-12 09.49.32 19-JAN-12 06.48.08
user990 19-JAN-12 10.59.59 19-JAN-12 08.11.15
use23 19-JAN-12 06.40.36 19-JAN-12 08.36.07
user43 19-JAN-12 11.23.05 19-JAN-12 08.28.02
user89 19-JAN-12 02.38.54 19-JAN-12 07.28.02
user4 19-JAN-12 01.24.09 19-JAN-12 06.01.07
user7 19-JAN-12 03.29.17 19-JAN-12 08.34.02
user9 19-JAN-12 09.42.13 19-JAN-12 06.35.54
``` | ```
SELECT username, TRUNC(login_time), min(login_time), max(logout_time)
FROM table_name
GROUP BY username, TRUNC(login_time)
ORDER BY username, TRUNC(login_time);
``` | It could be as simple as:
```
SELECT TRUNC(LOGIN_TIME), NAME, EMPID, MIN(LOGIN_TIME), MAX(LOGOUT_TIME)
FROM Table_Name
GROUP BY TRUNC(LOGIN_TIME),NAME, EMPID
ORDER BY EMPID, TRUNC(LOGIN_TIME)
```
That will return one line per employee/day, earliest login and latest logout for that employee/day. | How to extract minimum and maximum time from a table in Oracle? | [
"",
"sql",
"oracle",
"oracle-sqldeveloper",
""
] |
I use "$ipython notebook --pylab inline" to start the IPython notebook. The displayed matplotlib figure size is too big for me, and I have to adjust it manually. How do I set the default size for the figure displayed in a cell? | I believe the following works in version 0.11 and above. To check the version:
```
$ ipython --version
```
It may be worth adding this information to your question.
Solution:
You need to find the file `ipython_notebook_config.py`. Depending on your installation process this should be somewhere like
```
.config/ipython/profile_default/ipython_notebook_config.py
```
where `.config` is in your home directory.
Once you have located this file find the following lines
```
# Subset of matplotlib rcParams that should be different for the inline backend.
# c.InlineBackend.rc = {'font.size': 10, 'figure.figsize': (6.0, 4.0), 'figure.facecolor': 'white', 'savefig.dpi': 72, 'figure.subplot.bottom': 0.125, 'figure.edgecolor': 'white'}
```
Uncomment this line `c.InlineBack...` and define your default figsize in the second dictionary entry.
Note that this could be done in a python script (and hence interactively in IPython) using
```
pylab.rcParams['figure.figsize'] = (10.0, 8.0)
``` | Worked like a charm for me:
```
import matplotlib
matplotlib.rcParams['figure.figsize'] = (20, 10)
``` | How to set the matplotlib figure default size in ipython notebook? | [
"",
"python",
"matplotlib",
"jupyter-notebook",
""
] |
What I'm trying to do is have a dict that goes like
```
((category1: subcategory1, sub2, sub3), (cat2: sub12, sub22...))
```
However, I also want the subcategories to have their own values as well:
```
((subcat1: subitem1, subitem2), (subcat2: subitem12, subitem22)...)
```
I want to be able to reference the subitem such that it connects to the original category, not just subcategory. Is there a way to do this?
EDIT:
```
import json
import win32com.client as win32
from glob import glob
import io
import locale
from collections import defaultdict
import pprint
#import re
#importing needed clients
raw_files = glob('*.xlsx')
x = locale.getpreferredencoding()
print x
ex = win32.gencache.EnsureDispatch('Excel.Application')
ex.Visible = False
oFile = open("rawsort.txt", "w+")
#oFile = io.open("rawsort.txt", "w+", encoding = "utf-8")#text dump
for f in raw_files:
ex.Workbooks.Open(f)
sheet = ex.ActiveWorkbook
#print sheet.Sheets("MRD").Cells(1,2).Value
wincs = sheet.Worksheets('MRD')
if sheet.Sheets("MRD").Cells(1, 3).Value== None:
wincs.Cells(1,3).Value = u'N/A'
if sheet.Sheets("MRD").Cells(1, 4).Value== None:
wincs.Cells(1,4).Value = u'N/A'
if sheet.Sheets("MRD").Cells(1, 5).Value== None:
wincs.Cells(1,5).Value = u'N/A'
if not wincs.Cells(1,2).Value == None:
category[sheet.Sheets('MRD').Cells(1,2).Value][wincs.Cells(1,3).Value][wincs.Cells(1,4).Value] = wincs.Cells(1,5).Value
# print listy
print sheet.Sheets('MRD').UsedRange.Rows.Count
for x in xrange(2, sheet.Sheets("MRD").UsedRange.Rows.Count+1):#for loop to print through paragraphs
if sheet.Sheets("MRD").Cells(x, 3).Value== None:
wincs.Cells(x,3).Value = u'N/A'
if sheet.Sheets("MRD").Cells(x, 4).Value== None:
wincs.Cells(x,4).Value = u'N/A'
if sheet.Sheets("MRD").Cells(x, 5).Value== None:
wincs.Cells(x,5).Value = u'N/A'
if not sheet.Sheets("MRD").Cells(x, 2).Value== None:
category[sheet.Sheets('MRD').Cells(x,2).Value][wincs.Cells(x,3).Value][wincs.Cells(x,4).Value] = wincs.Cells(x,5).Value
#oFile.write(unicode(listy))
oFile.write("\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\n")
oFile.close()
```
However, this is my output:

Any help? | You probably want a tree data structure. Have a look at [this github gist](https://gist.github.com/hrldcpr/2012250) for an example of how you implement one with dictionaries in python. | You could store it like this:
```
categories = {
'category1': {
'subcat1': ['subitem1', 'subitem2'],
'subcat2': ['subitem12', 'subitem22']
},
'category2': {
# ...
}
}
``` | Can I use a value as a key for another dict? | [
"",
"python",
"dictionary",
"key",
""
] |
I have one understanding problem:
I want to create a user profile for my page, which will have many attributes. But since I am new to Django's auth User model, I don't know if I can extend that model with new attributes that I set for the user profile, e.g. address, age, ...
If not, why is the User model there? Is it only for auth purposes? How do I handle and differentiate these two user models?
Sample Case:
A user wants to log in; I check the permission with auth's User model and set some relation between Django's User model and mine? I'm just kind of confused here. | I shall try to explain from the perspective of pre-Django 1.5.
Django's auth models provide a set of "convenience" methods (login, logout, password reset, etc.) that work seamlessly. It is a very common scenario to need more fields, so one approach would be to create a user profile model which either inherits from the Django auth `User` model or has a `OneToOne` relationship with it. This way, you do not have to reimplement features that are already implemented. In addition, there are `groups` and `permissions` models in the package which add a whole layer of functionality in terms of user permissioning.
example:
```
from django.contrib.auth.models import User
class MyCustomProfile(User):
#inherits all the attributes of default `User` models
#additional models here.
```
OR
```
from django.contrib.auth.models import User
class MyCustomProfile(models.Model):
user = models.OneToOneField(User)
#additional models here.
```
This way, you can use all the features and build on top of them.
This has changed a little in Django 1.5, which lets users define custom user fields, saving you from creating a `UserProfile` model on top of the already existing `User` model. | Since Django 1.5 you can create your own user model with your custom fields and tell Django to use that model for auth.
Link to documentation: [Custom User Model](https://docs.djangoproject.com/en/dev/topics/auth/customizing/#specifying-a-custom-user-model)
**Update:**
If you are using Django 1.4, then [django-primate](https://github.com/aino/django-primate) can help you to define custom user model. | django user model - when do i need it? | [
"",
"python",
"django",
"django-models",
"django-users",
""
] |
Given two CIDR addresses say 192.168.2.0/14 and 192.168.2.0/32
How do I check if two ip addresses overlap in "python2.6"??
I have gone through netaddr and it allows to check if
192.168.2.0 is in CIDR address 192.168.2.0/14 by
```
from netaddr import IPNetwork, IPAddress
bool = IPAddress("192.168.2.0") in IPNetwork("192.168.2.0/14")
```
But how to check for two CIDR address??
I found a reference :: [How can I check if an ip is in a network in python](https://stackoverflow.com/questions/819355/how-can-i-check-if-an-ip-is-in-a-network-in-python) | Using [ipaddr](https://pypi.python.org/pypi/ipaddr):
```
>>> import ipaddr
>>> n1 = ipaddr.IPNetwork('192.168.1.0/24')
>>> n2 = ipaddr.IPNetwork('192.168.2.0/24')
>>> n3 = ipaddr.IPNetwork('192.168.2.0/25')
>>> n1.overlaps(n2)
False
>>> n1.overlaps(n3)
False
>>> n2.overlaps(n3)
True
>>> n2.overlaps(n1)
False
``` | I'll assume you actually want both CIDRs to represent ranges, even though in your example, 192.168.2.0/32 represents only one address. Also note that in 192.168.2.0/14, the .2. is meaningless, because the 14-bit prefix doesn't reach the third octet.
Anyway, there are several ways to do this. You could notice that for them to overlap, one must always be a subset of the other:
```
def cidrsOverlap(cidr0, cidr1):
return cidr0 in cidr1 or cidr1 in cidr0
```
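(Aside: on Python 3.3+ the standard-library `ipaddress` module offers the same check, so no third-party package is needed. Note it rejects networks with host bits set, such as 192.168.2.0/14, unless you pass `strict=False`:)

```python
import ipaddress

n1 = ipaddress.ip_network('192.168.0.0/14')   # host bits must be zero for a /14
n2 = ipaddress.ip_network('192.168.2.0/32')
print(n1.overlaps(n2))   # True: the single /32 address falls inside the /14
```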
Or you could notice that for the ranges to overlap, the first range's lowest address must be less than or equal to the second range's highest address, and vice versa. Thus:
```
def cidrsOverlap(cidr0, cidr1):
return cidr0.first <= cidr1.last and cidr1.first <= cidr0.last
print cidrsOverlap(IPNetwork('192.168.2.0/24'), IPNetwork('192.168.3.0/24'))
# prints False
print cidrsOverlap(IPNetwork('192.168.2.0/23'), IPNetwork('192.168.3.0/24'))
# prints True
``` | Check if two CIDR addresses intersect? | [
"",
"python",
"ip-address",
"cidr",
""
] |
I have a dictionary of dictionaries, and I'm trying to output the information within them in a certain way so that it will be usable for downstream analysis. Note: all the keys in `dict` are also in `list`.
```
for item in list:
for key, value in dict[item].items():
print item, key, value
```
This is the closest I've gotten to what I want, but it's still a long way off. Ideally what I want is:
```
item1 item2 item3 item4
key1 value value value value
key2 value value value value
key3 value value value value
```
Is this even possible? | First, if I understand your structure, the list is just a way of ordering the keys for the outer dictionary, and a lot of your complexity is trying to use these two together to simulate an ordered dictionary. If so, there's a much easier way to do that: use [`collections.OrderedDict`](http://docs.python.org/3/library/collections.html#collections.OrderedDict). I'll come back to that at the end.
---
First, you need to get all of the keys of your sub-dictionaries, because those are the rows of your output.
From comments, it sounds like all of the sub-dictionaries in `dct` have the same keys, so you can just pull the keys out of any arbitrary one of them:
```
keys = dct.values()[0].keys()
```
If each sub-dictionary can have a different subset of keys, you'll need to instead do a first pass over `dct` to get all the keys:
```
keys = reduce(set.union, map(set, dct.values()))
```
Some people find `reduce` hard to understand, even when you're really just using it as "`sum` with a different operator". For them, here's how to do the same thing explicitly:
```
keys = set()
for subdct in dct.values():
keys |= set(subdct)
```
---
Now, for each key's row, we need to get a column for each sub-dictionary (that is, each value in the outer dictionary), in the order specified by using the elements of the list as keys into the outer dictionary.
So, for each column `item`, we want to get the outer-dictionary value corresponding to the key in `item`, and then in the resulting sub-dictionary, get the value corresponding to the row's `key`. That's hard to say in English, but in Python, it's just:
```
dct[item][key]
```
If you don't actually have all the same keys in all of the sub-dictionaries, it's only slightly more complicated:
```
dct[item].get(key, '')
```
So, if you didn't want any headers, it would look like this:
```
with open('output.csv', 'wb') as f:
w = csv.writer(f, delimiter='\t')
for key in keys:
w.writerow(dct[item].get(key, '') for item in lst)
```
---
To add a header column, just prepend the header (in this case, `key`) to each of those rows:
```
with open('output.csv', 'wb') as f:
w = csv.writer(f, delimiter='\t')
for key in keys:
        w.writerow([key] + [dct[item].get(key, '') for item in lst])
```
Notice that I turned the genexp into a list comprehension so I could use list concatenation to prepend the `key`. It's conceptually cleaner to leave it as an iterator, and prepend with [`itertools.chain`](http://docs.python.org/3/library/itertools.html#itertools.chain), but in trivial cases like this with tiny iterables, I think that's just making the code harder to read:
```
with open('output.csv', 'wb') as f:
w = csv.writer(f, delimiter='\t')
for key in keys:
w.writerow(chain([key], (dct[item].get(key, '') for item in lst)))
```
---
You also want a header row. That's even easier; it's just the items in the list, with a blank column prepended for the header column:
```
with open('output.csv', 'wb') as f:
w = csv.writer(f, delimiter='\t')
w.writerow([''] + lst)
for key in keys:
w.writerow([key] + [dct[item].get(key, '') for item in lst])
```
---
However, there are two ways to make things even simpler.
First, you can use an `OrderedDict`, so you don't need the separate key list. If you're stuck with the separate `list` and `dict`, you can still build an `OrderedDict` on the fly to make your code easier to read. For example:
```
od = collections.OrderedDict((item, dct[item]) for item in lst)
```
And now:
```
with open('output.csv', 'wb') as f:
w = csv.writer(f, delimiter='\t')
w.writerow([''] + od.keys())
for key in keys:
w.writerow([key] + [subdct.get(key, '') for subdct in od.values()])
```
---
Second, you could just build the transposed structure:
```
transposed = {key_b: {key_a: dct[key_a].get(key_b, '') for key_a in dct}
for key_b in keys}
```
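For instance, the transposed structure could be written out with `csv.DictWriter`, which handles column ordering for you (the data below is a toy stand-in for the real `lst` and `transposed`; Python 3 file mode shown):

```python
import csv

# Toy stand-ins for the real `lst` and `transposed` built above (illustrative only)
lst = ['item1', 'item2']
transposed = {
    'key1': {'item1': 1, 'item2': 2},
    'key2': {'item1': 3, 'item2': 4},
}

# Python 3 file mode shown; on Python 2 use open('output.csv', 'wb')
with open('output.csv', 'w', newline='') as f:
    w = csv.DictWriter(f, fieldnames=[''] + lst, delimiter='\t')
    w.writeheader()
    # merge each row dict with its header-column ('') entry
    w.writerows(dict(row, **{'': key}) for key, row in sorted(transposed.items()))
```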
And then iterate over it in the obvious order (or use a [`DictWriter`](http://docs.python.org/3/library/csv.html#csv.DictWriter) to handle the ordering of the columns for you, and use its `writerows` method to deal with the rows, so the whole thing becomes a one-liner). | To store objects in Python so that you can re-use them later, you can use the `shelve` module. This is a module that lets you write objects to a shelf file and re-open it and retrieve the objects later, but it's operating-system-dependent, so it won't work if, say, you made it on a Mac and later you want to open it on a Windows machine.
```
import shelve
shelf = shelve.open("filename", flag='c')
#with flag='c', you have to delete the old shelf if you want to overwrite it
dict1 = #something
dict2 = #something
shelf['key1'] = dict1
shelf['key2'] = dict2
shelf.close()
```
To read objects from a shelf:
```
shelf_reader = shelve.open("filename", flag='r')
for k in shelf_reader.keys():
retrieved = shelf_reader[k]
print(retrieved) #prints the retrieved dictionary
shelf_reader.close()
``` | How do I write the contents of nested dictionaries to a file in a certain format? | [
"",
"python",
"output",
"output-formatting",
""
] |
I have two tables, Projets and Compte. I want to display each project and its chef in one query,
so I did this:
```
Query q=se.createQuery("SELECT p.idpro,p.IdProjet,p.NomProjet,p.DateDeb,p.DateFin,p.nomimg (SELECT c.Nom,c.Prenom FROM Compte u WHERE u.Id = p.IdChef group by u.id) FROM Projets p ");
listPrj=q.list();
data.setWrappedData(listPrj);
```
but I still get this error:
```
Caused by: org.hibernate.QueryException: aggregate function expected before ( in SELECT [SELECT p.idpro,p.IdProjet,p.NomProjet,p.DateDeb,p.DateFin,p.nomimg (SELECT c.Nom,c.Prenom FROM com.persistence.Compte u WHERE u.Id = p.IdChef group by u.id) FROM com.persistence.Projets p ]
at org.hibernate.hql.classic.SelectParser.token(SelectParser.java:100)
at org.hibernate.hql.classic.ClauseParser.token(ClauseParser.java:86)
at org.hibernate.hql.classic.ClauseParser.end(ClauseParser.java:113)
at org.hibernate.hql.classic.PreprocessingParser.end(PreprocessingParser.java:122)
at org.hibernate.hql.classic.ParserHelper.parse(ParserHelper.java:29)
```
```
I have no idea how to fix it. | An immediate cause of your problem is in the inner query:
```
SELECT c.Nom, -- <-- Aggregate (min, max, sum etc.) expected here
c.Prenom -- <-- Aggregate (min, max, sum etc.) expected here
FROM Compte u
WHERE u.Id = p.IdChef
GROUP BY u.id
```
When GROUP BY is used, every field in the SELECT list must appear either inside an aggregate function or in the GROUP BY clause.
The query itself is very cumbersome; something like this, IMHO, is what you want:
```
SELECT p.idpro,
p.IdProjet,
p.NomProjet,
p.DateDeb,
p.DateFin,
p.nomimg,
c.Nom,
c.Prenom
FROM Compte c,
Projets p
WHERE (c.Id = p.IdChef)
``` | ```
SELECT p.idpro, p.IdProjet, p.NomProjet, p.DateDeb, p.DateFin, p.nomimg, c.Nom, c.Prenom
from Projets p, Compte c
where c.Id = p.IdChef
``` | aggregate function expected before ( in SELECT with subquery | [
"",
"mysql",
"sql",
"hibernate",
"aggregate",
""
] |
I am attempting to use matplotlib to plot some figures for a paper I am working on. I have two sets of data in 2D numpy arrays: An ascii hillshade raster which I can happily plot and tweak using:
```
import matplotlib.pyplot as pp
import numpy as np
hillshade = np.genfromtxt('hs.asc', delimiter=' ', skip_header=6)[:,:-1]
pp.imshow(hillshade, vmin=0, vmax=255)
pp.gray()
pp.show()
```
Which gives:

And a second ascii raster which delineates properties of a river flowing across the landscape. This data can be plotted in the same manner as above, however values in the array which do not correspond to the river network are assigned a no data value of -9999. The aim is to have the no data values set to be transparent so the river values overlie the hillshade.
This is the river data, ideally every pixel represented here as 0 would be completely transparent.

Having done some research on this it seems I may be able to convert my data into an RGBA array and set the alpha values to only make the unwanted cells transparent. However, the values in the river array are floats and cannot be transformed (as the original values are the whole point of the figure) and I believe the `imshow` function can only take unsigned integers if using the RGBA format.
Is there any way around this limitation? I had hoped I could simply create a tuple with the pixel value and the alpha value and plot them like that, but this does not seem possible.
I have also had a play with `PIL` to attempt to create a PNG file of the river data with the no data value transparent, however this seems to automatically scale the pixel values to 0-255, thereby losing the values I need to preserve.
I would welcome any insight anyone has on this problem. | Just [mask](https://docs.scipy.org/doc/numpy/reference/generated/numpy.ma.masked_where.html) your "river" array.
e.g.
```
rivers = np.ma.masked_where(rivers == 0, rivers)
```
As a quick example of overlaying two plots in this manner:
```
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.cm as cm
# Generate some data...
gray_data = np.arange(10000).reshape(100, 100)
masked_data = np.random.random((100,100))
masked_data = np.ma.masked_where(masked_data < 0.9, masked_data)
# Overlay the two images
fig, ax = plt.subplots()
ax.imshow(gray_data, cmap=cm.gray)
ax.imshow(masked_data, cmap=cm.jet, interpolation='none')
plt.show()
```

Also, on a side note, `imshow` will happily accept floats for its RGBA format. It just expects everything to be in a range between 0 and 1. | An alternate way to do this with out using masked arrays is to set how the color map deals with clipping values below the minimum of `clim` (shamelessly using Joe Kington's example):
```
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.cm as cm
# Generate some data...
gray_data = np.arange(10000).reshape(100, 100)
masked_data = np.random.random((100,100))
my_cmap = cm.jet
my_cmap.set_under('k', alpha=0)
# Overlay the two images
fig, ax = plt.subplots()
ax.imshow(gray_data, cmap=cm.gray)
im = ax.imshow(masked_data, cmap=my_cmap,
interpolation='none',
clim=[0.9, 1])
plt.show()
```

There as also a `set_over` for clipping off the top and a `set_bad` for setting how the color map handles 'bad' values in the data.
An advantage of doing it this way is you can change your threshold by just adjusting `clim` with `im.set_clim([bot, top])` | Setting Transparency Based on Pixel Values in Matplotlib | [
"",
"python",
"matplotlib",
"python-imaging-library",
""
] |
I was curious if there was any indication of which of `operator.itemgetter(0)` or `lambda x:x[0]` is better to use, specifically in `sorted()` as the `key` keyword argument as that's the use that springs to mind first. Are there any known performance differences? Are there any PEP related preferences or guidance on the matter? | According to my benchmark on a list of 1000 tuples, using `itemgetter` is almost twice as quick as the plain `lambda` method. The following is my code:
```
In [1]: a = list(range(1000))
In [2]: b = list(range(1000))
In [3]: import random
In [4]: random.shuffle(a)
In [5]: random.shuffle(b)
In [6]: c = list(zip(a, b))
In [7]: %timeit c.sort(key=lambda x: x[1])
81.4 µs ± 433 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
In [8]: random.shuffle(c)
In [9]: from operator import itemgetter
In [10]: %timeit c.sort(key=itemgetter(1))
47 µs ± 202 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
```
I have also tested the performance (run time in µs) of these two methods for various list sizes.
```
+-----------+--------+------------+
| List size | lambda | itemgetter |
+-----------+--------+------------+
| 100 | 8.19 | 5.09 |
+-----------+--------+------------+
| 1000 | 81.4 | 47 |
+-----------+--------+------------+
| 10000 | 855 | 498 |
+-----------+--------+------------+
| 100000 | 14600 | 10100 |
+-----------+--------+------------+
| 1000000 | 172000 | 131000 |
+-----------+--------+------------+
```
[](https://i.stack.imgur.com/Lyfjg.jpg)
(The code producing the above image can be found [here](https://gist.github.com/jdhao/5569afa7efc13abf75a5baf18e7c29d6))
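`itemgetter` also keeps multi-column sorts concise; a quick sketch:

```python
from operator import itemgetter

rows = [(2, 'b', 30), (1, 'b', 10), (1, 'a', 20)]
rows.sort(key=itemgetter(1, 0))   # same as key=lambda r: (r[1], r[0])
print(rows)   # [(1, 'a', 20), (1, 'b', 10), (2, 'b', 30)]
```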
Combined with the conciseness of selecting multiple elements from a list, `itemgetter` is clearly the winner to use as a sort key. | The performance of itemgetter is slightly better:
```
>>> from timeit import timeit
>>> from operator import itemgetter
>>> # w is some pre-built list of tuples
>>> f1 = lambda: sorted(w, key=lambda x: x[1])
>>> f2 = lambda: sorted(w, key=itemgetter(1))
>>> timeit(f1)
21.33667682500527
>>> timeit(f2)
16.99106214600033
``` | operator.itemgetter or lambda | [
"",
"python",
"python-2.7",
"python-3.x",
""
] |
A table in my database holds values as below,
```
TBLFlow
FlowId FlowName ProcessId
------------------------------------------------
F00 Flow1 PID01-PID02-PID03
F01 Flow2 PID01-PID03-PID02
```
The Name of the process are listed in another table as below,
```
TBLProcess
ProcessId ProcessName
---------------------------
PID01 Process1
PID02 Process2
PID03 Process3
```
Now, I would like to split the values in the table 'TBLFlow' in order to get their name from the table 'TBLProcess' by perform a join(preferably 'Inner Join') between two tables.
Finally, when I execute the query, I would like the result to be as below,
```
FlowId FlowName ProcessId ProcessName
------------------------------------------------------------------------------
F01 Flow1 PID01-PID02-PID03 Process1-Process2-Process3
F01 Flow1 PID01-PID03-PID02 Process1-Process3-Process2
```
I am working on SQL Server 2008 and would like to do this operation in a single Stored Procedure.Could you help me on the query to written in the Stored Procedure.
**EDIT:**
Table 'TBLFlow' can be recontructed as below,
```
TBLFlow
FlowId FlowName ProcessId
------------------------------------------------
F00 Flow1 PID01
F00 Flow1 PID02
F00 Flow1 PID03
F01 Flow2 PID01
F01 Flow2 PID03
F01 Flow2 PID02
``` | I imagined such a monster
```
DECLARE @TBLFlow table (FlowId varchar(20), FlowName varchar(100), ProcessId varchar(1000))
DECLARE @TBLProcess table (ProcessId varchar(20), ProcessName varchar(100))
insert into @TBLFlow values ('F00','Flow1','PID01-PID02-PID03'), ('F01','Flow2','PID01-PID03-PID02')
insert into @TBLProcess values ('PID01','Process1'), ('PID02','Process2'), ('PID03','Process3')
;with c as
(
select
1 as rn,
FlowId,
FlowName,
CHARINDEX('-',ProcessId,1) as Pos,
case when CHARINDEX('-',ProcessId,1)>0 then SUBSTRING(ProcessId,1,CHARINDEX('-',ProcessId,1)-1) else ProcessId end as value,
case when CHARINDEX('-',ProcessId,1)>0 then SUBSTRING(ProcessId,CHARINDEX('-',ProcessId,1)+1,LEN(ProcessId)-CHARINDEX('-',ProcessId,1)) else '' end as ProcessId
from @TBLFlow
union all
select
rn + 1 as rn,
FlowId,
FlowName,
CHARINDEX('-',ProcessId,1) as Pos,
case when CHARINDEX('-',ProcessId,1)>0 then SUBSTRING(ProcessId,1,CHARINDEX('-',ProcessId,1)-1) else ProcessId end as Value,
case when CHARINDEX('-',ProcessId,1)>0 then SUBSTRING(ProcessId,CHARINDEX('-',ProcessId,1)+1,LEN(ProcessId)-CHARINDEX('-',ProcessId,1)) else '' end as ProcessId
from c
where LEN(ProcessId)>0
)
select
f.FlowId,
f.FlowName,
f.ProcessId,
stuff(
(
select '-'+p.ProcessName
from c
inner join @TBLProcess p on p.ProcessId=c.value
where c.flowid=f.flowid
order by c.rn
FOR XML PATH('')
),
1,
1,
''
) as ProcessName
from @TBLFlow f
``` | You can try this
```
SELECT F.*,ISNULL(P1.ProcessName,'') + '-' + ISNULL(P2.ProcessName,'') + '-' +
ISNULL(P3.ProcessName,'')
FROM TBLFlow AS F
LEFT JOIN TBLProcess AS P1
ON SUBSTRING(F.ProcessId,0,CHARINDEX('-',F.ProcessId)) = P1.ProcessId
LEFT JOIN TBLProcess AS P2
ON LEFT(RIGHT(F.ProcessId,LEN(F.ProcessId)-CHARINDEX('-',F.ProcessId,1)),
CHARINDEX('- ',RIGHT(F.ProcessId,LEN(F.ProcessId)-
CHARINDEX('-',F.ProcessId,1)),1)-1)
= P2.ProcessId
LEFT JOIN TBLProcess AS P3
ON SUBSTRING(F.ProcessId, LEN(RIGHT(F.ProcessId,LEN(F.ProcessId)-
CHARINDEX('-',F.ProcessId))) + 2,LEN(F.ProcessId)) = P3.ProcessId
``` | Split values by hyphen that exist in the same row and integrate with a INNER JOIN? | [
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
""
] |
I'm creating a simple blog on Flask and I'm trying to implement Flask-Admin to manage my posts. If I go to the admin area I can see a list of all my post from the DB but when I try to create a new one I got the next error:
```
Failed to create model. __init__() takes exactly 4 arguments (1 given)
```
This is my post model:
```
class Post(db.Model):
__tablename__ = 'news'
nid = db.Column(db.Integer, primary_key = True)
title = db.Column(db.String(100))
content = db.Column(db.Text)
created_at = db.Column(db.DateTime)
def __init__(self, title, content):
self.title = title.title()
self.content = content
self.created_at = datetime.datetime.now()
```
And this is my code to add the model to the UI:
```
from flask import Flask, session
from models import db, Post
from flask.ext.admin import Admin
from flask.ext.admin.contrib.sqlamodel import ModelView
app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'mysql://root:pass@localhost/appname'
db.init_app(app)
admin = Admin(app)
admin.add_view(ModelView(Post, db.session))
```
I CAN edit models through the admin panel but not create new ones. I know I'm missing something really stupid but I can't figure out what it is.
Edit: it works if I don't implement `__init__` on the model. How can I fix this? | Take a look at the relevant part in the source code for Flask-Admin [here](https://github.com/mrjoes/flask-admin/blob/08a4de5781bc55b431f406b8ac9230363ad6e9d5/flask_admin/contrib/sqla/view.py#L887).
The model is created without passing any arguments:
```
model = self.model()
```
So you should support a constructor that takes no arguments as well. For example, declare your `__init__` constructor with default arguments:
```
def __init__(self, title = "", content = ""):
self.title = title.title()
self.content = content
self.created_at = datetime.datetime.now()
``` | So, this is how I've implemented a Post class in my application:
```
class Post(db.Model):
id = db.Column(db.Integer, primary_key=True)
title = db.Column(db.Unicode(80))
body = db.Column(db.UnicodeText)
    create_date = db.Column(db.DateTime, default=datetime.utcnow)  # pass the callable, not its result
    update_date = db.Column(db.DateTime, default=datetime.utcnow)
status = db.Column(db.Integer, default=DRAFT)
user_id = db.Column(db.Integer, db.ForeignKey('user.id'))
    def __init__(self, title, body, create_date, update_date, status, user_id):
self.title = title
self.body = body
self.create_date = create_date
self.update_date = update_date
self.status = status
self.user_id = user_id
```
If you're going to stick with instantiating your model with a `created_at` value of `datetime.datetime.now()`, you may want to reference my code above, wherein the equivalent `datetime.utcnow()` function is set as the default for `create_date` and `update_date`.
One thing I'm curious about is your use of `self.title=title.title()` and `self.content = content.title()`; are those values coming from a function?
If not and you're passing them as strings, I think you'd want to update those to `self.title = title` and `self.content = content`
That could explain why you're seeing your issue. If `content.title()` isn't a function, that would result in no argument for that parameter...
you might try using the following and seeing if it resolves your issue:
```
class Post(db.Model):
__tablename__ = 'news'
nid = db.Column(db.Integer, primary_key = True)
title = db.Column(db.String(100))
content = db.Column(db.Text)
created_at = db.Column(db.DateTime, default=datetime.datetime.now())
def __init__(self, title, content, created_at):
self.title = title
self.content = content
self.created_at = created_at
``` | Unable to create models on Flask-admin | [
"",
"python",
"flask",
"flask-sqlalchemy",
"flask-admin",
""
] |
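A side note on the accepted answer above: the requirement is independent of Flask-Admin itself. Any code that instantiates the model as `self.model()` needs every constructor argument to have a default. A minimal plain-Python sketch of that constructor pattern (no Flask or SQLAlchemy involved; the class here is illustrative only):

```python
import datetime

class Post:
    # Default values make a no-argument call like Post() legal,
    # which is exactly what Flask-Admin's create view relies on.
    def __init__(self, title="", content=""):
        self.title = title.title()
        self.content = content
        self.created_at = datetime.datetime.now()

empty = Post()                            # works: no arguments required
filled = Post("hello world", "body text") # normal construction still works
```

With defaults in place, the admin can construct an empty instance and assign the form fields afterwards.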
```
import copy
a = "deepak"
b = 1, 2, 3, 4
c = [1, 2, 3, 4]
d = {1: 10, 2: 20, 3: 30}
a1 = copy.copy(a)
b1 = copy.copy(b)
c1 = copy.copy(c)
d1 = copy.copy(d)
print("immutable - id(a)==id(a1)", id(a) == id(a1))
print("immutable - id(b)==id(b1)", id(b) == id(b1))
print("mutable - id(c)==id(c1)", id(c) == id(c1))
print("mutable - id(d)==id(d1)", id(d) == id(d1))
```
I get the following results:
```
immutable - id(a)==id(a1) True
immutable - id(b)==id(b1) True
mutable - id(c)==id(c1) False
mutable - id(d)==id(d1) False
```
If I perform deepcopy:
```
a1 = copy.deepcopy(a)
b1 = copy.deepcopy(b)
c1 = copy.deepcopy(c)
d1 = copy.deepcopy(d)
```
results are the same:
```
immutable - id(a)==id(a1) True
immutable - id(b)==id(b1) True
mutable - id(c)==id(c1) False
mutable - id(d)==id(d1) False
```
If I work on assignment operations:
```
a1 = a
b1 = b
c1 = c
d1 = d
```
then results are:
```
immutable - id(a)==id(a1) True
immutable - id(b)==id(b1) True
mutable - id(c)==id(c1) True
mutable - id(d)==id(d1) True
```
Can somebody explain what exactly makes a difference between the copies? Is it something related to mutable & immutable objects? If so, can you please explain it to me? | Normal assignment operations will simply point the new variable towards the existing object. The [docs](http://docs.python.org/2/library/copy.html) explain the difference between shallow and deep copies:
> The difference between shallow and deep copying is only relevant for
> compound objects (objects that contain other objects, like lists or
> class instances):
>
> * A shallow copy constructs a new compound object and then (to the extent possible) inserts references into it to the objects found in the original.
> * A deep copy constructs a new compound object and then, recursively, inserts copies into it of the objects found in the
> original.
Here's a little demonstration:
```
import copy
a = [1, 2, 3]
b = [4, 5, 6]
c = [a, b]
```
Using normal assignment operations to copy:
```
d = c
print id(c) == id(d) # True - d is the same object as c
print id(c[0]) == id(d[0]) # True - d[0] is the same object as c[0]
```
Using a shallow copy:
```
d = copy.copy(c)
print id(c) == id(d) # False - d is now a new object
print id(c[0]) == id(d[0]) # True - d[0] is the same object as c[0]
```
Using a deep copy:
```
d = copy.deepcopy(c)
print id(c) == id(d) # False - d is now a new object
print id(c[0]) == id(d[0]) # False - d[0] is now a new object
``` | For immutable objects, there is no need for copying because the data will never change, so Python uses the same data; ids are always the same. For mutable objects, since they can potentially change, [shallow] copy creates a new object.
Deep copy is related to nested structures. If you have a list of lists, then deepcopy copies the nested lists as well; it is a recursive copy. With just copy, you have a new outer list, but the inner lists are references.
Assignment does not copy. It simply sets the reference to the old data. So you need copy to create a new list with the same contents. | What is the difference between shallow copy, deepcopy and normal assignment operation? | [
"",
"python",
"copy",
"variable-assignment",
"immutability",
"deep-copy",
""
] |
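To complement the demonstration in the accepted answer above, here is the same `a`, `b`, `c` setup where the shared inner list is mutated after copying; this shows the practical consequence of references versus copies (a small sketch, written for Python 3):

```python
import copy

a = [1, 2, 3]
b = [4, 5, 6]
c = [a, b]

shallow = copy.copy(c)      # new outer list, same inner lists
deep = copy.deepcopy(c)     # new outer list, new inner lists

a.append(99)                # mutate the shared inner list in place

# the shallow copy still references a, so it sees the change;
# the deep copy made its own inner lists, so it does not
```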
The code below gives an error:
```
Traceback (most recent call last):
File "pdf.py", line 14, in <module>
create_pdf(render_template('templates.htm'))
File "/usr/local/lib/python2.7/dist-packages/flask/templating.py", line 123, in render_template
ctx.app.update_template_context(context)
AttributeError: 'NoneType' object has no attribute 'app'
```
Code:
```
from xhtml2pdf import pisa
from StringIO import StringIO
from flask import render_template,Flask
app=Flask(__name__)
app.debug=True
@app.route("/")
def create_pdf(pdf_data):
filename= "file.pdf"
pdf=pisa.CreatePDF( StringIO(pdf_data),file(filename, "wb"))
if __name__ == "__main__":
create_pdf(render_template('templates.htm'))
``` | From the code, I can see that you want to allow the user to download the PDF.
```
from xhtml2pdf import pisa
from StringIO import StringIO
from flask import render_template,Flask, Response
app=Flask(__name__)
app.debug=True
@app.route("/")
def create_pdf(pdf_data):
filename= "file.pdf"
pdf=pisa.CreatePDF( StringIO(pdf_data),file(filename, "wb"))
return Response(pdf, mimetype='application/octet-stream',
headers={"Content-Disposition": "attachment;filename=%s" % filename})
if __name__ == "__main__":
app.run()
```
Now, run `python aboveprogram.py`
Go to `http://localhost:5000`
The browser prompts you to download the PDF. | Martin's answer gives a good explanation of *why* this error occurs.
The accepted answer fixes the problem posed but it's certainly not the only way. In my case I had something more like:
```
import threading
from flask import Flask, render_template
app = Flask("myapp")
@app.route('/')
def get_thing(thing_id):
thing = cache.get(thing_id)
if thing is None:
# Handle cache miss...
elif is_old(thing):
# We'll serve the stale content but let's
# update the cache in a background thread
t = threading.Thread(
target=get_thing_from_datastore_render_and_cache_it,
args=(thing_id,)
)
t.start()
return thing
def get_thing_from_datastore_render_and_cache_it(thing_id):
thing = datastore.get(thing_id)
cache.set(render_template(thing))
```
But when `get_thing_from_datastore_render_and_cache_it` was run in the background thread outside the Flask request cycle I was getting the error shown above because that thread did not have access to a request context.
The error occurs because Flask offers a developer shortcut to allow accessing request variables in the template automagically - put another way, it is caused by the decisions Flask made about how to wrap Jinja2's functionality, not Jinja2 itself. My approach to solving this was just to use Jinja2's rendering directly:
```
import jinja2
def render_without_request(template_name, **template_vars):
"""
Usage is the same as flask.render_template:
render_without_request('my_template.html', var1='foo', var2='bar')
"""
env = jinja2.Environment(
loader=jinja2.PackageLoader('name.ofmy.package','templates')
)
template = env.get_template(template_name)
return template.render(**template_vars)
```
That function assumes that your Flask app has the traditional templates subfolder. Specifically, the project structure here would be
```
.
└── name/
├── ofmy/
| ├── package/
| | ├── __init__.py <--- Where your Flask application object is defined
| | └── templates/
| | └── my_template.html
| └── __init__.py
└── __init__.py
```
If you have a subdirectory structure under `templates/`, you just pass the relative path from the root of the templates folder the same as you would when using Flask's `render_template`. | AttributeError: 'NoneType' object has no attribute 'app' | [
"",
"python",
"flask",
"xhtml2pdf",
""
] |
My 'Location' table contains 6 columns. (ID, Name, Alias, Area, X, Y)
Example of some entries from the 'Name' column:
```
Blackwood Highschool, Paris, France
Hilltop Market, Barcelona, Spain
Roundwell Plaza, Melbourne, Australia
Rurk Mount, Moscow, Russia(mountain)
History Museum, Prague, Czech Republic
Narrow River (river), Bombay, India
```
Some entries include "(mountain)", "(river)" or "(...)" within the name (8 different ones). I don't know why the table was created this way. It should have had an extra column for this data, but well.
I want to remove just the "(...)" substrings from the Location Names. I don't know how to do it, it's something like this so you get an idea:
```
DELETE FROM 'Location'
WHERE 'Name'
LIKE '%(%)%';
```
I know this would delete the whole row, but I just want to remove the (%) term from the 'Name' string. | If you only have 8 variations, and this is a one time thing, you could do it with a replace.
```
update location
set name = replace(name , '(river)','')
where name like '%(river)%';
``` | You can do this with brute force string operations:
```
select concat(left(name, instr(name, '(') - 1),
right(name, length(name) - instr(name, ')'))
)
```
Actually, you want this in an `update` statement:
```
update location
set name = concat(left(name, instr(name, '(') - 1),
right(name, length(name) - instr(name, ')'))
)
where name like '%(%)%';
```
You do *not* want `delete`, because that deletes entire rows. | MySQL - Remove substring from an entry | [
"",
"mysql",
"sql",
"substring",
"sql-like",
""
] |
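As a follow-up to the answers above: if the parenthesised tags are not limited to the eight known variations, the same cleanup can be done outside SQL with a regular expression. A Python sketch (not part of either answer; sample names taken from the question):

```python
import re

names = [
    "Rurk Mount, Moscow, Russia(mountain)",
    "Narrow River (river), Bombay, India",
]
# strip any "(...)" tag, together with an optional leading space
cleaned = [re.sub(r"\s*\([^)]*\)", "", n) for n in names]
```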
I am trying to select from two tables and combine data from one with the other using WHERE
(PL/SQL)
I have two tables like these:
table1
```
ID NAME COLOR COMPANY_SHORT_NR
1 a Green 1
2 b Red 23
3 c Blue null
4 a Green null
5 g Green 1
```
table2
```
ID SHORT COMP_NAME
1 1 company_name_1
2 23 comapny_name_2
```
and now I would like to get all the data from table 1 with company names, and if it is null, show that it is null, like this:
```
1 a Green company_name_1
2 b Red comapny_name_2
3 c Blue null
4 a Green null
5 g Green company_name_1
```
I tried to do it like this:
```
select ID
,NAME
,COLOR
,COMPANY_SHORT_NR
from table1
,table2
where COMPANY_SHORT_NR = SHORT
```
but this gives me only the non-null values:
```
1 a Green company_name_1
2 b Red comapny_name_2
5 g Green company_name_1
```
If I use something like this:
```
select ID
,NAME
,COLOR
,COMPANY_SHORT_NR
from table1
,table2
where COMPANY_SHORT_NR = SHORT or COMPANY_SHORT_NR is null
```
I get thousands of records...
If I use only `IS NULL`, then it returns only 2 rows, as it should.
Where am I making a mistake? | You need an OUTER JOIN for that:
```
select ID
,NAME
,COLOR
,COMPANY_SHORT_NR
from table1
LEFT OUTER JOIN table2 ON ( COMPANY_SHORT_NR = SHORT )
``` | You have to use `left join` as below
```
select ID
,NAME
,COLOR
,COMPANY_SHORT_NR
from table1 t1
left join table2 T2 on t1.COMPANY_SHORT_NR = t2.SHORT
``` | One select on two tables with NULL and not null | [
"",
"sql",
"oracle",
"select",
"plsql",
"isnull",
""
] |
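The LEFT OUTER JOIN behaviour from the accepted answer above can be sanity-checked with SQLite's in-memory database (a sketch using the question's sample rows; the SQL dialect differs from Oracle, but the join semantics are the same):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE table1(id INTEGER, name TEXT, color TEXT, company_short_nr INTEGER);
    CREATE TABLE table2(id INTEGER, short INTEGER, comp_name TEXT);
    INSERT INTO table1 VALUES (1,'a','Green',1),(2,'b','Red',23),(3,'c','Blue',NULL);
    INSERT INTO table2 VALUES (1,1,'company_name_1'),(2,23,'company_name_2');
""")
rows = con.execute("""
    SELECT t1.id, t1.name, t1.color, t2.comp_name
    FROM table1 t1
    LEFT OUTER JOIN table2 t2 ON t1.company_short_nr = t2.short
    ORDER BY t1.id
""").fetchall()
# unmatched rows survive the join, with None (NULL) for comp_name
```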
```
SqlConnection con = new SqlConnection("My path");
SqlCommand cmd;
SqlDataReader dr;
Label1.Text = Session["name"].ToString();
con.Open();
cmd = new SqlCommand("Select '"+Session["name"]+"' from table1",con);
dr = cmd.ExecuteReader();
while (dr.Read()) {
Dropdownlist1.Items.Add(); ------->Stuck here
}
```
I am selecting a column from a table which will have 3 or more entries, and I want to put them in a dropdown list. I tried different approaches, but nothing gives me the output I require: it either shows an error or displays the first entry only.
Please feel free to edit my code and provide a suitable idea.
THANKS
This is my table:
```
SKY | SEA | LAND
-----------------------
EAGLE| SHARK | LION
CROW | FISH | TIGER
DUCK | WHALE | DEER
```
Where `Session["name"]` can be SKY/SEA/LAND, and depending on that I want my dropdown to show the entries below it. If SKY is Session["name"], then the dropdown list should display EAGLE, CROW, DUCK. | Based on your question, you are passing Session["name"], which is your table column name, to your select query.
There are also changes needed in your code.
Let's say your Session["name"] value is **SKY**:
```
"Select '"+Session["name"]+"' from tableA" --session["name"] is treated here as a string
```
This will be rendered as `Select 'Session[Name]' from TableA`, which is incorrect,
so the result will be:
```
Session[Name]
Session[Name]
Session[Name]
```
Remove the single quotes from the query. Again, let's say the Session["name"] value is **SKY**:
```
cmd = new SqlCommand("Select "+Session["name"].ToString()+" from table1",con);
```
The query will be `Select SKY from table1`,
and the select result:
```
EAGLE
CROW
DUCK
```
If so, try this:
```
SqlConnection con = new SqlConnection("My path");
SqlCommand cmd;
SqlDataReader dr;
Label1.Text = Session["name"].ToString();
con.Open();
cmd = new SqlCommand("Select "+Session["name"].ToString()+" from table1",con);
dr = cmd.ExecuteReader();
while (dr.Read())
{
// get the results of each column
Dropdownlist1.Items.Add(dr[0].ToString()); // this will get the first column of the result, whichever column Session["name"] names
}
```
Check also this tutorial: [Lesson 04: Reading Data with the SqlDataReader](http://csharp-station.com/Tutorial/AdoDotNet/Lesson04)
Best Regards | Have you tried?
```
while (dr.Read()) {
Dropdownlist1.Items.Add(dr["Name"].ToString());
}
```
If you only want to fetch one value, without having to specify the column name, you could use GetString(0), like:
```
while (dr.Read()) {
Dropdownlist1.Items.Add(dr.GetString(0));
break;
}
``` | Not able to display column entries using Data Reader | [
"",
"asp.net",
"sql",
"sql-server",
"c#-4.0",
""
] |
Say I have two lists, one longer than the other: `x = [1,2,3,4,5,6,7,8]` and `y = ['a','b','c']`. I want to merge each element of y into every 3rd index of x, so the resulting list z would look like: `z = [1,2,'a',3,4,'b',5,6,'c',7,8]`
What would be the best way of going about this in python? | Here is an adapted version of the roundrobin recipe from the [itertools documentation](http://docs.python.org/2/library/itertools.html#recipes) that should do what you want:
```
from itertools import cycle, islice
def merge(a, b, pos):
"merge('ABCDEF', [1,2,3], 3) --> A B 1 C D 2 E F 3"
iterables = [iter(a)]*(pos-1) + [iter(b)]
pending = len(iterables)
nexts = cycle(iter(it).next for it in iterables)
while pending:
try:
for next in nexts:
yield next()
except StopIteration:
pending -= 1
nexts = cycle(islice(nexts, pending))
```
Example:
```
>>> list(merge(xrange(1, 9), 'abc', 3)) # note that this works for any iterable!
[1, 2, 'a', 3, 4, 'b', 5, 6, 'c', 7, 8]
```
Or here is how you could use `roundrobin()` as it is without any modifications:
```
>>> x = [1,2,3,4,5,6,7,8]
>>> y = ['a','b','c']
>>> list(roundrobin(*([iter(x)]*2 + [y])))
[1, 2, 'a', 3, 4, 'b', 5, 6, 'c', 7, 8]
```
Or an equivalent but slightly more readable version:
```
>>> xiter = iter(x)
>>> list(roundrobin(xiter, xiter, y))
[1, 2, 'a', 3, 4, 'b', 5, 6, 'c', 7, 8]
```
Note that both of these methods work with any iterable, not just sequences.
Here is the original `roundrobin()` implementation:
```
from itertools import cycle, islice
def roundrobin(*iterables):
"roundrobin('ABC', 'D', 'EF') --> A D E B F C"
# Recipe credited to George Sakkis
pending = len(iterables)
nexts = cycle(iter(it).next for it in iterables)
while pending:
try:
for next in nexts:
yield next()
except StopIteration:
pending -= 1
nexts = cycle(islice(nexts, pending))
``` | Here's another way:
```
x = range(1, 9)
y = list('abc')
from itertools import count, izip
from operator import itemgetter
from heapq import merge
print map(itemgetter(1), merge(enumerate(x), izip(count(1, 2), y)))
# [1, 2, 'a', 3, 4, 'b', 5, 6, 'c', 7, 8]
```
This keeps it all lazy before building the new list, and lets `merge` naturally merge the sequences... kind of a decorate/undecorate... It does require Python 2.7 for `count` to have a `step` argument though.
So, to walk it through a bit:
```
a = list(enumerate(x))
# [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 7), (7, 8)]
b = zip(count(1, 2), y)
# [(1, 'a'), (3, 'b'), (5, 'c')]
print list(merge(a, b))
# [(0, 1), (1, 2), (1, 'a'), (2, 3), (3, 4), (3, 'b'), (4, 5), (5, 6), (5, 'c'), (6, 7), (7, 8)]
```
Then the `itemgetter(1)` just takes the actual value removing the index... | Merge 2 lists at every x position | [
"",
"python",
""
] |
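For readers who find the itertools machinery in the answers above heavy, the same interleaving can be written as a plain loop. A sketch (`merge_every` is a name introduced here, not from either answer):

```python
def merge_every(x, y, pos):
    """Insert one element of y after every (pos - 1) elements of x."""
    it = iter(x)
    out = []
    for item in y:
        for _ in range(pos - 1):
            out.append(next(it))
        out.append(item)
    out.extend(it)  # whatever is left of x goes on the end
    return out

z = merge_every([1, 2, 3, 4, 5, 6, 7, 8], ['a', 'b', 'c'], 3)
```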
In pandas, how can I add a new column which enumerates rows based on a given grouping?
For instance, assume the following DataFrame:
```
import pandas as pd
import numpy as np
a_list = ['A', 'B', 'C', 'A', 'A', 'C', 'B', 'B', 'A', 'C']
df = pd.DataFrame({'col_a': a_list, 'col_b': range(10)})
df
```
```
col_a col_b
0 A 0
1 B 1
2 C 2
3 A 3
4 A 4
5 C 5
6 B 6
7 B 7
8 A 8
9 C 9
```
I'd like to add a `col_c` that gives me the Nth row of the "group" based on a grouping of `col_a` and sorting of `col_b`.
Desired output:
```
col_a col_b col_c
0 A 0 1
3 A 3 2
4 A 4 3
8 A 8 4
1 B 1 1
6 B 6 2
7 B 7 3
2 C 2 1
5 C 5 2
9 C 9 3
```
I'm struggling to get to `col_c`. You can get to the proper grouping and sorting with `.sort_index(by=['col_a', 'col_b'])`, it's now a matter of getting to that new column and labeling each row. | There's [cumcount](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.cumcount.html), for precisely this case:
```
g = df.groupby('col_a')
df['col_c'] = g.cumcount()
```
As it says in the docs:
> Number each item in each group from 0 to the length of that group - 1.
---
*Original answer (before cumcount was defined).*
You could create a helper function to do this:
```
def add_col_c(x):
x['col_c'] = np.arange(len(x))
return x
```
First sort by column col\_a:
```
In [11]: df.sort('col_a', inplace=True)
```
then apply this function across each group:
```
In [12]: g = df.groupby('col_a', as_index=False)
In [13]: g.apply(add_col_c)
Out[13]:
col_a col_b col_c
3 A 3 0
8 A 8 1
0 A 0 2
4 A 4 3
6 B 6 0
1 B 1 1
7 B 7 2
9 C 9 0
2 C 2 1
5 C 5 2
```
*In order to get `1,2,...` you could use `np.arange(1, len(x) + 1)`.* | The given answers both involve calling a Python function for each group, and if you have many groups a vectorized approach should be faster (I haven't checked).
Here is my pure numpy suggestion:
```
In [5]: df.sort(['col_a', 'col_b'], inplace=True, ascending=(False, False))
In [6]: sizes = df.groupby('col_a', sort=False).size().values
In [7]: df['col_c'] = np.arange(sizes.sum()) - np.repeat(sizes.cumsum() - sizes, sizes)
In [8]: print df
col_a col_b col_c
9 C 9 0
5 C 5 1
2 C 2 2
7 B 7 0
6 B 6 1
1 B 1 2
8 A 8 0
4 A 4 1
3 A 3 2
0 A 0 3
``` | Enumerate each row for each group in a DataFrame | [
"",
"python",
"pandas",
"dataframe",
"running-count",
""
] |
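What `cumcount` in the accepted answer above computes is simply a running 0-based index within each group. A pure-Python sketch of the same idea (the helper name is mine, not part of pandas):

```python
from collections import defaultdict

def running_group_index(keys):
    """0-based running index of each key within its own group."""
    seen = defaultdict(int)
    out = []
    for k in keys:
        out.append(seen[k])
        seen[k] += 1
    return out

col_a = ['A', 'B', 'C', 'A', 'A', 'C', 'B', 'B', 'A', 'C']
col_c = running_group_index(col_a)
```

Add 1 to every entry to get the 1-based numbering shown in the question.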
The deeper I dive into MySQL, the more lost I get, and now I'm completely lost.
So I have some tables:
MainTable
```
id|message|name
-----------------
1 |test |OP
2 |test2 |jim
3 |test3 |ted
```
Table1
```
id|likes
---------
2 | 1
3 | 0
```
Table2
```
id|likes
---------
2 | 1
```
Table3
```
id|likes
---------
1 | 1
2 | 1
3 | 0
```
What I want to do is get the total number of `likes` (where a like is equal to 1) for every id in one column, so I can count the total likes of a message (with its respective id).
So far I have managed to join my tables so that I get a `likes` column at the end:
```
SELECT id,Table1.likes,Table2.likes,Table3.likes
FROM MainTable
LEFT JOIN Table1.id ON MainTable.id = Table1.id LEFT JOIN Table2.id ON MainTable.id = Table2.id LEFT JOIN Table3.id ON MainTable.id = Table3.id
```
First of all, is it possible? I know my code is not great, but at least it's a start!
Thanks! | I am guessing you are looking for one of these...
```
SELECT
id
,SUM(Table1.likes)
,SUM(Table2.likes)
,SUM(Table3.likes)
FROM MainTable
LEFT JOIN Table1 ON MainTable.id = Table1.id
LEFT JOIN Table2 ON MainTable.id = Table2.id
LEFT JOIN Table3 ON MainTable.id = Table3.id
GROUP BY MainTable.id;

SELECT
id
,SUM(Table1.likes)+SUM(Table2.likes)+SUM(Table3.likes)
FROM MainTable
LEFT JOIN Table1 ON MainTable.id = Table1.id
LEFT JOIN Table2 ON MainTable.id = Table2.id
LEFT JOIN Table3 ON MainTable.id = Table3.id
GROUP BY MainTable.id
``` | Here is the query, counting non-zero likes for every row in MainTable:
```
SELECT
MainTable.id,
MainTable.name,
MainTable.message,
COUNT(Table1.likes) + COUNT(Table2.likes)
+ COUNT(Table3.likes) AS n_likes
FROM
MainTable
LEFT JOIN
Table1 ON MainTable.id = Table1.id
AND
Table1.likes=1
LEFT JOIN
Table2 ON MainTable.id = Table2.id
AND
Table2.likes=1
LEFT JOIN
Table3
ON
MainTable.id = Table3.id
AND
Table3.likes=1
GROUP BY
MainTable.id;
```
[fiddle](http://www.sqlfiddle.com/#!2/a4e6e/1)
Beware, you have an error in your SQL syntax:
```
LEFT JOIN Table1.id
```
you must write the table name, without the column, when joining:
```
LEFT JOIN Table1
``` | SQL get the sum of entries from join tables | [
"",
"mysql",
"sql",
"join",
""
] |
Working on a webpage, I used the following line:
```
Model.select(:column).where("column IS NOT NULL")
```
I was wondering if there was a more Rails-ish way to do this, like using a hash for example
```
Model.select(:column).where(column: !nil)
``` | The [Squeel](https://github.com/ernie/squeel) gem will allow you to use != nil type syntax, but natively rails will not.
Example: `Model.where{column != nil}` | I would prefer to use a scope, as it's more readable and more manageable later (e.g. when merging with other scopes):
```
class Model < ActiveRecord::Base
scope :not_null, lambda { |column|
{:select => column,
:conditions => "#{column} NOT NULL"
}
}
end
```
then use
```
Model.not_null("column_name")
``` | Rails - Simplify WHERE condition IS NOT NULL | [
"",
"sql",
"ruby-on-rails",
"activerecord",
""
] |
I'm selecting data from two different rows in the same table using a single SQL statement.
```
"id" "borrowMax" "holder" "category" "country"
"1" "2" "0" "3" "US"
"2" "0" "1" "10" "US"
```
What I'm trying to do works out to this.
```
select id, holder from mytable where id = 2
select borrowMax from mytable where id = (
holder from the above select, in this case it's 1
) and category = 3
```
**The way I do it after looking at examples online is**
```
SELECT col1.id, col1.holder, col2.borrowMax
FROM collection_db col1
JOIN collection_db col2
ON col2.holder = col1.id
WHERE col1.id = 2 //Here, 2 is the value supplied by me
AND col2.category = 3
```
Sure, this works. But since it's something I pieced together myself, I have my doubts. How would `you` do something like this? Am I on the right track? (I'm sure I'm not). | I would make use of nested select statements for a use case like yours. There is no JOIN operation being used, just a select query over an already filtered set of results, and the code is logically more coherent.
```
SELECT borrowmax, holder, id FROM mytable WHERE
id = (SELECT holder FROM mytable WHERE id = 2 )
AND category = 3
``` | You can also use table alias for this.
```
select t1.id, t1.holder, t2.borrowMax from mytable t1, mytable t2 where t1.id = t2.holder
``` | Selecting data from different rows in the same table with a single statement | [
"",
"mysql",
"sql",
""
] |
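The nested-select approach from the accepted answer above can be checked quickly with SQLite's in-memory database (a sketch built from the question's two sample rows):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE mytable(id INTEGER, borrowMax INTEGER, holder INTEGER,
                         category INTEGER, country TEXT);
    INSERT INTO mytable VALUES (1, 2, 0, 3, 'US'), (2, 0, 1, 10, 'US');
""")
row = con.execute("""
    SELECT borrowMax, holder, id FROM mytable
    WHERE id = (SELECT holder FROM mytable WHERE id = 2)
      AND category = 3
""").fetchone()
# the subquery yields holder = 1, so the outer query returns that row
```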
My Payment Schedule holds a row for a payment schedule that gets run on a specific day, based on an 'effective date'.
```
CREATE TABLE [dbo].[PaymentSchedule] (
[PaymentScheduleId] INT IDENTITY (1, 1) NOT NULL,
[EffectiveDate] DATE NOT NULL,
[EffectiveDays] INT NOT NULL,
CONSTRAINT [pk_PaymentSchedule] PRIMARY KEY CLUSTERED ([PaymentScheduleId] ASC)
);
```
So, if the `effectivedate` is '01-JAN-2013', and the '`EffectiveDays`' is 7, then payment get made on the 1st of January, and then every 7 days after that. So, on the 8th of January, a payment must be made. On the 15th, a payment must be made.. etc etc.
If the `effectivedate` was '01-JAN-2013', and the `EffectiveDays` was 20, then the first payment is the 1st of Jan, the next payment day is the 21st of Jan, and the next after that would be the 10th of Feb, 2013, etc.
What I am trying to do, is make a function that uses the above table, or a stored proc for that matter, that returns 'Next Payment Date', and takes in a DATE type. So, based on the date passed in, what is the next payment date? And also, 'Is today a payment date'.
Can this be done efficiently? In 7 years' time, would I be able to tell if a date is a payment day, for example? | Your description of the problem is wrong. If the first payment is on Jan 1, the subsequent payments would be on Jan 8, Jan 15, and so on.
The answer to your question about the current date is `datediff()` along with the modulus operator. To see if today is a payment date, take the difference and see if it is an exact multiple of the period you are looking at:
```
select getdate()
from PaymentSchedule ps
where datediff(day, ps.EffectiveDate, getdate()) % ps.EffectiveDays = 0;
```
The `%` is the modulus operator that takes the remainder between two values. So, `3%2` is 1 and `10%5` is 0.
For the next date, the answer is similar:
```
select dateadd(day,
ps.EffectiveDays - datediff(day, ps.EffectiveDate, today) % ps.EffectiveDays,
today) as NextDate
from PaymentSchedule ps cross join
(select cast(getdate() as date) as today) const
```
I've structured this as a subquery that defines the current date as `today`. This makes it easier to substitute in any other date that you might want. | You can use the DATEDIFF(datepart, startdate, enddate) function with "dayofyear" as the datepart. Its result gives you the number of days between the two dates; take this result modulo (%) EffectiveDays, and if the remainder is 0, it is a payment day. If not, you have the number of days that have passed since the last payment day (subtract it from EffectiveDays to get the number of days remaining until the next payment day).
Here is some doc for the DATEDIFF method:
<http://msdn.microsoft.com/en-us/library/ms189794.aspx> | Table driven payment schedule | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
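The modulus arithmetic from the accepted answer above translates directly to Python's `datetime`, which is handy for sanity-checking the schedule logic over long spans (the helper names here are mine):

```python
from datetime import date, timedelta

def is_payment_day(effective_date, effective_days, day):
    # a payment day is an exact multiple of the period after the effective date
    return (day - effective_date).days % effective_days == 0

def next_payment_day(effective_date, effective_days, day):
    # days already elapsed within the current period
    remainder = (day - effective_date).days % effective_days
    return day + timedelta(days=effective_days - remainder)

start = date(2013, 1, 1)
```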
I have a question about dealing with m2m / through models and their presentation in django rest framework. Let's take a classic example:
models.py:
```
from django.db import models
class Member(models.Model):
name = models.CharField(max_length = 20)
groups = models.ManyToManyField('Group', through = 'Membership')
class Group(models.Model):
name = models.CharField(max_length = 20)
class Membership(models.Model):
member = models.ForeignKey('Member')
group = models.ForeignKey('Group')
join_date = models.DateTimeField()
```
serializers.py:
```
imports...
class MemberSerializer(ModelSerializer):
class Meta:
model = Member
class GroupSerializer(ModelSerializer):
class Meta:
model = Group
```
views.py:
```
imports...
class MemberViewSet(ModelViewSet):
queryset = Member.objects.all()
serializer_class = MemberSerializer
class GroupViewSet(ModelViewSet):
queryset = Group.objects.all()
serializer_class = GroupSerializer
```
When GETing an instance of Member, I successfully receive all of the member's fields and also its groups - however, I only get the groups' details, without the extra details that come from the Membership model.
In other words I **expect** to receive:
```
{
'id' : 2,
'name' : 'some member',
'groups' : [
{
'id' : 55,
'name' : 'group 1'
'join_date' : 34151564
},
{
'id' : 56,
'name' : 'group 2'
'join_date' : 11200299
}
]
}
```
Note the **join\_date**.
I have tried oh so many solutions, including of course the [Django REST framework official page about it](http://www.django-rest-framework.org/api-guide/relations/#manytomanyfields-with-a-through-model), and none seems to give a proper plain answer about it. What do I need to do to include these extra fields? I found it more straightforward with django-tastypie but had some other problems and prefer rest-framework. | How about.....
On your MemberSerializer, define a field on it like:
```
groups = MembershipSerializer(source='membership_set', many=True)
```
and then on your membership serializer you can create this:
```
class MembershipSerializer(serializers.HyperlinkedModelSerializer):
id = serializers.Field(source='group.id')
name = serializers.Field(source='group.name')
class Meta:
model = Membership
fields = ('id', 'name', 'join_date', )
```
That has the overall effect of creating a serialized value, groups, that has as its source the membership you want, and then it uses a custom serializer to pull out the bits you want to display.
EDIT: as commented by @bryanph, `serializers.field` was renamed to `serializers.ReadOnlyField` in DRF 3.0, so this should read:
```
class MembershipSerializer(serializers.HyperlinkedModelSerializer):
id = serializers.ReadOnlyField(source='group.id')
name = serializers.ReadOnlyField(source='group.name')
class Meta:
model = Membership
fields = ('id', 'name', 'join_date', )
```
for any modern implementations | I was facing this problem and my solution (using DRF 3.6) was to use SerializerMethodField on the object and explicitly query the Membership table like so:
```
class MembershipSerializer(serializers.ModelSerializer):
"""Used as a nested serializer by MemberSerializer"""
class Meta:
model = Membership
fields = ('id','group','join_date')
class MemberSerializer(serializers.ModelSerializer):
groups = serializers.SerializerMethodField()
class Meta:
model = Member
fields = ('id','name','groups')
def get_groups(self, obj):
"obj is a Member instance. Returns list of dicts"""
qset = Membership.objects.filter(member=obj)
return [MembershipSerializer(m).data for m in qset]
```
This will return a list of dicts for the groups key where each dict is serialized from the MembershipSerializer. To make it writable, you can define your own create/update method inside the MemberSerializer where you iterate over the input data and explicitly create or update Membership model instances. | Include intermediary (through model) in responses in Django Rest Framework | [
"",
"python",
"django",
"django-rest-framework",
""
] |
I want to search for and count the number of times a string comes up in a web scrape. However, I want to search between x and y within the scraped text.
Can anyone tell me the easiest method to count SEA BASS between MAIN FISHERMAN and SECONDARY FISHERMAN in the following example web scrape?
```
<p style="color: #555555;
font-family: Arial,Helvetica,sans-serif;
font-size: 12px;
line-height: 18px;">June 21, 2013 By FISH PPL Admin </small>
</div>
<!-- Post Body Copy -->
<div class="post-bodycopy clearfix"><p>MAIN FISHERMAN – </p>
<p><strong>CHAMP</strong> – Pedro 00777<br />
BAIT – LOCATION1 – 2:30 – SEA BASS (3 LBS 11/4)<br />
MULTI – LOCATION2 – 7:30 – COD (3 LBS 13/8)<br />
LURE – LOCATION5 – 3:20 – RUDD (2 LBS 6/1)</p>
<p>JOE BLOGGS <a href="url">url</a><br />
BAIT – LOCATION4 – 4:45 – ROACH (5 LBS 3/1)<br />
MULTI – LOCATION2 – 5:50 – PERCH (3 LBS 6/1)<br />
LURE – LOCATION1 – 3:45 – PIKE (2 LBS 5/1) </p>
BAIT – LOCATION1 – 2:30 – SEA BASS (3 LBS 11/4)<br />
MULTI – LOCATION1 – 3:45 – JUST THE JUDGE (3 LBS 3/1)<br />
LURE – LOCATION3 – 8:25 – SCHOOL FEES (2 LBS 7/1)</p>
<div class="post-bodycopy clearfix"><p>SECONDARY FISHERMAN – </p>
<p><strong>SPOON – <a href="url">url</a></strong><br />
BAIT – LOCATION1 – 2:30 – SEA BASS (3 LBS 11/4)<br />
MULTI – LOCATION2 – 7:30 – COD (3 LBS 7/4)<br />
LURE – LOCATION1 – 4:25 – TROUT (2 LBS 5/1)</p>
```
I attempted to use the following code to achieve this, but to no avail.
```
html = website.read()
pattern_to_exclude_unwanted_data = re.compile('MAIN FISHERMAN(.*)SECONDARY FISHERMAN')
excluding_unwanted_data = re.findall(pattern_to_exclude_unwanted_data, html)
print excluding_unwanted_data("SEA BASS")
``` | Do it in two steps:
1. Extract the substring between MAIN FISHERMAN and SECONDARY FISHERMAN.
2. Count SEA BASS
Like this:
```
relevant = re.search(r"MAIN FISHERMAN(.*)SECONDARY FISHERMAN", html, re.DOTALL).group(1)
found = relevant.count("SEA BASS")
``` | If you want to use `'MAIN FISHERMAN'` and `'SECONDARY FISHERMAN'` as markers to find `<div>` elements to count `'SEA BASS'` within:
```
import re
from bs4 import BeautifulSoup # $ pip install beautifulsoup4
soup = BeautifulSoup(html)
inbetween = False
count = 0
for div in soup.find_all('div', ["post-bodycopy", "clearfix"]):
if not inbetween:
inbetween = div.find(text=re.compile('MAIN FISHERMAN')) # check start
else: # inbetween
inbetween = not div.find(text=re.compile('SECONDARY FISHERMAN')) # end
if inbetween:
count += len(div.find_all(text=re.compile('SEA BASS')))
print(count)
``` | Restricting the area of text that is searched by python | [
"",
"python",
"regex",
"python-2.7",
"web-scraping",
"urllib",
""
] |
I have a very large dataset where I want to replace strings with numbers. I would like to operate on the dataset without typing a mapping function for each key (column), similar to the fillna method but replacing a specific string with an associated value.
Is there any way to do this?
Here is an example of my dataset
```
data
resp A B C
0 1 poor poor good
1 2 good poor good
2 3 very good very good very good
3 4 bad poor bad
4 5 very bad very bad very bad
5 6 poor good very bad
6 7 good good good
7 8 very good very good very good
8 9 bad bad very bad
9 10 very bad very bad very bad
```
The desired result:
```
data
resp A B C
0 1 3 3 4
1 2 4 3 4
2 3 5 5 5
3 4 2 3 2
4 5 1 1 1
5 6 3 4 1
6 7 4 4 4
7 8 5 5 5
8 9 2 2 1
9 10 1 1 1
```
very bad=1, bad=2, poor=3, good=4, very good=5
//Jonas | Use [replace](http://pandas.pydata.org/pandas-docs/dev/generated/pandas.DataFrame.replace.html)
```
In [126]: df.replace(['very bad', 'bad', 'poor', 'good', 'very good'],
[1, 2, 3, 4, 5])
Out[126]:
resp A B C
0 1 3 3 4
1 2 4 3 4
2 3 5 5 5
3 4 2 3 2
4 5 1 1 1
5 6 3 4 1
6 7 4 4 4
7 8 5 5 5
8 9 2 2 1
9 10 1 1 1
``` | If `data` is your pandas `DataFrame`, you can also use:
```
data.replace({'very bad': 1, 'bad': 2, 'poor': 3, 'good': 4, 'very good': 5}, inplace=True)
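
# Hedged plain-Python sketch of the same mapping (no pandas), applied
# cell by cell; dict.get leaves non-matching cells (like resp) untouched:
scale = {'very bad': 1, 'bad': 2, 'poor': 3, 'good': 4, 'very good': 5}
rows = [[1, 'poor', 'poor', 'good'], [2, 'good', 'poor', 'good']]
print([[scale.get(c, c) for c in row] for row in rows])  # [[1, 3, 3, 4], [2, 4, 3, 4]]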
``` | Replace string/value in entire DataFrame | [
"",
"python",
"replace",
"dataframe",
"pandas",
""
] |
I have tried sequence generators like lambda, list comprehensions, and others, but it seems I am not able to get what I really want. My final goal is to print sequences of words from a string, like *string[1:3]*.
What I am looking for:
```
a = [0,13,26,39]
b = [12,25,38,51]
str = 'If you are done with the file, move to the command area across from the file name in the RL screen and type'
read = str.split()
read[0:12]
['If', 'you', 'are', 'done', 'with', 'the', 'file,', 'move', 'to', 'the', 'command', 'area']
read[13:25]
['from', 'the', 'file', 'name', 'in', 'the', 'RL', 'screen', 'and', 'type']
``` | Use `zip`:
```
>>> a = [0,13,26,39]
>>> b = [12,25,38,51]
>>> strs = 'If you are done with the file, move to the command area across from the file name in the RL screen and type'
>>> spl = strs.split()
>>> for x,y in zip(a,b):
... print spl[x:y]
...
['If', 'you', 'are', 'done', 'with', 'the', 'file,', 'move', 'to', 'the', 'command', 'area']
['from', 'the', 'file', 'name', 'in', 'the', 'RL', 'screen', 'and', 'type']
[]
[]
```
`zip` returns a list of tuples, where each tuple contains the items at the same index from the iterables passed to it:
```
>>> zip(a,b)
[(0, 12), (13, 25), (26, 38), (39, 51)]
```
Use `itertools.izip` if you want a memory-efficient solution, as it returns an iterator.
You can use `str.join` if you want to create a string from that sliced list:
```
for x,y in zip(a,b):
print " ".join(spl[x:y])
...
If you are done with the file, move to the command area
from the file name in the RL screen and type
```
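If the index lists always advance in steps of 13 with 12 words kept per slice, a hedged alternative is to slice the word list directly, without maintaining `a` and `b` at all (a sketch of the idea, not the asker's required form):

```python
# Chunk the split words directly: step forward 13 indices, keep 12 per chunk.
strs = 'If you are done with the file, move to the command area across from the file name in the RL screen and type'
spl = strs.split()

chunks = [spl[i:i + 12] for i in range(0, len(spl), 13)]
print(chunks[1])  # ['from', 'the', 'file', 'name', 'in', 'the', 'RL', 'screen', 'and', 'type']
```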
**Update:** Creating `a` and `b`:
```
>>> n = 5
>>> a = range(0, 13*n, 13)
>>> b = [ x + 12 for x in a]
>>> a
[0, 13, 26, 39, 52]
>>> b
[12, 25, 38, 51, 64]
``` | Do you mean:
```
>>> [read[i:j] for i, j in zip(a,b)]
[['If', 'you', 'are', 'done', 'with', 'the', 'file,', 'move', 'to', 'the',
'command', 'area'], ['from', 'the', 'file', 'name', 'in', 'the', 'RL',
'screen', 'and', 'type'], [], []]
```
or
```
>>> ' '.join([read[i:j] for i, j in zip(a,b)][0])
'If you are done with the file, move to the command area'
>>> ' '.join([read[i:j] for i, j in zip(a,b)][1])
'from the file name in the RL screen and type'
``` | Sequence Generation with Number applied to string | [
"",
"python",
"string",
"python-2.7",
""
] |
I've declared a list of tuples that I would like to manipulate. I have a function that returns an option from the user. I would like to see if the user has entered any one of the keys 'A', 'W', 'K'. With a dictionary, I would say this: `while option not in author.items() option = get_option()`. How can I accomplish this with a list of tuples?
```
authors = [('A', "Aho"), ('W', "Weinberger"), ('K', "Kernighan")]
``` | ```
authors = [('A', "Aho"), ('W', "Weinberger"), ('K', "Kernighan")]
option = get_option()
while option not in (x[0] for x in authors):
option = get_option()
```
How this works:
`(x[0] for x in authors)` is a generator expression: it yields the `[0]th` element of each tuple from the authors list one by one, and each element is matched against `option`. As soon as a match is found, it short-circuits and exits.
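Since the first element of each tuple is unique here, a hedged alternative is to convert the pair list to a dict once, which makes each membership test O(1) (a sketch, not something the question requires):

```python
authors = [('A', "Aho"), ('W', "Weinberger"), ('K', "Kernighan")]

# dict(authors) maps each key letter to its author name.
lookup = dict(authors)

print('W' in lookup)  # True
print('X' in lookup)  # False
```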
Generator expressions yield one item at a time, so they are memory efficient. | How about something like
```
option in zip(*authors)[0]
```
We are using [`zip`](http://docs.python.org/2/library/functions.html#zip) to essentially separate the letters from the words. Nevertheless, since we are dealing with a *list* of tuples, we must [unpack](http://docs.python.org/2/tutorial/controlflow.html#unpacking-argument-lists) it using `*`:
```
>>> zip(*authors)
[('A', 'W', 'K'), ('Aho', 'Weinberger', 'Kernighan')]
>>> zip(*authors)[0]
('A', 'W', 'K')
```
Then we simply use `option in` to test if option is contained in `zip(*authors)[0]`. | Python - input from list of tuples | [
"",
"python",
""
] |
How can I check whether another record with the same unique ID (the column name is `ID`) already exists in the table, and if one exists do nothing, otherwise insert the record?
Do I need to index the `ID` column and set it as unique/primary key? | One option is to use `INSERT IGNORE`. As the name implies it will INSERT rows that do not exist, and it will do nothing with duplicate rows.
```
CREATE TABLE Table1 (
`id` int NOT NULL PRIMARY KEY,
`foo` varchar(20)
);
INSERT IGNORE INTO Table1
(`id`, `foo`)
VALUES
(1, 'a'),
(2, 'a'),
(3, 'a'),
(4, 'a'),
(1, 'a'),
(2, 'a'),
(1, 'a');
```
The above will only insert the first 4 rows. The final 3 rows are not inserted because they have duplicate keys.
[SQL Fiddle Demo](http://sqlfiddle.com/#!2/d0d00/1)
`INSERT IGNORE` requires that you have either a primary key or unique index.
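For a quick local illustration of the same idea, SQLite's `INSERT OR IGNORE` behaves analogously and can be tried from Python's standard library (a sketch of the concept, not MySQL itself):

```python
import sqlite3

# In-memory database; INSERT OR IGNORE is SQLite's analogue of MySQL's
# INSERT IGNORE and likewise needs a PRIMARY KEY or UNIQUE index on id.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, foo TEXT)")

rows = [(1, 'a'), (2, 'a'), (3, 'a'), (4, 'a'), (1, 'a'), (2, 'a'), (1, 'a')]
conn.executemany("INSERT OR IGNORE INTO t (id, foo) VALUES (?, ?)", rows)

# Duplicate ids are silently skipped, leaving 4 rows.
print(conn.execute("SELECT COUNT(*) FROM t").fetchone()[0])  # 4
```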
More info on `INSERT IGNORE` (as well as other ways to handle duplicates) here: [MySQL Handling Duplicates](http://www.tutorialspoint.com/mysql/mysql-handling-duplicates.htm) | You may use `REPLACE` as `INSERT OR UPDATE` for that | Check if a record with the same unique ID exists in SQL table before insert | [
"",
"mysql",
"sql",
""
] |
I am trying to write a program that will write a list of info to a text file. Here's an example of what I have so far:
```
f = open('blah.txt','w')
x = input('put something here')
y = input('put something here')
z = input('put something here')
info = [x,y,z]
a = info[0]
b = info[1]
c = info[2]
f.write(a)
f.write(b)
f.write(c)
f.close()
```
However i need it to write it in a list-like format so that if I input
```
x = 1 y = 2 z = 3
```
then the file will read
```
1,2,3
```
and so that the next time I input info it will write it in a newline like
```
1,2,3
4,5,6
```
How can I fix this? | Format a string and write it:
```
s = ','.join(info)
f.write(s + '\n')
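
# Hedged end-to-end sketch: append one comma-joined line per batch of
# inputs (the temporary file path here is illustrative only):
import os, tempfile
path = os.path.join(tempfile.mkdtemp(), 'blah.txt')
for batch in (['1', '2', '3'], ['4', '5', '6']):
    with open(path, 'a') as out:
        out.write(','.join(batch) + '\n')
print(open(path).read())  # 1,2,3 then 4,5,6, each on its own line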
``` | Try this:
```
f = open('blah.txt','a') # append mode, so each run adds new lines to the same file
x = input('put something here')
y = input('put something here')
z = input('put something here')
f.write('%s,%s,%s\n' %(x,y,z)) # input() returns strings, so use %s rather than %d
f.close()
``` | Python write to file in list format | [
"",
"python",
"python-3.x",
""
] |