Columns: Prompt (string), Chosen (string), Rejected (string), Title (string), Tags (list of strings)
I am having difficulties creating a query. Let's say I have a Products table and a Brands table. I can add a foreign key using this command:

```
ALTER TABLE Products
ADD FOREIGN KEY (BrandID) REFERENCES Brands(ID)
```

But I need to run this command only if the foreign key does not exist. Similarly, I need to drop a foreign key constraint if it exists, without using its name.
First of all, you should always name your FKs and all other constraints in order to save yourself trouble like this. But if you don't know the name of the FK, you can check for it using multiple system views:

```
IF NOT EXISTS (
    SELECT *
    FROM sys.foreign_key_columns fk
    INNER JOIN sys.columns pc
        ON pc.object_id = fk.parent_object_id
        AND pc.column_id = fk.parent_column_id
    INNER JOIN sys.columns rc
        ON rc.object_id = fk.referenced_object_id
        AND rc.column_id = fk.referenced_column_id
    WHERE fk.parent_object_id = object_id('Products')
        AND pc.name = 'BrandID'
        AND fk.referenced_object_id = object_id('Brands')
        AND rc.name = 'ID'
)
ALTER TABLE Products
ADD CONSTRAINT Your_New_FK_NAME FOREIGN KEY (BrandID) REFERENCES Brands(ID)
```
Try this:

```
IF NOT EXISTS (SELECT *
               FROM sys.objects o
               WHERE o.object_id = object_id(N'[dbo].[FK_Products_Brands]')
                 AND OBJECTPROPERTY(o.object_id, N'IsForeignKey') = 1)
BEGIN
    ALTER TABLE [dbo].[Products] WITH CHECK
    ADD CONSTRAINT [FK_Products_Brands] FOREIGN KEY([BrandID])
    REFERENCES [dbo].[Brands] ([Id])
END
```
If Foreign Key Not Exist Then Add Foreign Key Constraint(Or Drop a Foreign Key Constraint If Exist) without using Name?
[ "sql", "sql-server-2008" ]
As for the question mentioned, I can't find any code which can fix the GUI window and the positions of all the labels, buttons etc.

```
import tkinter
import tkinter.messagebox

class Menu:
    def __init__(self):
        self.main = tkinter.Tk(height = 200, width = 400)
        self.position = tkinter.Label(self.main, text = '123',
                                      bg = 'Purple',
                                      height = 2, width = 8)
        self.position.place(height=50, width= 100)
        self.position.pack()
        tkinter.mainloop()

gui = Menu()
```

For this, I can only do the sizing of the label, not the position and the size of the main window. It gives this error:

```
Traceback (most recent call last):
  File "C:\Python33\Saves\Label position.py", line 18, in <module>
    gui = Menu()
  File "C:\Python33\Saves\Label position.py", line 7, in __init__
    self.main = tkinter.Tk(height = 200, width = 400)
TypeError: __init__() got an unexpected keyword argument 'height'
```
It looks like you cannot set the width and height of the Tk element in the constructor. However, you can use the `geometry` method:

```
self.main = tkinter.Tk()
self.main.geometry("400x200")
```
Use the `minsize` and `maxsize` methods to set the size of the window. The following code will make a fixed-size window. Of course, you can skip one of them to give your user the option to resize the window in any one direction.

```
top = tkinter.Tk()
top.minsize(width=300, height=300)
top.maxsize(width=300, height=300)
```
How to make a Python 3.3 GUI window a fixed size?
[ "python", "user-interface", "tkinter", "python-3.3" ]
I am working on a Django application that will have two types of users: Admins and Users. Both are groups in my project, and depending on which group the individual logging in belongs to, I'd like to redirect them to separate pages. Right now I have this in my settings.py:

```
LOGIN_REDIRECT_URL = 'admin_list'
```

This redirects all users who sign in to 'admin\_list', but the view is only accessible to members of the Admins group -- otherwise it returns a 403. As for the login view itself, I'm just using the one Django provides. I've added this to my main urls.py file to use these views:

```
url(r'^accounts/', include('django.contrib.auth.urls')),
```

How can I make it so that only members of the Admins group are redirected to this view, and everyone else is redirected to a different view?
Create a separate view that redirects users based on whether they are in the admins group.

```
from django.shortcuts import redirect

def login_success(request):
    """
    Redirects users based on whether they are in the admins group
    """
    if request.user.groups.filter(name="admins").exists():
        # user is an admin
        return redirect("admin_list")
    else:
        return redirect("other_view")
```

Add the view to your `urls.py`,

```
url(r'login_success/$', views.login_success, name='login_success')
```

then use it for your `LOGIN_REDIRECT_URL` setting.

```
LOGIN_REDIRECT_URL = 'login_success'
```
I use an intermediate view to accomplish the same thing:

```
LOGIN_REDIRECT_URL = "/wherenext/"
```

then in my urls.py:

```
(r'^wherenext/$', views.where_next),
```

then in the view:

```
@login_required
def wherenext(request):
    """Simple redirector to figure out where the user goes next."""
    if request.user.is_staff:
        return HttpResponseRedirect(reverse('admin-home'))
    else:
        return HttpResponseRedirect(reverse('user-home'))
```
Django -- Conditional Login Redirect
[ "python", "django", "django-admin", "django-views" ]
I have a model with the fields `word` and `definition` - a dictionary model. In the db I have, for example, these objects:

```
word     definition
-------------------------
Banana   Fruit
Apple    also Fruit
Coffee   drink
```

I want to make a query which gives me this, sorted alphabetically by word:

```
Apple - also Fruit
Banana - Fruit
Coffee - drink
```

This is my model:

```
class Wiki(models.Model):
    word = models.TextField()
    definition = models.TextField()
```

I want to do it in the view, not in the template. How is this possible in Django?
Given the model...

```
class Wiki(models.Model):
    word = models.TextField()
    definition = models.TextField()
```

...the code...

```
my_words = Wiki.objects.order_by('word')
```

...should return the records in the correct order.

However, you won't be able to create an index on the `word` field if the type is `TextField`, so sorting by `word` will take a long time if there are a lot of rows in your table. I'd suggest changing it to...

```
class Wiki(models.Model):
    word = models.CharField(max_length=255, unique=True)
    definition = models.TextField()
```

...which will not only create an index on the `word` column, but also ensure you can't define the same word twice.
Since you tagged your question Django, I will answer how to do it using Django entities. First, define your entity like:

```
class FruitWords(models.Model):
    word = models.StringField()
    definition = models.StringField()

    def __str__(self):
        return "%s - %s" % (self.word, self.definition)
```

To get the list:

```
for fruit in FruitWords.all_objects.order_by("word"):
    print str(fruit)
```
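A caveat on ordering worth keeping in mind: `order_by('word')` sorts by the database collation, which is often case-sensitive, so uppercase words can sort ahead of all lowercase ones. A plain-Python sketch of the difference; the case-insensitive variant mirrors Django's `order_by(Lower('word'))` (available from Django 1.8 via `django.db.models.functions.Lower`):

```python
words = ["Banana", "apple", "Coffee"]

# Byte-order sort: uppercase letters sort before lowercase, which is
# roughly what a case-sensitive database collation does.
print(sorted(words))                 # ['Banana', 'Coffee', 'apple']

# Case-insensitive sort, analogous to ordering by Lower('word').
print(sorted(words, key=str.lower))  # ['apple', 'Banana', 'Coffee']
```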
django - how to sort objects alphabetically by first letter of name field
[ "python", "django" ]
I found a couple of SQL [tasks](http://www.jitbit.com/news/181-jitbits-sql-interview-questions/) on Hacker News today, however I am stuck on solving the second task in Postgres, which I'll describe here: You have the following, simple table structure: ![enter image description here](https://i.stack.imgur.com/WXaMU.png) List the employees who have the biggest salary in their respective departments. I set up an SQL Fiddle [here](http://sqlfiddle.com/#!2/778bb) for you to play with. It should return Terry Robinson, Laura White. Along with their names it should have their salary and department name. Furthermore, I'd be curious to know of a query which would return Terry Robinsons (maximum salary from the Sales department) and Laura White (maximum salary in the Marketing department) and an empty row for the IT department, with `null` as the employee; explicitly stating that there are no employees (thus nobody with the highest salary) in that department.
### Return *one* employee with the highest salary per dept.

Use [`DISTINCT ON`](http://www.postgresql.org/docs/current/interactive/sql-select.html#SQL-DISTINCT) for a much simpler and faster query that does all you are asking for:

```
SELECT DISTINCT ON (d.id)
       d.id AS department_id, d.name AS department
      ,e.id AS employee_id, e.name AS employee, e.salary
FROM   departments d
LEFT   JOIN employees e ON e.department_id = d.id
ORDER  BY d.id, e.salary DESC;
```

[->SQLfiddle](http://sqlfiddle.com/#!12/49881/3) (for Postgres).

Also note the [`LEFT [OUTER] JOIN`](http://www.postgresql.org/docs/current/interactive/sql-select.html#SQL-FROM) that keeps departments with no employees in the result.

This picks only *one* employee per department. If there are multiple sharing the highest salary, you can add more `ORDER BY` items to pick one in particular; otherwise an arbitrary one is picked from the peers. If there are no employees, the department is still listed, with `NULL` values for the employee columns.

You can simply add any columns you need in the `SELECT` list. Find a detailed explanation, links and a benchmark for the technique in this related answer: [Select first row in each GROUP BY group?](https://stackoverflow.com/questions/3800551/select-first-row-in-each-group-by-group/7630564#7630564)

Aside: it is an anti-pattern to use non-descriptive column names like `name` or `id`. They should be `employee_id`, `employee`, etc.

### Return *all* employees with the highest salary per dept.

Use the window function `rank()` (like [@Scotch already posted](https://stackoverflow.com/a/16799701/939860), just simpler and faster):

```
SELECT d.name AS department, e.employee, e.salary
FROM   departments d
LEFT   JOIN (
   SELECT name AS employee, salary, department_id
         ,rank() OVER (PARTITION BY department_id ORDER BY salary DESC) AS rnk
   FROM   employees e
   ) e ON e.department_id = d.id AND e.rnk = 1;
```

Same result as with the above query for your example (which has no ties), just a bit slower.
This is with reference to your fiddle:

```
SELECT *  -- or whatever is your columns list.
FROM employees e
JOIN departments d ON e.Department_ID = d.id
WHERE (e.Department_ID, e.Salary) IN (SELECT Department_ID, MAX(Salary)
                                      FROM employees
                                      GROUP BY Department_ID)
```

**EDIT:** As mentioned in a comment below, if you want to see the IT department also, with all `NULL` for the employee records, you can use a `RIGHT JOIN` and put the filter condition in the joining clause itself, as follows:

```
SELECT e.name, e.salary, d.name  -- or whatever is your columns list.
FROM employees e
RIGHT JOIN departments d ON e.Department_ID = d.id
    AND (e.Department_ID, e.Salary) IN (SELECT Department_ID, MAX(Salary)
                                        FROM employees
                                        GROUP BY Department_ID)
```
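The greatest-value-per-group logic in both answers can also be sketched in plain Python, which makes the outer-join behaviour (empty departments yielding `NULL`) easy to see. The names are from the question; the salary figures here are invented for illustration:

```python
# Hypothetical data: (name, department, salary) -- salaries are made up.
employees = [
    ("Terry Robinson", "Sales", 120),
    ("Sam Hill", "Sales", 100),
    ("Laura White", "Marketing", 110),
]
departments = ["Sales", "Marketing", "IT"]

# One pass: keep the best-paid employee seen so far for each department.
best = {}
for name, dept, salary in employees:
    if dept not in best or salary > best[dept][1]:
        best[dept] = (name, salary)

# LEFT JOIN behaviour: departments with no employees appear with None.
result = {d: best.get(d) for d in departments}
print(result)
# {'Sales': ('Terry Robinson', 120), 'Marketing': ('Laura White', 110), 'IT': None}
```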
Employees with largest salary in department
[ "sql", "postgresql" ]
I installed redis this afternoon and it caused a few errors, so I uninstalled it, but this error persists when I launch the app with `foreman start`. Any ideas on a fix?

```
foreman start
22:46:26 web.1 | started with pid 1727
22:46:26 web.1 | 2013-05-25 22:46:26 [1727] [INFO] Starting gunicorn 0.17.4
22:46:26 web.1 | 2013-05-25 22:46:26 [1727] [ERROR] Connection in use: ('0.0.0.0', 5000)
```
Check your processes. You may have had an unclean exit, leaving a zombied process behind that's still running and holding the port.
Just type:

```
sudo fuser -k 5000/tcp
```

This will kill all processes associated with port 5000.
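For context, the gunicorn error boils down to the OS refusing to bind a port that another process still holds. A minimal, self-contained reproduction in Python:

```python
import errno
import socket

s1 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s1.bind(("127.0.0.1", 0))         # port 0: let the OS pick a free port
port = s1.getsockname()[1]
s1.listen(1)                      # s1 now plays the leftover gunicorn

s2 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
in_use = False
try:
    s2.bind(("127.0.0.1", port))  # same port, still held -> fails
except OSError as e:
    in_use = (e.errno == errno.EADDRINUSE)
finally:
    s1.close()
    s2.close()

print(in_use)  # True
```

A server that wants to rebind quickly after a restart can call `s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)` before binding.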
Gunicorn Connection in Use: ('0.0.0.0', 5000)
[ "python", "django", "heroku", "gunicorn", "foreman" ]
Suppose you have a table person(**ID**, ....., n\_success, n\_fails) like:

```
ID   n_success   n_fails
a1      10         20
a2      15         10
a3      10          1
```

I want to make a query that will return the ID of the person with the maximum n\_success/(n\_success+n\_fails). For example, in this case the output I'd like to get is:

```
a3 0.9090909091
```

I've tried:

```
select ID, (N_succes/(n_success + n_fails)) 'rate'
from person
```

With this query I get each ID with its success rate.

```
select ID, MAX(N_succes/(n_success + n_fails)) 'rate'
from person
```

With this query I get just one row with the correct rate but an incorrect ID. How can I do this?
MS SQL

```
SELECT TOP 1 ID, (n_success / (n_success + n_fails)) AS Rate
FROM persona
ORDER BY (n_success / (n_success + n_fails)) DESC
```

MySQL

```
SELECT `ID`, (`n_success` / (`n_success` + `n_fails`)) AS Rate
FROM `persona`
ORDER BY (`n_success` / (`n_success` + `n_fails`)) DESC
LIMIT 1
```
It depends on your dialect of SQL, but in T-SQL it would be:

```
SELECT TOP 1
    p.ID,
    p.n_success / (p.n_success + p.n_fails) AS Rate
FROM persona p
ORDER BY p.n_success / (p.n_success + p.n_fails) DESC
```

You can vary as necessary for other dialects (use `LIMIT 1` for MySQL and SQLite, for example).
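One pitfall both answers inherit from the question: on integer columns, `n_success / (n_success + n_fails)` is integer division in many dialects (including T-SQL and SQLite) and truncates to 0. Forcing a float fixes it. A runnable sketch using Python's built-in sqlite3 in place of a real server, with the data from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person (ID TEXT, n_success INTEGER, n_fails INTEGER)")
conn.executemany("INSERT INTO person VALUES (?, ?, ?)",
                 [("a1", 10, 20), ("a2", 15, 10), ("a3", 10, 1)])

# 1.0 * forces floating-point division; with plain integer columns,
# n_success / (n_success + n_fails) would truncate every rate to 0.
row = conn.execute("""
    SELECT ID, 1.0 * n_success / (n_success + n_fails) AS rate
    FROM person
    ORDER BY rate DESC
    LIMIT 1
""").fetchone()
print(row)  # ('a3', 0.9090909090909091)
```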
find max value in a table with his relative ID
[ "mysql", "sql" ]
I have a backup directory structure like this (all directories are non-empty):

```
/home/backups/mysql/
    2012/
        12/
            15/
    2013/
        04/
            29/
            30/
        05/
            02/
            03/
            04/
            05/
```

I want to get a list of all directories containing the backups, by providing only a root directory path:

```
get_all_backup_paths('/home/backups/mysql', level=3)
```

This should return:

```
/home/backups/mysql/2012/12/15
/home/backups/mysql/2013/04/29
/home/backups/mysql/2013/04/30
/home/backups/mysql/2013/05/02
/home/backups/mysql/2013/05/03
/home/backups/mysql/2013/05/04
/home/backups/mysql/2013/05/05
```

And those ***should not*** be present in the result:

```
/home/backups/mysql/2012
/home/backups/mysql/2012/12
/home/backups/mysql/2013
/home/backups/mysql/2013/04
/home/backups/mysql/2013/05
```

Is there any way to `os.listdir` 3 levels down, excluding all the other levels? If yes, I would appreciate your help.

```
def get_all_backup_paths(dir, level):
    # ????
```
```
from glob import iglob

level3 = iglob('/home/backups/mysql/*/*/*')
```

(This will skip "hidden" directories with names starting with `.`)

If there may be non-directories at level 3, skip them using:

```
from itertools import ifilter
import os.path

l3_dirs = ifilter(os.path.isdir, level3)
```

In Python 3, use `filter` instead of `ifilter`.
You can use `glob` to search down a directory tree, like this:

```
import os, glob

def get_all_backup_paths(dir, level):
    pattern = dir + level * '/*'
    return [d for d in glob.glob(pattern) if os.path.isdir(d)]
```

I included a check for directories as well, in case there might be files mixed in with the directories.
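The glob-pattern approach can be verified end-to-end with a throwaway directory tree. The directory names below are from the question; `note.txt` is a made-up stray file to show the `isdir` filter doing its job, and a `sorted` is added here for deterministic output (glob order is not guaranteed):

```python
import glob
import os
import tempfile

def get_all_backup_paths(dir, level):
    pattern = dir + level * "/*"
    # sorted() added for deterministic output; isdir drops stray files
    return sorted(d for d in glob.glob(pattern) if os.path.isdir(d))

root = tempfile.mkdtemp()
for path in ("2012/12/15", "2013/04/29", "2013/05/02"):
    os.makedirs(os.path.join(root, path))
open(os.path.join(root, "2012/12/note.txt"), "w").close()  # stray file at level 3

rel = [os.path.relpath(p, root) for p in get_all_backup_paths(root, 3)]
print(rel)  # ['2012/12/15', '2013/04/29', '2013/05/02']
```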
List all subdirectories on given level
[ "python" ]
I've started learning Python (Python 3.3) and I was trying out the `is` operator. I tried this:

```
>>> b = 'is it the space?'
>>> a = 'is it the space?'
>>> a is b
False
>>> c = 'isitthespace'
>>> d = 'isitthespace'
>>> c is d
True
>>> e = 'isitthespace?'
>>> f = 'isitthespace?'
>>> e is f
False
```

It seems like the space and the question mark make `is` behave differently. What's going on?

**EDIT:** I know I should be using `==`, I just wanted to know why `is` behaves like this.
*Warning: this answer is about the implementation details of a specific Python interpreter. Comparing strings with `is` == bad idea.*

Well, at least for cpython3.4/2.7.3, the answer is "no, it is not the whitespace". Not *only* the whitespace:

* Two string literals will share memory if they are either alphanumeric or reside in the same *block* (file, function, class or single interpreter command).
* An expression that evaluates to a string will result in an object that is identical to the one created using a string literal, if and only if it is created using constants and binary/unary operators, and the resulting string is shorter than 21 characters.
* Single characters are unique.

## Examples

Alphanumeric string literals always share memory:

```
>>> x='aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa'
>>> y='aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa'
>>> x is y
True
```

Non-alphanumeric string literals share memory if and only if they share the enclosing syntactic block:

(interpreter)

```
>>> x='`!@#$%^&*() \][=-. >:"?<a'; y='`!@#$%^&*() \][=-. >:"?<a';
>>> z='`!@#$%^&*() \][=-. >:"?<a';
>>> x is y
True
>>> x is z
False
```

(file)

```
x='`!@#$%^&*() \][=-. >:"?<a'; y='`!@#$%^&*() \][=-. >:"?<a';
z=(lambda : '`!@#$%^&*() \][=-. >:"?<a')()
print(x is y)
print(x is z)
```

Output: `True` and `False`

For simple binary operations, the compiler is doing very simple constant propagation (see [peephole.c](http://hg.python.org/cpython/file/0a7d237c0919/Python/peephole.c)), but with strings it does so only if the resulting string is shorter than 21 characters. If this is the case, the rules mentioned earlier are in force:

```
>>> 'a'*10+'a'*10 is 'a'*20
True
>>> 'a'*21 is 'a'*21
False
>>> 'aaaaaaaaaaaaaaaaaaaaa' is 'aaaaaaaa' + 'aaaaaaaaaaaaa'
False
>>> t=2; 'a'*t is 'aa'
False
>>> 'a'.__add__('a') is 'aa'
False
>>> x='a' ; x+='a'; x is 'aa'
False
```

Single characters always share memory, of course:

```
>>> chr(0x20) is ' '
True
```
To expand on Ignacio’s answer a bit: The `is` operator is the identity operator. It is used to compare *object* identity. If you construct two objects with the same contents, then it is usually not the case that the object identity yields true. It works for some small strings because CPython, the reference implementation of Python, stores the *contents* separately, making all those objects reference the same string content. So the `is` operator returns true for those. However, this is an implementation detail of CPython and is generally guaranteed neither for CPython nor for any other implementation. So relying on this fact is a bad idea, as it can break any other day.

To compare strings, you use the `==` operator, which compares the *equality* of objects. Two string objects are considered equal when they contain the same characters. So this is the correct operator to use when comparing strings, and `is` should generally be avoided unless you *explicitly want* object *identity* (example: `a is False`).

---

If you are really interested in the details, you can find the implementation of CPython’s strings [here](http://hg.python.org/cpython/file/default/Objects/unicodeobject.c). But again: this is an implementation detail, so you should *never* rely on this to work.
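If you genuinely need identity semantics for strings, Python exposes interning explicitly via `sys.intern` (plain `intern` in Python 2), which guarantees one shared object per distinct string value:

```python
import sys

a = "is it the space?"
b = "is it the space?"
print(a == b)  # True: same contents
# Whether `a is b` holds is an implementation detail -- don't rely on it.

# sys.intern guarantees one shared object per distinct string value:
c = sys.intern("is it the space?")
d = sys.intern("is it the space?")
print(c is d)  # True, by contract
```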
'is' operator behaves differently when comparing strings with spaces
[ "python", "python-3.x", "operators", "object-identity" ]
I want to scrape data from a website which has text fields, buttons etc., and my requirement is to fill the text fields and submit the form to get the results, and then scrape the data points from the results page. I want to know whether Scrapy has this feature, or if anyone can recommend a library in Python to accomplish this task?

(edited) I want to scrape the data from the following website: <http://a836-acris.nyc.gov/DS/DocumentSearch/DocumentType>

My requirement is to select the values from the combo boxes, hit the search button and scrape the data points from the result page.

P.S. I'm using the Selenium Firefox driver to scrape data from some other website, but that solution is not good because the Selenium Firefox driver is dependent on Firefox's EXE, i.e. Firefox must be installed before running the scraper. The Selenium Firefox driver consumes around 100MB of memory per instance, and my requirement is to run a lot of instances at a time to make the scraping process quick, so there is a memory limitation as well. Firefox sometimes crashes during the execution of the scraper, I don't know why. Also, I need windowless scraping, which is not possible with the Selenium Firefox driver. My ultimate goal is to run the scrapers on Heroku, where I have a Linux environment, so the Selenium Firefox driver won't work on Heroku. Thanks
Basically, you have plenty of tools to choose from:

* [scrapy](http://scrapy.org/)
* [beautifulsoup](https://pypi.python.org/pypi/BeautifulSoup/)
* [lxml](http://lxml.de/)
* [mechanize](http://wwwsearch.sourceforge.net/mechanize/)
* [requests](http://docs.python-requests.org/en/latest/) (and [grequests](https://github.com/kennethreitz/grequests))
* [selenium](https://pypi.python.org/pypi/selenium)
* [ghost.py](http://jeanphix.me/Ghost.py/)

These tools have different purposes but they can be mixed together depending on the task.

Scrapy is a powerful and very smart tool for crawling websites and extracting data. But when it comes to manipulating the page - clicking buttons, filling forms - it becomes more complicated:

* sometimes it's easy to simulate filling/submitting forms by making the underlying form action directly in scrapy
* sometimes you have to use other tools to help scrapy - like mechanize or selenium

If you make your question more specific, it'll help to understand what kind of tools you should use or choose from.

Take a look at an example of an interesting scrapy & selenium mix.
Here, selenium's task is to click the button and provide data for scrapy items:

```
import time

from scrapy.item import Item, Field
from scrapy.spider import BaseSpider
from selenium import webdriver


class ElyseAvenueItem(Item):
    name = Field()


class ElyseAvenueSpider(BaseSpider):
    name = "elyse"
    allowed_domains = ["ehealthinsurance.com"]
    start_urls = ['http://www.ehealthinsurance.com/individual-family-health-insurance?action=changeCensus&census.zipCode=48341&census.primary.gender=MALE&census.requestEffectiveDate=06/01/2013&census.primary.month=12&census.primary.day=01&census.primary.year=1971']

    def __init__(self):
        self.driver = webdriver.Firefox()

    def parse(self, response):
        self.driver.get(response.url)
        el = self.driver.find_element_by_xpath("//input[contains(@class,'btn go-btn')]")
        if el:
            el.click()

        time.sleep(10)

        plans = self.driver.find_elements_by_class_name("plan-info")
        for plan in plans:
            item = ElyseAvenueItem()
            item['name'] = plan.find_element_by_class_name('primary').text
            yield item

        self.driver.close()
```

UPDATE: Here's an example of how to use scrapy in your case:

```
from scrapy.http import FormRequest
from scrapy.item import Item, Field
from scrapy.selector import HtmlXPathSelector
from scrapy.spider import BaseSpider


class AcrisItem(Item):
    borough = Field()
    block = Field()
    doc_type_name = Field()


class AcrisSpider(BaseSpider):
    name = "acris"
    allowed_domains = ["a836-acris.nyc.gov"]
    start_urls = ['http://a836-acris.nyc.gov/DS/DocumentSearch/DocumentType']

    def parse(self, response):
        hxs = HtmlXPathSelector(response)
        document_classes = hxs.select('//select[@name="combox_doc_doctype"]/option')
        form_token = hxs.select('//input[@name="__RequestVerificationToken"]/@value').extract()[0]
        for document_class in document_classes:
            if document_class:
                doc_type = document_class.select('.//@value').extract()[0]
                doc_type_name = document_class.select('.//text()').extract()[0]
                formdata = {'__RequestVerificationToken': form_token,
                            'hid_selectdate': '7',
                            'hid_doctype': doc_type,
                            'hid_doctype_name': doc_type_name,
                            'hid_max_rows': '10',
                            'hid_ISIntranet': 'N',
                            'hid_SearchType': 'DOCTYPE',
                            'hid_page': '1',
                            'hid_borough': '0',
                            'hid_borough_name': 'ALL BOROUGHS',
                            'hid_ReqID': '',
                            'hid_sort': '',
                            'hid_datefromm': '',
                            'hid_datefromd': '',
                            'hid_datefromy': '',
                            'hid_datetom': '',
                            'hid_datetod': '',
                            'hid_datetoy': ''}
                yield FormRequest(url="http://a836-acris.nyc.gov/DS/DocumentSearch/DocumentTypeResult",
                                  method="POST",
                                  formdata=formdata,
                                  callback=self.parse_page,
                                  meta={'doc_type_name': doc_type_name})

    def parse_page(self, response):
        hxs = HtmlXPathSelector(response)

        rows = hxs.select('//form[@name="DATA"]/table/tbody/tr[2]/td/table/tr')
        for row in rows:
            item = AcrisItem()
            borough = row.select('.//td[2]/div/font/text()').extract()
            block = row.select('.//td[3]/div/font/text()').extract()

            if borough and block:
                item['borough'] = borough[0]
                item['block'] = block[0]
                item['doc_type_name'] = response.meta['doc_type_name']

                yield item
```

Save it in `spider.py` and run it via `scrapy runspider spider.py -o output.json`, and in `output.json` you will see:

```
{"doc_type_name": "CONDEMNATION PROCEEDINGS ", "borough": "Borough", "block": "Block"}
{"doc_type_name": "CERTIFICATE OF REDUCTION ", "borough": "Borough", "block": "Block"}
{"doc_type_name": "COLLATERAL MORTGAGE ", "borough": "Borough", "block": "Block"}
{"doc_type_name": "CERTIFIED COPY OF WILL ", "borough": "Borough", "block": "Block"}
{"doc_type_name": "CONFIRMATORY DEED ", "borough": "Borough", "block": "Block"}
{"doc_type_name": "CERT NONATTCHMENT FED TAX LIEN ", "borough": "Borough", "block": "Block"}
...
```

Hope that helps.
If you simply want to submit the form and extract data from the resulting page, I'd go for:

* [requests](http://docs.python-requests.org/en/latest/) to send the POST request
* [beautiful soup](http://www.crummy.com/software/BeautifulSoup/bs4/doc/) to extract chosen data from the result page

Scrapy's added value really lies in its ability to follow links and crawl a website; I don't think it is the right tool for the job if you know precisely what you are searching for.
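As a rough illustration of the "extract chosen data from the result page" step without any third-party packages, here is a sketch using only the standard library's `html.parser`. The HTML fragment is invented; with BeautifulSoup the equivalent would be roughly `[td.get_text(strip=True) for td in soup.find_all('td')]`:

```python
from html.parser import HTMLParser

# Hypothetical result-page fragment; a real response body would come from
# something like requests.post(url, data=formdata).text
RESULT_HTML = """
<table>
  <tr><td class="borough">Borough</td><td class="block">Block</td></tr>
  <tr><td class="borough">Queens</td><td class="block">1024</td></tr>
</table>
"""

class CellExtractor(HTMLParser):
    """Collect the text content of every <td> cell."""
    def __init__(self):
        super().__init__()
        self.in_td = False
        self.cells = []

    def handle_starttag(self, tag, attrs):
        if tag == "td":
            self.in_td = True

    def handle_endtag(self, tag):
        if tag == "td":
            self.in_td = False

    def handle_data(self, data):
        if self.in_td and data.strip():
            self.cells.append(data.strip())

p = CellExtractor()
p.feed(RESULT_HTML)
print(p.cells)  # ['Borough', 'Block', 'Queens', '1024']
```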
Python data scraping with Scrapy
[ "python", "python-2.7", "web-scraping", "scrapy" ]
I have a MySQL database that I use only for logging. It consists of several simple look-alike MyISAM tables. There is always one local (i.e. located on the same machine) client that only writes data to the db, and several remote clients that only read data. What I need is to insert bulks of data from the local client as fast as possible. I have already tried many approaches to make this faster, such as reducing the number of inserts by increasing the length of the values list, or using `LOAD DATA .. INFILE`, and some others. Now it seems to me that I've reached the limit imposed by parsing values from string into their target data types (no matter whether this is done when parsing queries or a text file).

**So the question is:** does MySQL provide some means of manipulating data directly for local clients (i.e. not using SQL)? Maybe there is some API that allows inserting data by simply passing a pointer.

**Once again: I don't want to optimize the SQL code or invoke the same queries in a script as hd1 advised. What I want is to pass a buffer of data directly to the database engine. This means I don't want to invoke SQL at all. Is it possible?**
After making some investigation I found no way of passing data directly to the MySQL database engine (without parsing it). My aim was to speed up communication between the local client and the db server as much as possible. The idea was that if the client is local, it could use some API functions to pass data to the db engine, thus not using (i.e. parsing) SQL and the values in it. The closest solution was proposed by **bobwienholt** (using a prepared statement and binding parameters), but `LOAD DATA .. INFILE` appeared to be a bit faster in my case.
Use MySQL's `LOAD DATA` command: write the data to a file in CSV format, then execute this SQL statement:

```
LOAD DATA INFILE 'somefile.csv' INTO TABLE mytable
```

For more info, see the [documentation](http://dev.mysql.com/doc/refman/5.1/en/load-data.html)
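The prepared-statement-with-bound-parameters route mentioned above can be sketched with the DB-API's `executemany`. sqlite3 stands in for MySQL here (MySQL drivers use `%s` placeholders rather than `?`, but the shape is the same), and the table and rows are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE log (ts INTEGER, level TEXT, msg TEXT)")

rows = [(1, "INFO", "started"), (2, "WARN", "slow query"), (3, "INFO", "done")]

# One statement, executed once per row with bound parameters --
# the values bypass SQL text parsing on every insert.
conn.executemany("INSERT INTO log VALUES (?, ?, ?)", rows)
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM log").fetchone()[0]
print(count)  # 3
```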
How to insert data to mysql directly (not using sql queries)
[ "mysql", "sql", "database" ]
I want to get the IDs from the [interactions] table, but only those that do not appear in the [EmailOut] table. I couldn't write the query.

```
Select ID from EmailOut
where ID NOT IN (select ID
                 from [172.28.101.120].[GenesysIS].dbo.interactions
                 where media_type = 'email' and type = 'Outbound')
```

Something similar to this. I want the Outbound emails in the Interactions table, but these emails may also exist in the EmailOut table, and I want to remove them. The Outbound email count is about 300, so this query's result should be less than 300.
It seems you should reverse your query if you want to get the IDs of the [interactions] table:

```
select ID
from [172.28.101.120].[GenesysIS].dbo.interactions
where media_type = 'email' and type = 'Outbound'
  AND ID NOT IN (SELECT ID FROM EmailOut)
```
Try this one -

```
SELECT t2.*
FROM [172.28.101.120].[GenesysIS].dbo.interactions t2
WHERE t2.media_type = 'email'
  AND t2.[type] = 'Outbound'
  AND NOT EXISTS (
      SELECT 1
      FROM dbo.EmailOut t
      WHERE t.id = t2.id
  )
```
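One reason to prefer the `NOT EXISTS` form: `NOT IN (subquery)` silently returns no rows at all if the subquery produces any `NULL`. A self-contained demonstration with Python's sqlite3 and invented data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE interactions (id INTEGER)")
conn.execute("CREATE TABLE email_out (id INTEGER)")
conn.executemany("INSERT INTO interactions VALUES (?)", [(1,), (2,), (3,)])
conn.executemany("INSERT INTO email_out VALUES (?)", [(1,), (None,)])

# The NULL in email_out makes every NOT IN comparison UNKNOWN -> no rows.
not_in = conn.execute(
    "SELECT id FROM interactions WHERE id NOT IN (SELECT id FROM email_out)"
).fetchall()

# NOT EXISTS is unaffected by NULLs in the other table.
not_exists = conn.execute(
    """SELECT id FROM interactions i
       WHERE NOT EXISTS (SELECT 1 FROM email_out e WHERE e.id = i.id)"""
).fetchall()

print(not_in)      # []
print(not_exists)  # [(2,), (3,)]
```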
tSQL NOT IN Query
[ "sql", "sql-server", "t-sql", "notin" ]
I am using Oracle 10g. I am adding a new column, DeptId, to my UserList table, where the DeptId column is a foreign key referencing the column Departments.DepartmentId. Is there a difference between adding the foreign key as a named constraint (Query2) and the first query?

**Query1**

```
ALTER TABLE UserList
ADD FOREIGN KEY (DeptId) REFERENCES Departments(DepartmentId)
```

**Query2**

```
ALTER TABLE UserList
ADD CONSTRAINT fk_DeptId FOREIGN KEY (DeptId) REFERENCES Departments(DepartmentId)
```
There is no difference except in your use of the optional "CONSTRAINT" and constraint name clause. There are two kinds of constraint definition: inline and out of line. The former operates on a column as part of the column definition, and hence does not need to name the DeptID column. The latter is part of the table definition and therefore does. Both of your examples are out of line constraints, but you have not named the constraint in the former case, which is a bad practice: <http://docs.oracle.com/cd/E18283_01/server.112/e17118/clauses002.htm#g1053592>
The second syntax allows you to name your constraint. The first doesn't.
What is the Difference between adding Column as Foreign Key and as a Constraint
[ "sql", "oracle", "oracle10g" ]
OpenSSL provides a popular (but insecure – see below!) command line interface for AES encryption:

```
openssl aes-256-cbc -salt -in filename -out filename.enc
```

Python has support for AES in the shape of the PyCrypto package, but it only provides the tools. How to use Python/PyCrypto to decrypt files that have been encrypted using OpenSSL?

## Notice

This question used to also concern encryption in Python using the same scheme. I have since removed that part to discourage anyone from using it. Do NOT encrypt any more data in this way, because it is NOT secure by today's standards. You should ONLY use decryption, for no other reasons than BACKWARD COMPATIBILITY, i.e. when you have no other choice. Want to encrypt? Use NaCl/libsodium if you possibly can.
Given the popularity of Python, at first I was disappointed that there was no complete answer to this question to be found. It took me a fair amount of reading different answers on this board, as well as other resources, to get it right. I thought I might share the result for future reference and perhaps review; I'm by no means a cryptography expert! However, the code below appears to work seamlessly:

```
from hashlib import md5
from Crypto.Cipher import AES
from Crypto import Random

def derive_key_and_iv(password, salt, key_length, iv_length):
    d = d_i = ''
    while len(d) < key_length + iv_length:
        d_i = md5(d_i + password + salt).digest()
        d += d_i
    return d[:key_length], d[key_length:key_length+iv_length]

def decrypt(in_file, out_file, password, key_length=32):
    bs = AES.block_size
    salt = in_file.read(bs)[len('Salted__'):]
    key, iv = derive_key_and_iv(password, salt, key_length, bs)
    cipher = AES.new(key, AES.MODE_CBC, iv)
    next_chunk = ''
    finished = False
    while not finished:
        chunk, next_chunk = next_chunk, cipher.decrypt(in_file.read(1024 * bs))
        if len(next_chunk) == 0:
            padding_length = ord(chunk[-1])
            chunk = chunk[:-padding_length]
            finished = True
        out_file.write(chunk)
```

Usage:

```
with open(in_filename, 'rb') as in_file, open(out_filename, 'wb') as out_file:
    decrypt(in_file, out_file, password)
```

If you see a chance to improve on this or extend it to be more flexible (e.g. make it work without salt, or provide Python 3 compatibility), please feel free to do so.

## Notice

This answer used to also concern encryption in Python using the same scheme. I have since removed that part to discourage anyone from using it. Do NOT encrypt any more data in this way, because it is NOT secure by today's standards. You should ONLY use decryption, for no other reasons than BACKWARD COMPATIBILITY, i.e. when you have no other choice. Want to encrypt? Use NaCl/libsodium if you possibly can.
I am re-posting your code with a couple of corrections (I didn't want to obscure your version). While your code works, it does not detect some errors around padding. In particular, if the decryption key provided is incorrect, your padding logic may do something odd. If you agree with my change, you may update your solution.

```
from hashlib import md5
from Crypto.Cipher import AES
from Crypto import Random

def derive_key_and_iv(password, salt, key_length, iv_length):
    d = d_i = ''
    while len(d) < key_length + iv_length:
        d_i = md5(d_i + password + salt).digest()
        d += d_i
    return d[:key_length], d[key_length:key_length+iv_length]

# This encryption mode is no longer secure by today's standards.
# See note in original question above.
def obsolete_encrypt(in_file, out_file, password, key_length=32):
    bs = AES.block_size
    salt = Random.new().read(bs - len('Salted__'))
    key, iv = derive_key_and_iv(password, salt, key_length, bs)
    cipher = AES.new(key, AES.MODE_CBC, iv)
    out_file.write('Salted__' + salt)
    finished = False
    while not finished:
        chunk = in_file.read(1024 * bs)
        if len(chunk) == 0 or len(chunk) % bs != 0:
            padding_length = bs - (len(chunk) % bs)
            chunk += padding_length * chr(padding_length)
            finished = True
        out_file.write(cipher.encrypt(chunk))

def decrypt(in_file, out_file, password, key_length=32):
    bs = AES.block_size
    salt = in_file.read(bs)[len('Salted__'):]
    key, iv = derive_key_and_iv(password, salt, key_length, bs)
    cipher = AES.new(key, AES.MODE_CBC, iv)
    next_chunk = ''
    finished = False
    while not finished:
        chunk, next_chunk = next_chunk, cipher.decrypt(in_file.read(1024 * bs))
        if len(next_chunk) == 0:
            padding_length = ord(chunk[-1])
            if padding_length < 1 or padding_length > bs:
                raise ValueError("bad decrypt pad (%d)" % padding_length)
            # all the pad-bytes must be the same
            if chunk[-padding_length:] != (padding_length * chr(padding_length)):
                # this is similar to the bad decrypt:evp_enc.c from openssl program
                raise ValueError("bad decrypt")
            chunk = chunk[:-padding_length]
            finished = True
        out_file.write(chunk)
```
How to decrypt OpenSSL AES-encrypted files in Python?
[ "", "python", "encryption", "openssl", "aes", "pycrypto", "" ]
Task: 1. Define a function, `distance_from_zero` with one parameter. 2. Have that function do the following: * Check the type of the input it receives. * If the type is `int` or `float`, the function should return the absolute value of the function input. * If the type is any other type, the function should return `"Not an integer or float!"` My answer that does not work: ``` def distance_from_zero(d): if type(d) == int or float: return abs(d) else: return "Not an integer or float!" ```
You cannot use this kind of "natural language based logic concatenation". What I mean is that you need to state the parts of your logical conditions explicitly. ``` if type(d) == int or type(d) == float ``` This way you have the two comparisons, which stand for themselves: `if type(d) == int` as well as `type(d) == float`. The results of this can be combined with the `or`-operator.
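The failure mode is easy to see in the interpreter: the broken condition parses as `(type(d) == int) or (float)`, and a class object is truthy on its own. A quick sketch:

```python
d = "not a number"

# Parses as (type(d) == int) or (float); the class object float is truthy,
# so the whole expression is never False.
print(type(d) == int or float)             # <class 'float'>

# The explicit form tests what was actually intended.
print(type(d) == int or type(d) == float)  # False
```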
You should use `isinstance` here rather than `type`: ``` def distance_from_zero(d): if isinstance(d, (int, float)): return abs(d) else: return "Not an integer or float!" ``` `type(d) == int or float` is always going to be `True`, as it is evaluated as `(type(d) == int) or float`, and the class object `float` is a truthy value: ``` >>> bool(float) True ``` help on `isinstance`: ``` >>> print isinstance.__doc__ isinstance(object, class-or-type-or-tuple) -> bool Return whether an object is an instance of a class or of a subclass thereof. With a type as second argument, return whether that is the object's type. The form using a tuple, isinstance(x, (A, B, ...)), is a shortcut for isinstance(x, A) or isinstance(x, B) or ... (etc.). ``` Related: [How to compare type of an object in Python?](https://stackoverflow.com/questions/707674/how-to-compare-type-of-an-object-in-python)
Checking type of variable against multiple types doesn't produce expected result
[ "", "python", "if-statement", "types", "boolean-logic", "" ]
I wanted to match the contents inside the parentheses (the ones with "per contract", but omitting unwanted elements like "=" in the 3rd line) like this: ``` 1/100 of a cent ($0.0001) per pound ($6.00 per contract) and .001 Index point (10 Cents per contract) and $.00025 per pound (=$10 per contract) ``` I'm using the following regex: ``` r'.*?\([^$]*([\$|\d][^)]* per contract)\)' ``` This works well for any expression inside the parentheses which starts off with a `$`, but for the second line, it omits the `1` from `10 Cents`. Not sure what's going on here.
> for the second line, it omits the 1 from 10 Cents. Not sure what's going on here. What's going on is that `[^$]*` is greedy: It'll happily match digits, and leave just one digit to satisfy the `[\$|\d]` that follows it. (So, if you wrote `(199 cents` you'd only get `9`). Fix it by writing `[^$]*?` instead: ``` r'.*?\([^$]*?([\$|\d][^)]* per contract)\)' ```
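The `199 cents` example from the answer can be checked directly. The pattern below is a trimmed-down version of the one in the question (just the part after the opening parenthesis, with the stray `|` removed from the character class):

```python
import re

text = "(199 cents per contract)"

greedy = re.search(r"\(([^$]*)([\$\d][^)]* per contract)\)", text)
lazy = re.search(r"\(([^$]*?)([\$\d][^)]* per contract)\)", text)

# The greedy [^$]* swallows "19", leaving only one digit for [\$\d].
print(greedy.group(2))  # 9 cents per contract
# The lazy [^$]*? gives the digits back, so the full amount is captured.
print(lazy.group(2))    # 199 cents per contract
```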
You could probably use a less specific regex ``` re.findall(r'\(([^)]+) per contract\)', str) ``` This will match the "$6.00" and the "10 Cents."
Regex in Python for matching contents inside ()
[ "", "python", "regex", "" ]
What is the best way of extracting expressions from the following lines using regex: ``` Sigma 0.10 index = $5.00 beta .05=$25.00 .35 index (or $12.5) Gamma 0.07 ``` In each case, I want to extract the numeric value from each line (for example "0.10" from line 1) and, if available, the dollar amount ("$5.00" for line 1).
``` import re s="""Sigma 0.10 index = $5.00 beta .05=$25.00 .35 index (or $12.5) Gamma 0.07""" print re.findall(r'[0-9$.]+', s) ``` Output: ``` ['0.10', '$5.00', '.05', '$25.00', '.35', '$12.5', '0.07'] ``` More strict regex: ``` print re.findall(r'[$]?\d+(?:\.\d+)?', s) ``` Output: ``` ['0.10', '$5.00', '$25.00', '$12.5', '0.07'] ``` If you want to match `.05` also: ``` print re.findall(r'[$]?(?:\d*\.\d+)|\d+', s) ``` Output: ``` ['0.10', '$5.00', '.05', '$25.00', '.35', '$12.5', '0.07'] ```
Well, the base regex would be: `\$?\d+(\.\d+)?`, which will get you the numbers. Unfortunately, I only know regex in JavaScript/C#, so I'm not sure how to do multiple lines in Python. It should be a really simple flag, though.
Using regex for multiple lines
[ "", "python", "regex", "" ]
I'm building a `Common Table Expression (CTE)` in `SQL Server 2008` to use in a `PIVOT` query. I'm having difficulty sorting the output properly because there are numeric values that sandwich the string data in the middle. Is it possible to do this? This is a quick and dirty example; the real query will span several years' worth of values. Example: ``` Declare @startdate as varchar(max); Declare @enddate as varchar(max); Set @startdate = cast((DATEPART(yyyy, GetDate())-1) as varchar(4))+'-12-01'; Set @enddate = cast((DATEPART(yyyy, GetDate())) as varchar(4))+'-03-15'; WITH DateRange(dt) AS ( SELECT CONVERT(datetime, @startdate) dt UNION ALL SELECT DATEADD(dd,1,dt) dt FROM DateRange WHERE dt < CONVERT(datetime, @enddate) ) SELECT DISTINCT ',' + QUOTENAME((cast(DATEPART(yyyy, dt) as varchar(4)))+'-Week'+(cast(DATEPART(ww, dt) as varchar(2)))) FROM DateRange ``` Current Output: ``` ,[2012-Week48] ,[2012-Week49] ,[2012-Week50] ,[2012-Week51] ,[2012-Week52] ,[2012-Week53] ,[2013-Week1] ,[2013-Week10] ,[2013-Week11] ,[2013-Week2] ,[2013-Week3] ,[2013-Week4] ,[2013-Week5] ,[2013-Week6] ,[2013-Week7] ,[2013-Week8] ,[2013-Week9] ``` Desired Output: ``` ,[2012-Week48] ,[2012-Week49] ,[2012-Week50] ,[2012-Week51] ,[2012-Week52] ,[2012-Week53] ,[2013-Week1] ,[2013-Week2] ,[2013-Week3] ,[2013-Week4] ,[2013-Week5] ,[2013-Week6] ,[2013-Week7] ,[2013-Week8] ,[2013-Week9] ,[2013-Week10] ,[2013-Week11] ``` **EDIT** Of course, after I posted the question my brain started working. I changed the `DATEADD` to add 1 week instead of 1 day, then took out the `DISTINCT` in the select, and it worked.
``` DECLARE @startdate AS VARCHAR(MAX); DECLARE @enddate AS VARCHAR(MAX); SET @startdate = CAST((DATEPART(yyyy, GetDate())-1) AS VARCHAR(4))+'-12-01'; SET @enddate = CAST((DATEPART(yyyy, GetDate())) AS VARCHAR(4))+'-03-15'; WITH DateRange(dt) AS ( SELECT CONVERT(datetime, @startdate) dt UNION ALL SELECT DATEADD(ww,1,dt) dt FROM DateRange WHERE dt < CONVERT(datetime, @enddate) ) SELECT ',' + QUOTENAME((CAST(DATEPART(yyyy, dt) AS VARCHAR(4)))+'-Week'+(CAST(DATEPART(ww, dt) AS VARCHAR(2)))) FROM DateRange ```
I needed to change the `DATEADD` portion of the query and remove the `DISTINCT`. Once changed, the order sorted properly on its own: ``` DECLARE @startdate AS VARCHAR(MAX); DECLARE @enddate AS VARCHAR(MAX); SET @startdate = CAST((DATEPART(yyyy, GetDate())-1) AS VARCHAR(4))+'-12-01'; SET @enddate = CAST((DATEPART(yyyy, GetDate())) AS VARCHAR(4))+'-03-15'; WITH DateRange(dt) AS ( SELECT CONVERT(datetime, @startdate) dt UNION ALL SELECT DATEADD(ww,1,dt) dt FROM DateRange WHERE dt < CONVERT(datetime, @enddate) ) SELECT ',' + QUOTENAME((CAST(DATEPART(yyyy, dt) AS VARCHAR(4)))+'-Week'+(CAST(DATEPART(ww, dt) AS VARCHAR(2)))) FROM DateRange ```
I can't see the sample SQL code (that site is blacklisted where I am). A trick for sorting that data in the proper order is to use the length first and then the values: ``` select col from t order by left(col, 6), len(col), col; ```
How to sort varchar string properly with numeric values on both ends?
[ "", "sql", "sql-server-2008", "sql-order-by", "common-table-expression", "dynamic-pivot", "" ]
I have the following file names that exhibit this pattern: ``` 000014_L_20111007T084734-20111008T023142.txt 000014_U_20111007T084734-20111008T023142.txt ... ``` I want to extract the middle two time stamp parts after the second underscore `'_'` and before `'.txt'`. So I used the following Python regex string split: ``` time_info = re.split('^[0-9]+_[LU]_|-|\.txt$', f) ``` But this gives me two extra empty strings in the returned list: ``` time_info=['', '20111007T084734', '20111008T023142', ''] ``` How do I get only the two time stamp information? i.e. I want: ``` time_info=['20111007T084734', '20111008T023142'] ```
Don't use `re.split()`; use the `groups()` method of regex `Match`/`SRE_Match` objects. ``` >>> f = '000014_L_20111007T084734-20111008T023142.txt' >>> time_info = re.search(r'[LU]_(\w+)-(\w+)\.', f).groups() >>> time_info ('20111007T084734', '20111008T023142') ``` You can even name the capturing groups and retrieve them in a dict, though you use `groupdict()` rather than `groups()` for that. (The regex pattern for such a case would be something like `r'[LU]_(?P<groupA>\w+)-(?P<groupB>\w+)\.'`)
I'm no Python expert but maybe you could just remove the empty strings from your list? ``` str_list = re.split('^[0-9]+_[LU]_|-|\.txt$', f) time_info = filter(None, str_list) ```
Python regex split without empty string
[ "", "python", "regex", "" ]
I have a 64-bit hex number and I want to convert it to an unsigned integer. I run ``` >>> a = "ffffffff723b8640" >>> int(a,16) 18446744071331087936L ``` So what is the 'L' at the end of the number? Using the following commands also doesn't help: ``` >>> int(a,16)[:-1] Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: 'long' object is unsubscriptable >>> int(a,16).rstrip("L") Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: 'long' object has no attribute 'rstrip' ```
You are trying to apply string methods to an integer. But the string representation of a long integer doesn't have the `L` at the end: ``` In [1]: a = "ffffffff723b8640" In [2]: int(a, 16) Out[2]: 18446744071331087936L In [3]: str(int(a, 16)) Out[3]: '18446744071331087936' ``` The `__repr__` does, though (as @mgilson notes): ``` In [4]: repr(int(a, 16)) Out[4]: '18446744071331087936L' In [5]: repr(int(a, 16))[:-1] Out[5]: '18446744071331087936' ```
Python 2.x has 2 classes of integer (neither of them is unsigned, btw). There is the usual class `int`, which is based on your system's concept of an integer (often a 4-byte integer). There's also the arbitrary-"precision" integer type, `long`. They behave the same in *almost*1 all circumstances, and `int` objects automatically convert to `long` if they overflow. Don't worry about the `L` in the representation -- it just means your integer is too big for `int` (there was an overflow), so Python automatically created a `long` instead. It is also worth pointing out that in Python 3.x, they removed Python 2.x's `int` in favor of always using `long`. Since they're now always using `long`, they renamed it to `int`, as that name is much more common in code. [PEP-237](http://www.python.org/dev/peps/pep-0237/) gives more of the rationale behind this decision. 1The only time they behave differently that I can think of is that long's `__repr__` adds that extra `L` on the end that you're seeing.
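Both answers describe Python 2 behavior; under Python 3 the unification of `int` and `long` is complete, so the same conversion works with no trailing `L` at all. A quick check under Python 3:

```python
a = "ffffffff723b8640"
n = int(a, 16)

print(n)        # 18446744071331087936
# Python 3 has a single arbitrary-precision int, so repr() has no 'L'.
print(repr(n))  # 18446744071331087936
```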
converting hex to int, the 'L' character
[ "", "python", "" ]
I am using BeautifulSoup to scrape HTML data. My issue is that the output displays some information I do not want to see, so I tried to use regex to delete that information; however, I am not good at regex, so I am posting my script and the output I want here, hoping someone can help me. Regards. My output currently shows: ``` <title>Music Games DVDs at JB Hi-Fi Australia</title> Apple iPad mini 16GB Wi-Fi (Black) [Pick-up In-store Only]  <span class="SKU">MD528X/A</span> Apple iPad mini 16GB Wi-Fi (White) [Pick-up In-store Only]  <span class="SKU">MD531X/A</span> Apple iPad mini 32GB Wi-Fi (Black) [Pick-up In-store Only]  <span class="SKU">MD529X/A</span> ``` However, I only want it to show: ``` JB Hi-Fi Australia Apple iPad mini 16GB Wi-Fi (Black) Apple iPad mini 16GB Wi-Fi (White) ``` The following is my script: ``` url1="http://www.jbhifionline.com.au/support.aspx?post=1&results=10&q=ipad+mini&source=all&bnSearch.x=0&bnSearch.y=0" page=urllib2.urlopen(url1) soup = BeautifulSoup(page.read()) #website source site name : <title>Music Games DVDs at JB Hi-FiAustralia</title> sitename1=soup.html.head.title print sitename1 ##issue here!! output shows <title>....</title> #website source name:<td class="check_title">Apple iPad mini 16GB Wi-Fi (Black) [Pick-up In-store Only]<span class="SKU$` productname1=soup.findAll('td',{'class':'check_title'}) for eachproductname1 in productname1: print eachproductname1.renderContents().replace("^<span>","") ```
``` for each in productname1: x = each.contents[0].strip() x = re.sub(r'\[[\w\s-]*\][\xc2]',"",x) print x ``` Output: ``` Apple iPad mini 16GB Wi-Fi (Black) Apple iPad mini 16GB Wi-Fi (White) Apple iPad mini 16GB Wi-Fi + Cellular (Black) ``` I hope this helps. I had to answer again, as this is the complete solution.
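A self-contained sketch of the same cleanup idea, runnable without scraping the page. The sample string and the slightly more permissive pattern (`[^\]]*` instead of `[\w\s-]*`, dropping the bracketed note and anything after it in one go) are illustrative assumptions:

```python
import re

# Sample value as it might come back from the scrape; the trailing
# \xa0 stands in for the stray non-breaking-space byte seen above.
name = "Apple iPad mini 16GB Wi-Fi (Black) [Pick-up In-store Only]\xa0"

# Drop everything from the bracketed availability note onward.
clean = re.sub(r"\s*\[[^\]]*\].*$", "", name)
print(clean)  # Apple iPad mini 16GB Wi-Fi (Black)
```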
``` for eachproductname1 in productname1: print eachproductname1.contents[0].strip() ``` Output : (You can modify this in a way you want, I think that should be easy from this point) ``` Apple iPad mini 16GB Wi-Fi (Black) [Pick-up In-store Only]Â Apple iPad mini 16GB Wi-Fi (White) [Pick-up In-store Only]Â ```
how to using regex to delete some data in python after beautifulsoup
[ "", "python", "regex", "" ]
I'm trying to update the column "idleTime" every few minutes with this query. I would like to update it ONLY if the value in the DB is smaller! ``` INSERT INTO bl_statistics (id, date, idleTime) VALUES ("", DATE_FORMAT(NOW(), "%Y-%m-%d"), "1.01234") ON DUPLICATE KEY UPDATE idleTime=if(VALUES(idleTime) < 1.01234, VALUES(idleTime), "1.01234"); ``` No matter what, the value ALWAYS gets overwritten. Am I missing something, or is it impossible to update values in such a way?
Your implementation is quite right, but I think you have not specified a `UNIQUE` constraint on the column `date`. To do that: ``` ALTER TABLE bl_statistics ADD CONSTRAINT tb_uq UNIQUE(date) ``` And execute this statement: ``` INSERT INTO bl_statistics(`id`, `date`, `idleTime`) VALUES(NULL, '2013-05-30 00:00:00', 2) ON DUPLICATE KEY UPDATE idleTime = IF(idleTime < 2, 2, idleTime) ``` Here's a link to a fully working demo: <http://www.sqlfiddle.com/#!2/87bab/1> Oh, and one more thing: do not store the date as a string; store it as `DATETIME` or `DATE`.
If you are trying to *update* the value, why are you using `insert`? ``` update bl_statistics set idleTime = 1.01234, date = DATE_FORMAT(NOW(), "%Y-%m-%d") where idleTime < 1.01234 and id = '' ```
How to update column in database ONLY / WHEN value in DB is smaller then the NEW value
[ "", "mysql", "sql", "" ]
I've inherited a SQL Server 2008 R2 project that, amongst other things, does a table update from another table: * `Table1` (with around 150,000 rows) has 3 phone number fields (`Tel1`,`Tel2`,`Tel3`) * `Table2` (with around 20,000 rows) has 3 phone number fields (`Phone1`,`Phone2`,`Phone3`) .. and when any of those numbers match, `Table1` should be updated. The current code looks like: ``` UPDATE t1 SET surname = t2.surname, Address1=t2.Address1, DOB=t2.DOB, Tel1=t2.Phone1, Tel2=t2.Phone2, Tel3=t2.Phone3 FROM Table1 t1 inner join Table2 t2 on (t1.Tel1 = t2.Phone1 and t1.Tel1 is not null) or (t1.Tel1 = t2.Phone2 and t1.Tel1 is not null) or (t1.Tel1 = t2.Phone3 and t1.Tel1 is not null) or (t1.Tel2 = t2.Phone1 and t1.Tel2 is not null) or (t1.Tel2 = t2.Phone2 and t1.Tel2 is not null) or (t1.Tel2 = t2.Phone3 and t1.Tel2 is not null) or (t1.Tel3 = t2.Phone1 and t1.Tel3 is not null) or (t1.Tel3 = t2.Phone2 and t1.Tel3 is not null) or (t1.Tel3 = t2.Phone3 and t1.Tel3 is not null); ``` However, this query is taking over 30 minutes to run. The execution plan suggests that the main bottleneck is a `Nested Loop` around the Clustered Index Scan on `Table1`. Both tables have clustered indexes on their `ID` column. As my DBA skills are very limited, can anyone suggest the best way to improve the performance of this query? Would adding indexes on the `Tel1`, `Tel2` and `Tel3` columns of each table be the best move, or can the query be changed to improve performance?
On first look, I would recommend eliminating all your OR conditions from the select. See if this is faster (*it converts your update into 3 distinct updates*): ``` UPDATE t1 SET surname = t2.surname, Address1=t2.Address1, DOB=t2.DOB, Tel1=t2.Phone1, Tel2=t2.Phone2, Tel3=t2.Phone3 FROM Table1 t1 inner join Table2 t2 on (t1.Tel1 is not null AND t1.Tel1 IN (t2.Phone1, t2.Phone2, t2.Phone3)); UPDATE t1 SET surname = t2.surname, Address1=t2.Address1, DOB=t2.DOB, Tel1=t2.Phone1, Tel2=t2.Phone2, Tel3=t2.Phone3 FROM Table1 t1 inner join Table2 t2 on (t1.Tel2 is not null AND t1.Tel2 IN (t2.Phone1, t2.Phone2, t2.Phone3)); UPDATE t1 SET surname = t2.surname, Address1=t2.Address1, DOB=t2.DOB, Tel1=t2.Phone1, Tel2=t2.Phone2, Tel3=t2.Phone3 FROM Table1 t1 inner join Table2 t2 on (t1.Tel3 is not null AND t1.Tel3 IN (t2.Phone1, t2.Phone2, t2.Phone3)); ```
First normalise your table data: ``` insert into Table1Tel select primaryKey, Tel1 as 'tel' from Table1 where Tel1 is not null union select primaryKey, Tel2 from Table1 where Tel2 is not null union select primaryKey, Tel3 from Table1 where Tel3 is not null insert into Table2Phone select primaryKey, Phone1 as 'phone' from Table2 where Phone1 is not null union select primaryKey, Phone2 from Table2 where Phone2 is not null union select primaryKey, Phone3 from Table2 where Phone3 is not null ``` These normalised tables are a much better way to store your phone numbers than as additional columns. Then you can do something like this joining across the tables: ``` update t1 set surname = t2.surname, Address1 = t2.Address1, DOB = t2.DOB from Table1 t1 inner join Table1Tel tel on t1.primaryKey = tel.primaryKey inner join Table2Phone phone on tel.tel = phone.phone inner join Table2 t2 on phone.primaryKey = t2.primaryKey ``` Note that this doesn't fix the fundamental issue of dupes in your data - for instance if you have both Joe and Jane Bloggs in your data with the same phone number (even in different fields) you'll update both records to be the same.
SQL query with lots of JOIN conditions is very slow
[ "", "sql", "sql-server-2008", "" ]
I would like to add the output of a parts-of-speech tagger to an existing xml file with the POS-tags as attribute value pairs to the existing word element: ``` house/N + <w>house</w> --> <w pos="N">house</w> ``` I thought I could give unique IDs to the words, match those and then add the POS-tag to the existing xml file, so I designed the following function in Python: ``` import xml.etree.ElementTree as ET def add_postags(POSfile, xmlfile): """ Function that takes two arguments (POSfile, xmlfile). If the value of the word <w>'s attribute 'id' in the POSfile matches the value of 'id' in the existing xml file, it adds the pos tags that are stored as attribute-value pairs in (POSfile) to the xml file and writes this to a new document 'xmlPOS'. """ treePOS = ET.parse(POSfile) rootPOS = treePOS.getroot() tree = ET.parse(xmlfile) root = tree.getroot() for w in rootPOS.iter('w'): idPOS = w.get('id') for w in root.iter('w'): idxml = w.get('id') for w in rootPOS.iter('w'): POSval = w.get('pos') if idPOS == idxml: w.set('pos', POSval) tree.write('xmlPOS.xml') return xmlPOS ``` For this to work I'd have to convert the tagger output 'house/N' to an xml format: ``` <w id="1" pos="N">house</w> ``` But even if I do so and then import the above module in Python, I seem to be unable to add the POS tags to the existing xml file (which contains more editorial markup of course than the above example). Perhaps I should use XSLT instead of this Python xml parser? I'm not very familiar with XSLTs yet, so I thought I'd try this in Python first. Any comments or suggestions will be much appreciated: thanks in advance!
The `set` method is the appropriate way to set attributes in ElementTree, and I just tested that it works when applied to an XML file read from disk. I wonder if your problem is algorithmic: the algorithm you wrote doesn't look like it does what you want. The `idPOS`, `idxml`, and `POSval` are going to be equal to the *last* matching values in each file and `w` is going to be equal to the last `<w>` tag. It can only change one word, the last one. If you're going to be setting part-of-speech attributes in bulk, perhaps you want something more like the following (you may need to tweak it if I've made some wrong assumptions about how `POSfile` is structured): ``` # load all "pos" attributes into a dictionary for fast lookup posDict = {} for w in rootPOS.iter("w"): if w.get("pos") is not None: posDict[w.text] = w.get("pos") # if we see any matching words in the xmlfile, set their "pos" attribute for w in root.iter("w"): if w.text in posDict: w.set("pos", posDict[w.text]) ```
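Combining the lookup-dictionary approach with a small `<s>/<w>` sample gives a runnable end-to-end sketch (element and attribute names follow the question; the POS values stand in for real tagger output):

```python
import xml.etree.ElementTree as ET

xml_in = "<s><w>The</w><w>house</w><w>is</w><w>big</w><w>.</w></s>"
pos_dict = {"The": "DET", "house": "N", "is": "V", "big": "ADJ", ".": "PUNC"}

root = ET.fromstring(xml_in)
for w in root.iter("w"):
    if w.text in pos_dict:
        w.set("pos", pos_dict[w.text])

# Each <w> now carries its POS tag as an attribute.
print(ET.tostring(root, encoding="unicode"))
```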
I've performed the tagging, but I need to write the output into the xml file. The tagger output looks like this: ``` The/DET house/N is/V big/ADJ ./PUNC ``` The xml file from which the text came will look like this: ``` <s> <w>The</w> <w>house</w> <w>is</w> <w>big</w> <w>.</w> </s> ``` Now I would like to add the pos-tags as attribute-value pairs to the xml elements: ``` <s> <w pos="DET">The</w> <w pos="N">house</w> <w pos="V">is</w> <w pos="ADJ">big</w> <w pos="PUNC">.</w> </s> ``` I hope this sample in English makes it clear (I'm actually working on historical Welsh).
Add POS tags as attribute to xml element
[ "", "python", "xml", "pos-tagger", "" ]
I have the following multi-loop situation: ``` notify=dict() for m in messages: fields=list() for g in groups: fields.append(func(g,m)) notify[m.name]=fields return notify ``` Is there a way to write the above as a comprehension or map that would look better (and hopefully perform better too)?
Assuming you really mean notify to accumulate all the results ``` return {m.name: [func(g, m) for g in groups] for m in messages} ```
``` from itertools import product results = [func(g,m) for m,g in product(messages,groups)] ``` EDIT I think you may actually want a dict of dicts, not a dict of lists: ``` from collections import defaultdict from itertools import product results = defaultdict(dict) for m,g in product(messages,groups): results[m.name][g] = func(g,m) ``` Or borrowing from gnibbler: ``` return {m.name: {g:func(g,m) for g in groups} for m in messages} ``` Now you can just use `results[msgname][groupname]` to get the value of `func(g,m)`.
How do we use comprehension here
[ "", "python", "" ]
Not sure if there is an easy way to split the following string: ``` 'school.department.classes[cost=15.00].name' ``` Into this: ``` ['school', 'department', 'classes[cost=15.00]', 'name'] ``` Note: I want to keep `'classes[cost=15.00]'` intact.
``` >>> import re >>> text = 'school.department.classes[cost=15.00].name' >>> re.split(r'\.(?!\d)', text) ['school', 'department', 'classes[cost=15.00]', 'name'] ``` More specific version: ``` >>> re.findall(r'([^.\[]+(?:\[[^\]]+\])?)(?:\.|$)', text) ['school', 'department', 'classes[cost=15.00]', 'name'] ``` Verbose: ``` >>> re.findall(r'''( # main group [^ . \[ ]+ # 1 or more of anything except . or [ (?: # (non-capture) optional [x=y,...] \[ # start [ [^ \] ]+ # 1 or more of any non ] \] # end ] )? # this group [x=y,...] is optional ) # end main group (?:\.|$) # find a dot or the end of string ''', text, flags=re.VERBOSE) ['school', 'department', 'classes[cost=15.00]', 'name'] ```
Skip dots within brackets: ``` import re s='school.department.classes[cost=15.00].name' print re.split(r'[.](?![^][]*\])', s) ``` Output: ``` ['school', 'department', 'classes[cost=15.00]', 'name'] ```
splitting a dot delimited string into words but with a special case
[ "", "python", "regex", "parsing", "split", "" ]
With the following table: ``` RECORD --------------------- NAME VALUE --------------------- Bill Clinton 100 Bill Clinton 95 Bill Clinton 90 Hillary Clinton 90 Hillary Clinton 95 Hillary Clinton 85 Monica Lewinsky 70 Monica Lewinsky 80 Monica Lewinsky 90 ``` Can I, with JPA (JPQL or Criteria), select the following output? ``` Bill Clinton 100 Hillary Clinton 95 Monica Lewinsky 90 ``` I mean, `ORDER BY` the maximum `VALUE`, grouped by `NAME`.
The query itself: ``` SELECT Name, MAX(value) value FROM record GROUP BY Name ORDER BY Value DESC ``` Output: ``` | NAME | VALUE | --------------------------- | Bill Clinton | 100 | | Hillary Clinton | 95 | | Monica Lewinsky | 90 | ``` **[SQLFiddle](http://sqlfiddle.com/#!2/fb176/1)** I'm not an expert in JPA, but something along these lines might work: ``` List<Object[]> results = entityManager .createQuery("SELECT Name, MAX(value) maxvalue FROM record GROUP BY Name ORDER BY Value DESC") .getResultList(); for (Object[] result : results) { String name = (String) result[0]; int maxValue = ((Number) result[1]).intValue(); } ```
Because JPQL queries operate on entities, the following mapping is assumed: ``` @Entity @Table(name="your_table") public class YourEntity { @Id private int id; private String name; private int value; ... } ``` For such a mapping, the query is as follows: ``` SELECT e.name, MAX(e.value) FROM YourEntity e GROUP BY e.name ORDER BY MAX(e.value) DESC ``` The result of such a query is a List of Object arrays. The first element in each array is the name and the second element is the value (as in the select).
ORDER BY maximum with grouped by
[ "", "mysql", "sql", "jpa", "criteria", "jpql", "" ]
This is basic again, and I've searched throughout the website but I can't find anything that is specifically about this. The problem is to write a function that takes an integer and returns a certain number of multiples of that integer, as a list. For example, if you were to input 2 as your number parameter and 4 as your multiples parameter, your function should return a list containing 2, 4, 6, and 8. What I have so far: ``` def multipleList(number, multiples): mult = range(number, multiples, number) print mult ``` A test case would be: ``` print multipleList(2, 9) >>> [2, 4, 6, 8, 10, 12, 14, 16, 18] ``` The answer I get: ``` >>> [2, 4, 6, 8] None ``` I know that range has the format (start, stop, step). But how do you tell it to stop after a certain number of times instead of at a certain number?
Use `(number*multiples)+1` as the end value in `range`: ``` >>> def multipleList(number, multiples): mult = range(number, (number*multiples)+1 , number) return mult ... >>> print multipleList(2,9) [2, 4, 6, 8, 10, 12, 14, 16, 18] >>> print multipleList(3, 7) [3, 6, 9, 12, 15, 18, 21] ``` The default return value of a function is `None`; since you're not returning anything in your function, it is going to return `None`. Instead of printing `mult` you should return it. ``` >>> def f():pass >>> print f() None #default return value ```
try `range(number, number*multiples+1, number)` instead: ``` def multipleList(number, multiples): return list(range(number, number*multiples+1, number)) print(multipleList(2, 9)) ``` Output: ``` [2, 4, 6, 8, 10, 12, 14, 16, 18] ```
multiples sequence using range
[ "", "python", "" ]
I'm looking for an easy way to find the minimum distance between two integer intervals using python. For example, the minimum between [0,10] and [12,20] would be 2. If the two intervals overlap in any way, the distance would be 0. Any suggestions on an easy way to do this? I can't help but think there must be a clean, 'pythonic' way to get at this question.
``` def solve(r1, r2): # sort the two ranges such that the range with the smaller first element # is assigned to x and the bigger one is assigned to y x, y = sorted((r1, r2)) # now if x[1] lies between x[0] and y[0] (x[1] != y[0] but can be equal to x[0]) # then the ranges are not overlapping; return the difference of y[0] and x[1], # otherwise return 0 if x[0] <= x[1] < y[0] and all( y[0] <= y[1] for y in (r1,r2)): return y[0] - x[1] return 0 ... >>> solve([0,10],[12,20]) 2 >>> solve([5,10],[1,5]) 0 >>> solve([5,10],[1,4]) 1 ```
``` def interval_distance(x, y): # x and y are intervals such as [0, 10]; assumes x starts no later than y dist = min(y) - max(x) if dist > 0: return dist else: return 0 ```
Finding integer distance between two intervals
[ "", "python", "integer", "intervals", "" ]
What is wrong in my code? I can't get my expected result. I am trying to remove all the "#" entries in the list. funds_U is the list of data: ``` In [3]: funds_U Out[3]: [u'#', u'#', u'MMFU_U', u'#', u'#', u'AAI_U', u'TGI_U', u'JAS_U', u'TAG_U', u'#', u'#', u'AAT_U', u'BGR_U', u'BNE_U', u'IGE_U', u'#', u'#', u'DGF_U', u'BHC_U', u'FCF_U', u'SHK_U', u'VCF_U', u'#', u'JEM_U', u'SBR_U', u'TEM_U', u'#', u'#', u'BAB_U', u'BGA_U', u'#'] ``` The following is the code: ``` In [4]: for fund_U in funds_U[:]: ...: funds_U.remove(u"#") ...: ``` The following is the error: ``` ValueError Traceback (most recent call last) <ipython-input-4-9aaa02e32e76> in <module>() 1 for fund_U in funds_U[:]: ----> 2 funds_U.remove(u"#") 3 ValueError: list.remove(x): x not in list ```
As per the [documentation](http://docs.python.org/2/tutorial/datastructures.html#more-on-lists), if the item doesn't exist in the list, `remove()` will throw an error. Right now your code iterates through every item in the list and tries to remove that many `#`s. Since not every item is a `#`, `remove()` will throw an error as the list runs out of `#`s. Try a [list comprehension](http://docs.python.org/2/tutorial/datastructures.html#list-comprehensions) like this: ``` funds_U = [x for x in funds_U if x != u'#'] ``` This will make a new list that consists of every element in `funds_U` that is not `u'#'`.
I would do it like this: ``` new = [item for item in funds_U if item!=u'#'] ``` This is a [list-comprehension](http://docs.python.org/2/tutorial/datastructures.html#list-comprehensions). It goes through every item in funds_U and adds it to the new list if it's not `u'#'`.
ValueError: list.remove(x): x not in list
[ "", "python", "" ]
I am producing some plots in matplotlib and would like to add explanatory text for some of the data. I want to have a string inside my legend as a separate legend item above the '0-10' item. Does anyone know if there is a possible way to do this? ![enter image description here](https://i.stack.imgur.com/TDuGh.png) This is the code for my legend: `ax.legend(['0-10','10-100','100-500','500+'],loc='best')`
Sure. `ax.legend()` has a two-argument form that accepts a list of objects (handles) and a list of strings (labels). Use a dummy object (aka a ["proxy artist"](http://matplotlib.org/users/legend_guide.html#using-proxy-artist)) for your extra string. I picked a `matplotlib.patches.Rectangle` with no fill and 0 linewidth below, but you could use any supported artist. For example, let's say you have 4 bar objects (since you didn't post the code used to generate the graph, I can't reproduce it exactly). ``` import numpy as np import matplotlib.pyplot as plt from matplotlib.patches import Rectangle fig = plt.figure() ax = fig.add_subplot(111) bar_0_10 = ax.bar(np.arange(0,10), np.arange(1,11), color="k") bar_10_100 = ax.bar(np.arange(0,10), np.arange(30,40), bottom=np.arange(1,11), color="g") # create blank rectangle extra = Rectangle((0, 0), 1, 1, fc="w", fill=False, edgecolor='none', linewidth=0) ax.legend([extra, bar_0_10, bar_10_100], ("My explanatory text", "0-10", "10-100")) plt.show() ``` ![example output](https://i.stack.imgur.com/FQn2U.png)
Alternative solution, kind of dirty but pretty quick. ``` import pylab as plt X = range(50) Y = range(50) plt.plot(X, Y, label="Very straight line") # Create empty plot with blank marker containing the extra label plt.plot([], [], ' ', label="Extra label on the legend") plt.legend() plt.show() ``` [![enter image description here](https://i.stack.imgur.com/H1VOK.png)](https://i.stack.imgur.com/H1VOK.png)
Is it possible to add a string as a legend item
[ "", "python", "matplotlib", "pandas", "legend", "legend-properties", "" ]
I have a query like this: ``` SELECT TV.Descrizione as TipoVers, sum(ImportoVersamento) as ImpTot, count(*) as N, month(DataAllibramento) as Mese FROM PROC_Versamento V left outer join dbo.PROC_TipoVersamento TV on V.IDTipoVersamento = TV.IDTipoVersamento inner join dbo.PROC_PraticaRiscossione PR on V.IDPraticaRiscossioneAssociata = PR.IDPratica inner join dbo.DA_Avviso A on PR.IDDatiAvviso = A.IDAvviso where DataAllibramento between '2012-09-08' and '2012-09-17' and A.IDFornitura = 4 group by V.IDTipoVersamento,month(DataAllibramento),TV.Descrizione order by V.IDTipoVersamento,month(DataAllibramento) ``` This query *must* always return something. If no result is produced, a ``` 0 0 0 0 ``` row must be returned. How can I do this? Using *isnull* on every selected field isn't useful here.
Use a derived table with one row and do an outer apply to your other table / query. Here is a sample with a table variable `@T` in place of your real table. ``` declare @T table ( ID int, Grp int ) select isnull(Q.MaxID, 0) as MaxID, isnull(Q.C, 0) as C from (select 1) as T(X) outer apply ( -- Your query goes here select max(ID) as MaxID, count(*) as C from @T group by Grp ) as Q order by Q.C -- order by goes to the outer query ``` That will make sure you always have at least one row in the output. Something like this, using your query: ``` select isnull(Q.TipoVers, '0') as TipoVers, isnull(Q.ImpTot, 0) as ImpTot, isnull(Q.N, 0) as N, isnull(Q.Mese, 0) as Mese from (select 1) as T(X) outer apply ( SELECT TV.Descrizione as TipoVers, sum(ImportoVersamento) as ImpTot, count(*) as N, month(DataAllibramento) as Mese, V.IDTipoVersamento FROM PROC_Versamento V left outer join dbo.PROC_TipoVersamento TV on V.IDTipoVersamento = TV.IDTipoVersamento inner join dbo.PROC_PraticaRiscossione PR on V.IDPraticaRiscossioneAssociata = PR.IDPratica inner join dbo.DA_Avviso A on PR.IDDatiAvviso = A.IDAvviso where DataAllibramento between '2012-09-08' and '2012-09-17' and A.IDFornitura = 4 group by V.IDTipoVersamento,month(DataAllibramento),TV.Descrizione ) as Q order by Q.IDTipoVersamento, Q.Mese ```
Use [COALESCE](https://stackoverflow.com/questions/13366488/coalesce-function-in-tsql). It returns the first non-null value. E.g. ``` SELECT COALESCE(TV.Desc, 0)... ``` Will return 0 if TV.DESC is NULL.
Replace no result
[ "", "sql", "t-sql", "" ]
I have two arrays of the same length: ``` x = [2,3,6,100,2,3,5,8,100,100,5] y = [2,3,4,5,5,5,2,1,0,2,4] ``` I selected the positions where x==100 this way: ``` x100=np.where(x==100) y100=y[x100] ``` How is it possible to get the values of y where x==100 (that is, y=5,0,2)? The code above doesn't give me the values I want. How can I solve the problem?
x and y should be numpy arrays: ``` x = np.array([2,3,6,100,2,3,5,8,100,100,5]) y = np.array([2,3,4,5,5,5,2,1,0,2,4]) ``` Then your code should work as you expect.
Your code works fine when *actually using `numpy` arrays*. You can also write it more succinctly like so. ``` >>> import numpy as np >>> x = np.array([2,3,6,100,2,3,5,8,100,100,5]) >>> y = np.array([2,3,4,5,5,5,2,1,0,2,4]) >>> y[x == 100] array([5, 0, 2]) ```
Select values in arrays
[ "", "python", "numpy", "" ]
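To make the accepted point concrete, here is a minimal sketch (reusing the question's data) showing that, once `x` and `y` are real NumPy arrays, `np.where` indexing and a plain boolean mask give the same values:

```python
import numpy as np

x = np.array([2, 3, 6, 100, 2, 3, 5, 8, 100, 100, 5])
y = np.array([2, 3, 4, 5, 5, 5, 2, 1, 0, 2, 4])

idx = np.where(x == 100)[0]   # integer positions of the matches
by_index = y[idx]
by_mask = y[x == 100]         # same result, skipping np.where entirely

print(idx, by_index, by_mask)
```

With plain Python lists, `x == 100` is just `False`, which is why the original attempt failed.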
I couldn't find a proper answer to my question. So the deal is this: I need to print a 2D array, but each cell is a list of size 2. The first value in this list is 'H' or 'S' for hidden or seen. The second is the actual value. I need to print each line with the format ("%-2s %-2s... %-2s"): if the first value is 'H', print 'H'; else print the second value. Please help me accomplish this task, thank you! I tried the following code: ``` print ' ' , ''.join('%-2s ' % i for i in range(self.gameBoard.width)) for i in range(self.gameBoard.height): print '%-2s'%i, ''.join('%-2s ' % v[1] for v in self.gameBoard.Matrix[i] if v[0] == 'S') ``` "i" is for the line number.
Your solution seems to be working; you just need to change the last line from: ``` print '%-2s'%i, ''.join('%-2s ' % v[1] for v in self.gameBoard.Matrix[i] if v[0] == 'S') ``` to: ``` print '%-2s'%i, ''.join('%-2s ' % (v[1] if v[0] == 'S' else 'H') for v in self.gameBoard.Matrix[i]) ```
It's better to use enumerate to iterate over the lines directly ``` print ' ' , ''.join('%-2s ' % i for i in range(self.gameBoard.width)) for i, row in enumerate(self.gameBoard.Matrix): print '%-2s'%i, ''.join('%-2s '%(v[1] if V[0]==S else 'H') for v in row) ``` It's preferred to use `format` instead of `%` formatting ``` print ' ' , ' '.join(format(i, '2') for i in range(self.gameBoard.width)) for i, row in enumerate(self.gameBoard.Matrix): print '%-2s'%i, ' '.join(format(v[1] if V[0]==S else 'H', '2') for v in row) ```
Printing in python with if else conditions and correct spacing
[ "", "python", "python-2.7", "" ]
I'm working with a view, and can't use temp tables. Is it possible to do: > SELECT \* FROM table1 UNION SELECT \* FROM (SELECT \* FROM table 3) I realize it's bad coding practice to select \*, I'm just using it as an example. Any help would be appreciated!
That query parses as: ``` (SELECT * FROM table1) UNION (SELECT * FROM (SELECT * FROM table 3)) ``` In SQL Server, this will return a missing alias error. So, add in the alias: ``` (SELECT * FROM table1) UNION (SELECT * FROM (SELECT * FROM table 3) t) ```
Yes — if both queries return the same number of columns, it will work. Or try code like this: ``` SELECT A.COL1, A.COL2 FROM TABLE1 A UNION SELECT B.COL1, B.COL2 FROM (SELECT C.COL1, C.COL2 FROM TABLE3) ```
SQL SELECT Union SELECT FROM (Select...)
[ "", "sql", "sql-server", "t-sql", "" ]
I have a list of phone number prefixes defined for large number of zones (in query defined by gvcode and cgi). I need to efficiently find a longest prefix that matches given number PHONE\_NR. I use inverted LIKE clause on field digits (which contains prefixes in form +48%, +49%, +1%, +1232% and so on). Therefore I can't use normal index on that field. I managed to get substantial improvement by using IOT on gvcode and cgi field (which are part (first two cols) of primary key). I also looked at some oracle text indexes but can't find one that will match longer input with shorter prefix in the table. Is there any other way to perform such search that is faster than this approach. Here is the query which gives a list of all matched prefixes (I sort it afterwards on digits length). ``` select t.gvcode, t.digits from NUMBERS t where t.gvcode=ZONE_SET_CODE and t.cgi=cgi_f and ( PHONE_NR like t.digits) order by length(digits) desc ```
In addition to the index on "digits" you can create the index on `rpad(substr(digits,1,length(digits)-1), 10, '9')`. "10" is the maximum length that you want to support. You will add an additional condition to the where clause: `rpad(substr(digits,1,length(digits)-1), 10, '9') >= PHONE_NR` Your SQL would be: ``` select t.gvcode, t.digits from NUMBERS t where t.gvcode=ZONE_SET_CODE and t.cgi=cgi_f and PHONE_NR like t.digits and substr(digits, 1, length(digits)-1) <= PHONE_NR and rpad(substr(digits,1,length(digits)-1), 10, '9') >= PHONE_NR order by length(digits) desc ``` Here is an example in [sqlfiddle](http://www.sqlfiddle.com/#!4/ef6fa/5)
I might sound stupid, but when I ran into this problem I went the most non-space-efficient, brute-force way: Let's say: ``` L=length of longest prefix to match (without obvious +, of course) ``` Add `L` additional fields, naming them, for example, `P1, P2,...,PL` Update those fields with ``` UPDATE NUMBERS set P1=SUBSTR(PHONE_NR,1,1), P2=SUBSTR(PHONE_NR,1,2), ..., PL=SUBSTR(PHONE_NR,1,L) ``` (in the future you can do this in an `INSERT OR UPDATE` trigger too) Now you have L fields to create indexes on and compare to anything you like.
Fastest way to search for longest prefix in ORACLE
[ "", "sql", "oracle", "algorithm", "string-matching", "" ]
I need to move data from one table to another in my Android app I would like to use the following sql: ``` insert into MYTABLE2 select id, STATUS risposta, DATETIME('now') data_ins from MYTABLE 2 ``` Unfortunately in table MYTABLE2 there is an \_ID column AUTOINCREMENT. What could I do? Thanks. **EDIT:** this is my MYTABLE2 the, the table I would like to populate with data from another table: ``` CREATE TABLE "ANSWERS" ("_id" INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL , "ID_QUESTION" INTEGER,"DATE_INS" DATETIME DEFAULT (CURRENT_DATE) , "ANSWER" INTEGER) ```
explicitly specify the column name in the `INSERT` clause, ``` INSERT INTO destinationTable (risposta, data_ins) SELECT STATUS risposta, DATETIME('now') data_ins FROM sourceTable ```
This might help, a query from one table to another and it will also check if the selected column (id) already exist in another table. SQLite QUERY: ``` INSERT INTO MYTABLE2(id,data_ins ) SELECT id, data_ins FROM MYTABLE2 WHERE id NOT IN ( SELECT id FROM MYTABLE1) ``` Android: ``` String select_insert_query = "INSERT INTO " + TABLE_MYTABLE2 + "( " + ID + "," + DATA_INS + ") SELECT " + ID + "," + DATA_INS + " FROM " + TABLE_MYTABLE2 + " WHERE " + ID + " NOT IN (SELECT " + ID + " FROM " + TABLE_MYTABLE1 + ")"; SQLiteDatabase db = this.getWritableDatabase(); Cursor cursor = db.rawQuery(select_insert_query, null); cursor.close(); ```
sqlite insert into table select * from
[ "", "android", "sql", "sqlite", "etl", "" ]
I want to have a function that wraps an object in an iterable one, in order to allow the clients of the function to treat collections and single objects the same way. I did the following: ``` def to_iter(obj): try: iter(obj) return obj except TypeError: return [obj] ``` Is there a more pythonic way to do this? What if `obj` is a string and I want to treat strings as single objects? Should I use `isinstance` instead of `iter`?
Your approach is good, though note that it treats a string as an iterable, so strings pass through unchanged: ``` try: iter(obj) except TypeError, te: obj = [obj] ``` Another thing you can check for is: ``` if not hasattr(obj, "__iter__"): # True for anything non-iterable - same problem with strings obj = [obj] return obj ``` To also treat string types as single objects: ``` import types if isinstance(obj, types.StringTypes) or not hasattr(obj, "__iter__"): obj = [obj] return obj ```
Here's a general solution with some doctests to demonstrate: ``` def is_sequence(arg): """ SYNOPSIS Test if input is iterable but not a string. DESCRIPTION Type checker. True for any object that behaves like a list, tuple or dict in terms of ability to iterate over its elements; but excludes strings. See http://stackoverflow.com/questions/1835018 PARAMETERS arg Object to test. RETURNS True or False EXAMPLES ## string >>> is_sequence('string') False ## list >>> is_sequence([1, 2, 3,]) True ## set >>> is_sequence(set([1, 2, 3,])) True ## tuple >>> is_sequence((1, 2, 3,)) True ## dict >>> is_sequence(dict(a=1, b=2)) True ## int >>> is_sequence(123) False LIMITATIONS TBD """ return (not hasattr(arg, "strip") and hasattr(arg, "__iteritems__") or hasattr(arg, "__iter__")) def listify(arg): """ SYNOPSIS Wraps scalar objects in a list; passes through lists without alteration. DESCRIPTION Normalizes input to always be a list or tuple. If already iterable and not a string, pass through. If a scalar, make into a one-element list. If a scalar is wrapped, the same scalar (not a copy) is preserved. PARAMETERS arg Object to listify. RETURNS list EXAMPLES >>> listify(1) [1] >>> listify('string') ['string'] >>> listify(1, 2) Traceback (most recent call last): ... TypeError: listify() takes exactly 1 argument (2 given) >>> listify([3, 4,]) [3, 4] ## scalar is preserved, not copied >>> x = 555 >>> y = listify(x) >>> y[0] is x True ## dict is not considered a sequence for this function >>> d = dict(a=1,b=2) >>> listify(d) [{'a': 1, 'b': 2}] >>> listify(None) [None] LIMITATIONS TBD """ if is_sequence(arg) and not isinstance(arg, dict): return arg return [arg,] ```
python, wrap and object into a list if not is an iterable
[ "", "python", "loops", "composite", "" ]
Consider the following table structure for an imaginary table named `score`: ``` player_name |player_lastname |try |score primary key: (player_name,player_lastname,try) ``` (don't discuss the table schema, it's just an example) This table holds the scores of all players - every player should be able to play either one OR two times. Now, how could I fetch data about every player's last try only (i.e. first tries should be ignored for those who played more than once)? An example of what I'm trying to achieve: ``` player_name,player_lastname,try,score ===================================== bart, simpson,1,250 lisa,simpson,1,150 lisa,simpson,2,250 homer,simpson,1,300 homer,simpson,2,350 maggi,simpson,1,50 ``` The result should be: ``` player_name,player_lastname,try,score ===================================== bart, simpson,1,250 lisa,simpson,2,250 homer,simpson,2,350 maggi,simpson,1,50 ```
A Rank function can solve this: ``` SELECT player_name,player_lastname,TRY,score FROM (SELECT player_name,player_lastname,TRY,score,RANK() OVER (PARTITION BY player_name, Player_Lastname ORDER BY TRY DESC)AS try_rank FROM score )sub WHERE try_rank = 1 ``` I'm assuming 'try' is the number that can be 1/2. Edit, forgot Partition BY
This will probably have the best performance ``` select distinct on (player_name, player_lastname) player_name, player_lastname, try, score from score order by 1, 2, 3 desc ```
select only last try
[ "", "sql", "postgresql", "greatest-n-per-group", "" ]
Is there any way to specify the index that I want for a new row, when appending the row to a dataframe? The original documentation provides [the following example](http://pandas.pydata.org/pandas-docs/stable/merging.html): ``` In [1301]: df = DataFrame(np.random.randn(8, 4), columns=['A','B','C','D']) In [1302]: df Out[1302]: A B C D 0 -1.137707 -0.891060 -0.693921 1.613616 1 0.464000 0.227371 -0.496922 0.306389 2 -2.290613 -1.134623 -1.561819 -0.260838 3 0.281957 1.523962 -0.902937 0.068159 4 -0.057873 -0.368204 -1.144073 0.861209 5 0.800193 0.782098 -1.069094 -1.099248 6 0.255269 0.009750 0.661084 0.379319 7 -0.008434 1.952541 -1.056652 0.533946 In [1303]: s = df.xs(3) In [1304]: df.append(s, ignore_index=True) Out[1304]: A B C D 0 -1.137707 -0.891060 -0.693921 1.613616 1 0.464000 0.227371 -0.496922 0.306389 2 -2.290613 -1.134623 -1.561819 -0.260838 3 0.281957 1.523962 -0.902937 0.068159 4 -0.057873 -0.368204 -1.144073 0.861209 5 0.800193 0.782098 -1.069094 -1.099248 6 0.255269 0.009750 0.661084 0.379319 7 -0.008434 1.952541 -1.056652 0.533946 8 0.281957 1.523962 -0.902937 0.068159 ``` where the new row gets the index label automatically. Is there any way to control the new label?
The `name` of the Series becomes the `index` of the row in the DataFrame: ``` In [99]: df = pd.DataFrame(np.random.randn(8, 4), columns=['A','B','C','D']) In [100]: s = df.xs(3) In [101]: s.name = 10 In [102]: df.append(s) Out[102]: A B C D 0 -2.083321 -0.153749 0.174436 1.081056 1 -1.026692 1.495850 -0.025245 -0.171046 2 0.072272 1.218376 1.433281 0.747815 3 -0.940552 0.853073 -0.134842 -0.277135 4 0.478302 -0.599752 -0.080577 0.468618 5 2.609004 -1.679299 -1.593016 1.172298 6 -0.201605 0.406925 1.983177 0.012030 7 1.158530 -2.240124 0.851323 -0.240378 10 -0.940552 0.853073 -0.134842 -0.277135 ```
df.loc will do the job : ``` >>> df = pd.DataFrame(np.random.randn(3, 2), columns=['A','B']) >>> df A B 0 -0.269036 0.534991 1 0.069915 -1.173594 2 -1.177792 0.018381 >>> df.loc[13] = df.loc[1] >>> df A B 0 -0.269036 0.534991 1 0.069915 -1.173594 2 -1.177792 0.018381 13 0.069915 -1.173594 ```
Pandas: Appending a row to a dataframe and specify its index label
[ "", "python", "pandas", "" ]
In Perl's expect module, I can do this: ``` my $login = new Expect; $login->slave->clone_winsize_from(\*STDIN); ``` What's the equivalent in Python? I see an option for child.setwinsize(row, column), but how do I clone it from STDIN like I can do in Perl?
So I found out how to do this: ``` rows, cols = map(int, os.popen('stty size', 'r').read().split()) child.setwinsize(rows, cols) ```
You can use `raw_input()`, `readline()` or `sys.stdin.readline()` depending on how you get your stdin.
How to clone pexpect winsize from current terminal?
[ "", "python", "perl", "expect", "pexpect", "" ]
I need to generate k numbers of hash value (0 .. m-1), and k numbers should be distinct. The hash values should be different based on different hash seed. I found this code, but it's too big for me to use with only one value. ``` import hashlib, uuid password = "abc" <-- key salt = str(10) # <-- hash seed value = hashlib.sha1(password + salt).hexdigest() print value # 105dee46d56df0c97ca9b6a09e59fbf63d8ceae2 ``` How can I get good k hash values between 0 and m-1? Or is it just OK to split the value into k parts to apply mod operation?
I found [mmh3](https://github.com/hajimes/mmh3) is the best option so far. ``` import mmh3 def getHash(key, m, k): result = set() seed = 1 while True: if len(result) == k: return list(result) else: b = mmh3.hash(key, seed) % m result.add(b) seed += 10 print result if __name__ == "__main__": print getHash("Hello", 100, 5) print getHash("Good Bye", 100, 5) ``` Result: ``` set([12]) set([43, 12]) set([43, 12, 29]) set([88, 43, 12, 29]) set([88, 80, 43, 12, 29]) [88, 80, 43, 12, 29] set([20]) set([2, 20]) set([2, 20, 70]) set([2, 75, 20, 70]) set([2, 75, 20, 70, 39]) [2, 75, 20, 70, 39] ```
This is the code that works. ``` import hashlib, uuid # http://stackoverflow.com/questions/209513/convert-hex-string-to-int-in-python def getHash(key, hashseed, m, k): """ We use sha256, and it generates 64 bytes of hash number, so k should be 2 <= k <= 32 However, because of duplicity the real limit should be much lower. Todo: You can concatenate more sha256 values to get more k values """ salt = str(hashseed) hashed_password = hashlib.sha256(key + salt).hexdigest() if k > 32: raise Error("k should be less than 32") if k <= 1: raise Error("k should be more than 2") result = [] index = 0 # make the non-overwrapping hash value below m while True: value = int(hashed_password[index:index+2], 16) % m index += 2 # second loop for detecting the duplicate value while True: if value not in result: result.append(value) break # Try the next value value = int(hashed_password[index:index+2], 16) % m index += 2 if len(result) == k: break return result if __name__ == "__main__": res = getHash("abc", 1, 10, 5) # seed:1, m = 10, k = 5 assert len(res) == 5 ```
Generating k numbers of hash value with python
[ "", "python", "hash", "" ]
I have this list of countries retrieved from the database, based on one column (`country_name`) of the `country` table: ``` list_countries = Country.objects.values_list('country_name', flat=True).distinct() ``` The result is like this: ``` [u'', u'China', u'France', u'Germany', ...] ``` Some of the values in the database are empty. How do I remove the empty values, so that the result contains only non-empty country names (cf `country_name != ''`)?
You can use [Q](https://docs.djangoproject.com/en/dev/topics/db/queries/#complex-lookups-with-q-objects) object, ``` from django.db.models import Q list_countries = Country.objects.filter(~Q(country_name='')).values_list('country_name', flat=True).distinct() ```
I agree with Adem's answer except for one change. ``` from django.db.models import Q list_countries = Country.objects.filter(~Q(country_name='')).distinct().values_list('country_name', flat=True) ``` This will be slightly better than Adem's answer.
Empty values retrieved with values_list
[ "", "python", "django", "" ]
I'm trying to update a column in my table to use the values 1 through (a max number decided by a count of records). I don't know if I'm explaining this right, so I set up a SQLFiddle with the data I'm trying to update. **[SQL FIDDLE](http://sqlfiddle.com/#!3/a09ed/8)** I want to set the Version column to 1 through (the max number). Is there some way to rewrite this query to a scale the Version number? As in, I want the first record to use 1, the second record to use 2, and so on... ``` UPDATE Documents SET Version = 1 ```
You can do it with a CTE and no joins: ``` with RankedDocument as ( select * , rn = row_number() over (order by ID) from Documents ) update RankedDocument set Version = rn ``` [SQL Fiddle with demo](http://sqlfiddle.com/#!3/a09ed/32).
From what I can tell, you want every record from `Documents` to have a `version` number which is a number moving from 1 ..... N. You could use a temporary table and `ROW_NUMBER` technique to get the incremental `version` and then `UPDATE` it back to your original table. ``` CREATE TABLE #Temp (ID int, Version int) INSERT INTO #Temp (ID, Version) SELECT ID, ROW_NUMBER() OVER (ORDER BY ID ASC) FROM Documents UPDATE Doc SET Version = TT.Version FROM Documents AS Doc INNER JOIN #Temp AS TT ON Doc.ID = TT.ID DROP TABLE #Temp ``` If I understand you correctly..
How to update a column of a table to a scaling value
[ "", "sql", "sql-server", "" ]
I'm looking for an SQL statement that will return only rows of my table whose Name field contains special characters (single quotes). I used ``` SELECT * FROM 'table' WHERE Name REGEXP '"$' ``` What am I missing here?
[SQL Fiddle](http://sqlfiddle.com/#!2/aaaa7/1) **MySQL 5.5.30 Schema Setup**: ``` create table test(name varchar(10)); insert into test values('aaa '''' bb'),('bsbds'); ``` **Query 1**: ``` select * from test where name regexp "'" ``` **[Results](http://sqlfiddle.com/#!2/aaaa7/1/0)**: ``` | NAME | ------------- | aaa '' bb | ```
``` SELECT * FROM 'table' WHERE Name like "%\'%" ```
mysql query for finding names include special character single quotes
[ "", "mysql", "sql", "" ]
I have a dictionary which I convert to a bytearray, and since bytearrays are immutable (can't be modified) I try to make a list equal to each index in the bytearray. ``` a = {1:'a', 2:'b', 3:'c'} b = bytearray(str(a), 'ASCII') c = [] for i in b: c[i] = b[i] # Error on this line print(str(c)) ``` The problem is it keeps printing `IndexError: bytearray index out of range`. How and why is the bytearray out of range?
If I correctly understood your question, you can simply use `c = list(b)`: ``` a = {1:'a', 2:'b', 3:'c'} b = bytearray(str(a), 'ASCII') c = list(b) print(c) ``` Output: ``` [123, 49, 58, 32, 39, 97, 39, 44, 32, 50, 58, 32, 39, 98, 39, 44, 32, 51, 58, 32, 39, 99, 39, 125] ``` In order to understand why you get this error, see this [answer](https://stackoverflow.com/questions/16843641/how-do-i-copy-the-contents-of-a-bytearray-to-a-list-python/16843708#16843708).
`i` is the **value** in `b` **not the index** ``` b = [1,5,20] for i in b: print i #prints 1,then 5 , then 20 ```
How do I copy the contents of a bytearray to a list (Python)?
[ "", "python", "dictionary", "python-3.x", "arrays", "" ]
I have a list with multiple sublists. ``` l = [[a,b,c],[3,5,0],[3,1,0],...] # I do not know how many sublists there are beforehand. ``` How can I iterate over the first item of each sublist? ``` e.g. a,3,3 then b,5,1 ... ``` I'd like to do something like: ``` for x,y,z... in zip(l[1],l[2],l[3]...) # "..." representing other sublists do something with x,y,z... if condition... ``` Of course this won't work because I do not know how many sublists exist beforehand. Ultimately, I want to filter the existing sublists: if, at the same index, all numerical values are equal to zero, that column should be dropped. For instance, c,0,0 would be removed (because all numbers are zeroes), but a,3,3 and b,5,1 remain. At the end, I need 3 new filtered sublists to contain: ``` lnew = [[a,b],[3,5],[3,1]] ```
From [docs](http://docs.python.org/2/library/functions.html#zip): > `zip()` in conjunction with the `*` operator can be used to unzip a list ``` >>> lis = [['a','b','c'],[3,5,0],[3,1,0]] >>> for x,y,z in zip(*lis): print x,y,z ... a 3 3 b 5 1 c 0 0 ``` > I want to filter the existing sublists if at the same index, all > numerical values are equal to zero ``` >>> zipp = [x for x in zip(*lis) if any(y != 0 for y in x \ if isinstance (y,(int,float)) ) ] >>> zip(*zipp) [('a', 'b'), (3, 5), (3, 1)] ```
Eh, the answers here seem to be good, but I'll offer an alternative: ``` l = [['a', 'b', 'c'], [3, 5, 0], [3, 1, 0]] i = 0 while True: try: do_whatever(l[i][0]) i += 1 except IndexError: break ``` I realize that it's not as elegant as the other solutions, but it's nice to have a variety to choose from! And this also still works if you're still adding to the list as you iterate through it!
Python: how do i iterate over the first element of each sublist without knowing the number of sublists?
[ "", "python", "" ]
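Putting the zip/unzip idea from the accepted answer together into the full filtering step the question asks for (drop any column whose numeric entries are all zero) might look like this sketch:

```python
l = [['a', 'b', 'c'], [3, 5, 0], [3, 1, 0]]

# Transpose, keep columns that have at least one non-zero number,
# then transpose back to the original row layout.
cols = [col for col in zip(*l)
        if any(v != 0 for v in col if isinstance(v, (int, float)))]
lnew = [list(t) for t in zip(*cols)]

print(lnew)  # [['a', 'b'], [3, 5], [3, 1]]
```

Non-numeric entries (the letters) are simply ignored by the zero test, so a column is dropped only when every *number* in it is zero.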
For example, this kind of for loop in Java or C++: ``` int N = 20; for (i = 1; i < N; i = 3 * i + 1) ``` i will be 1, 4, 13. I can only use while to reproduce it: ``` while i < N: i = 3 * i + 1 ``` How can I write it in another style in Python? Sorry for my English.
There is nothing wrong with your while-loop implementation from Python's perspective. If you want a generalized loop structure that mimics the loop structure of C/C++/Java, you can do something similar: ``` >>> def loop(init, cond, incr): i = init while cond(i): yield i i=incr(i) >>> list(loop(1, lambda e:e < 20, lambda e:3*e + 1)) [1, 4, 13] ``` Once you create the loop routine, you can use it to create any custom loop format you desire: ``` for i in loop(1, lambda e:e < 20, lambda e:3*e + 1): print i ```
Here's another way to do it. It is more specialized than Abhijit's answer. ``` def timesThreePlusOne(init,limit): i = init while i < limit: yield i i = (3 * i) + 1 N = 20 for i in timesThreePlusOne(1,N): print i ```
How to convert this type of for loop in python
[ "", "python", "loops", "for-loop", "" ]
How can this code be optimized? Maybe it is possible to write it shorter, make it more 'pythonic'? ``` ... check_pass = 0 if key == 'media': size_check_pass = 0 if some_check_func(xxx): size_check_pass = 1 ... format_checks_pass = 0 if...some_checks... : format_checks_pass = 1 if size_check_pass and format_checks_pass: check_pass = 1 if key == 'text': line_end_check_pass = 0 if (((some checks))): line_end_check_pass = 1 lenght_check_pass = 0 if len(file) < 1000: lenght_check_pass = 1 if line_end_check and lenght_check_pass: check_pass = 1 if check_pass: ... ``` The background of the code is to check each file type for different conditions.
You could use a dictionary as a sort of dispatch table perhaps. Something like ``` def media_checks(f): ... return pass def text_checks(f): ... return pass _dispatch = {} _dispatch['media'] = media_checks _dispatch['text'] = text_checks dispatch_func = _dispatch[key] check_pass = dispatch_func(your_file) ``` This would split you code up so it is easier to read and maintain. It also isolates the code for checking each file type in an individual function.
So you always have a sequence of checks. First, you can shorten your conditions: ``` ... check_pass = False # to make it more boolean if key == 'media': size_check_pass = some_check_func(xxx) format_checks_pass = ...result of your checks... check_pass = size_check_pass and format_checks_pass if key == 'text': line_end_check_pass = (((some checks))) length_check_pass = len(file) < 1000 check_pass = line_end_check and length_check_pass if check_pass: ... ``` or compress them even more: ``` ... check_pass = False # to make it more boolean if key == 'media': check_pass = some_check_func(xxx) and ...result of your checks... if key == 'text': check_pass = (((some checks))) and len(file) < 1000 if check_pass: ... ``` And then you could apply a system like the one from jamesj, but maybe with lists of conditional expressions: ``` def some_checks(): ... return True or False def other_checks(f): ... return ... _dispatch = { 'media': [lambda: some_check_func(xxx), some_checks], 'text' : [other_checks, lambda: len(file) < 1000] } dispatch_list = _dispatch[key] check_pass = all(func() for func in dispatch_list) ```
Python serial checks refactoring
[ "", "python", "" ]
I have the following problem: I want to create my own colormap (red-mix-violet-mix-blue) that maps to values between -2 and +2, and I want to use it to color points in my plot. The plot should then have the colorscale to the right. This is how I create the map so far. But I am not really sure if it mixes the colors. ``` cmap = matplotlib.colors.ListedColormap(["red","violet","blue"], name='from_list', N=None) m = cm.ScalarMappable(norm=norm, cmap=cmap) ``` That way I map the colors to the values. ``` colors = itertools.cycle([m.to_rgba(1.22), ..]) ``` Then I plot it: ``` for i in range(0, len(array_dg)): plt.plot(array_dg[i], markers.next(),alpha=alpha[i], c=colors.next()) ``` My problems are: 1. I can't plot the color scale. 2. I am not completely sure if my scale creates a continuous (smooth) colorscale.
There is an illustrative example of [how to create custom colormaps here](http://matplotlib.org/examples/pylab_examples/custom_cmap.html). The docstring is essential for understanding the meaning of `cdict`. Once you get that under your belt, you might use a `cdict` like this: ``` cdict = {'red': ((0.0, 1.0, 1.0), (0.1, 1.0, 1.0), # red (0.4, 1.0, 1.0), # violet (1.0, 0.0, 0.0)), # blue 'green': ((0.0, 0.0, 0.0), (1.0, 0.0, 0.0)), 'blue': ((0.0, 0.0, 0.0), (0.1, 0.0, 0.0), # red (0.4, 1.0, 1.0), # violet (1.0, 1.0, 0.0)) # blue } ``` Although the `cdict` format gives you a lot of flexibility, I find for simple gradients its format is rather unintuitive. Here is a utility function to help generate simple LinearSegmentedColormaps: ``` import numpy as np import matplotlib.pyplot as plt import matplotlib.colors as mcolors def make_colormap(seq): """Return a LinearSegmentedColormap seq: a sequence of floats and RGB-tuples. The floats should be increasing and in the interval (0,1). """ seq = [(None,) * 3, 0.0] + list(seq) + [1.0, (None,) * 3] cdict = {'red': [], 'green': [], 'blue': []} for i, item in enumerate(seq): if isinstance(item, float): r1, g1, b1 = seq[i - 1] r2, g2, b2 = seq[i + 1] cdict['red'].append([item, r1, r2]) cdict['green'].append([item, g1, g2]) cdict['blue'].append([item, b1, b2]) return mcolors.LinearSegmentedColormap('CustomMap', cdict) c = mcolors.ColorConverter().to_rgb rvb = make_colormap( [c('red'), c('violet'), 0.33, c('violet'), c('blue'), 0.66, c('blue')]) N = 1000 array_dg = np.random.uniform(0, 10, size=(N, 2)) colors = np.random.uniform(-2, 2, size=(N,)) plt.scatter(array_dg[:, 0], array_dg[:, 1], c=colors, cmap=rvb) plt.colorbar() plt.show() ``` ![enter image description here](https://i.stack.imgur.com/3ULpI.png) --- By the way, the `for-loop` ``` for i in range(0, len(array_dg)): plt.plot(array_dg[i], markers.next(),alpha=alpha[i], c=colors.next()) ``` plots one point for every call to `plt.plot`. This will work for a small number of points, but will become extremely slow for many points. `plt.plot` can only draw in one color, but `plt.scatter` can assign a different color to each dot. Thus, `plt.scatter` is the way to go.
*Since the methods used in other answers seems quite complicated for such easy task, here is a new answer:* Instead of a `ListedColormap`, which produces a discrete colormap, you may use a `LinearSegmentedColormap`. This can easily be created from a list using the `from_list` method. ``` import numpy as np import matplotlib.pyplot as plt import matplotlib.colors x,y,c = zip(*np.random.rand(30,3)*4-2) norm=plt.Normalize(-2,2) cmap = matplotlib.colors.LinearSegmentedColormap.from_list("", ["red","violet","blue"]) plt.scatter(x,y,c=c, cmap=cmap, norm=norm) plt.colorbar() plt.show() ``` [![enter image description here](https://i.stack.imgur.com/mJcGc.png)](https://i.stack.imgur.com/mJcGc.png) --- More generally, if you have a list of values (e.g. `[-2., -1, 2]`) and corresponding colors, (e.g. `["red","violet","blue"]`), such that the `n`th value should correspond to the `n`th color, you can normalize the values and supply them as tuples to the `from_list` method. ``` import numpy as np import matplotlib.pyplot as plt import matplotlib.colors x,y,c = zip(*np.random.rand(30,3)*4-2) cvals = [-2., -1, 2] colors = ["red","violet","blue"] norm=plt.Normalize(min(cvals),max(cvals)) tuples = list(zip(map(norm,cvals), colors)) cmap = matplotlib.colors.LinearSegmentedColormap.from_list("", tuples) plt.scatter(x,y,c=c, cmap=cmap, norm=norm) plt.colorbar() plt.show() ``` [![enter image description here](https://i.stack.imgur.com/3TtsY.png)](https://i.stack.imgur.com/3TtsY.png)
Create own colormap using matplotlib and plot color scale
[ "", "python", "matplotlib", "plot", "" ]
I have two numpy arrays: ``` p_a_colors=np.array([[0,0,0], [0,2,0], [119,103,82], [122,122,122], [122,122,122], [3,2,4]]) p_rem = np.array([[119,103,82], [122,122,122]]) ``` I want to delete all the rows from p_a_colors that appear in p_rem, so I get: ``` p_r_colors=np.array([[0,0,0], [0,2,0], [3,2,4]]) ``` I think something like this should work: ``` p_r_colors= np.delete(p_a_colors, np.where(np.all(p_a_colors==p_rem, axis=0)),0) ``` but I just don't get the axis or [:] right. I know that ``` p_r_colors=copy.deepcopy(p_a_colors) for i in range(len(p_rem)): p_r_colors= np.delete(p_r_colors, np.where(np.all(p_r_colors==p_rem[i], axis=-1)),0) ``` would work, but I am trying to avoid (Python) loops, because I also care about performance.
This is how I would do it: ``` dtype = np.dtype((np.void, (p_a_colors.shape[1] * p_a_colors.dtype.itemsize))) mask = np.in1d(p_a_colors.view(dtype), p_rem.view(dtype)) p_r_colors = p_a_colors[~mask] >>> p_r_colors array([[0, 0, 0], [0, 2, 0], [3, 2, 4]]) ``` You need to do the void dtype thing so that numpy compares rows as a whole. After that using the built-in set routines seems like the obvious way to go.
It's ugly, but ``` tmp = reduce(lambda x, y: x | np.all(p_a_colors == y, axis=-1), p_rem, np.zeros(p_a_colors.shape[:1], dtype=np.bool)) indices = np.where(tmp)[0] np.delete(p_a_colors, indices, axis=0) ``` (edit: corrected) ``` >>> tmp = reduce(lambda x, y: x | np.all(p_a_colors == y, axis=-1), p_rem, np.zeros(p_a_colors.shape[:1], dtype=np.bool)) >>> >>> indices = np.where(tmp)[0] >>> >>> np.delete(p_a_colors, indices, axis=0) array([[0, 0, 0], [0, 2, 0], [3, 2, 4]]) >>> ```
find and delete from more-dimensional numpy array
[ "", "python", "arrays", "numpy", "" ]
From this list of tuples: ``` [('IND', 'MIA', '05/30'), ('ND', '07/30'), ('UNA', 'ONA', '100'), \ ('LAA', 'SUN', '05/30'), ('AA', 'SN', '07/29'), ('UAA', 'AAN')] ``` I want to create a dictionary, which keys will be `[0]` and `[1]` value of **every third tuple**. Thus, the first key of dict created should be `'IND, MIA'`, second key `'LAA, SUN'` The final result should be: ``` {'IND, MIA': [('IND', 'MIA', '05/30'), ('ND', '07/30'), ('UNA', 'ONA', '100')],\ 'LAA, SUN': [('LAA', 'SUN', '05/30'), ('AA', 'SN', '07/29'), ('UAA', 'AAN')]} ``` If this is of any relevance, once the values in question becomes keys, they may be removed from tuple, since then I do not need them anymore. Any suggestions greatly appreciated!
``` inp = [('IND', 'MIA', '05/30'), ('ND', '07/30'), ('UNA', 'ONA', '100'), \ ('LAA', 'SUN', '05/30'), ('AA', 'SN', '07/29'), ('UAA', 'AAN')] result = {} for i in range(0, len(inp), 3): item = inp[i] result[item[0]+","+item[1]] = inp[i:i+3] print (result) ``` A dict comprehension solution is possible, but somewhat messy. To remove the key values from the tuples, replace the second loop line (`result[item[0]+ ...`) with ``` result[item[0]+","+item[1]] = [item[2:]]+inp[i+1:i+3] ``` Dict comprehension solution (a bit less messy than I initially thought :)) ``` rslt = { inp[i][0]+", "+inp[i][1]: inp[i:i+3] for i in range(0, len(inp), 3) } ``` And to add more kosher stuff into the answer, here are some useful links :): [defaultdict](http://docs.python.org/2/library/collections.html#defaultdict-objects), [dict comprehensions](http://www.python.org/dev/peps/pep-0274/)
Using the [`itertools` `grouper` recipe](http://docs.python.org/2/library/itertools.html#recipes): ``` from itertools import izip_longest def grouper(iterable, n, fillvalue=None): "Collect data into fixed-length chunks or blocks" # grouper('ABCDEFG', 3, 'x') --> ABC DEF Gxx args = [iter(iterable)] * n return izip_longest(fillvalue=fillvalue, *args) {', '.join(g[0][:2]): g for g in grouper(inputlist, 3)} ``` should do it. The `grouper()` method gives us groups of 3 tuples at a time. Removing the key values from the dictionary values too: ``` {', '.join(g[0][:2]): (g[0][2:],) + g[1:] for g in grouper(inputlist, 3)} ``` Demo on your input: ``` >>> from pprint import pprint >>> pprint({', '.join(g[0][:2]): g for g in grouper(inputlist, 3)}) {'IND, MIA': (('IND', 'MIA', '05/30'), ('ND', '07/30'), ('UNA', 'ONA', '100')), 'LAA, SUN': (('LAA', 'SUN', '05/30'), ('AA', 'SN', '07/29'), ('UAA', 'AAN'))} >>> pprint({', '.join(g[0][:2]): (g[0][2:],) + g[1:] for g in grouper(inputlist, 3)}) {'IND, MIA': (('05/30',), ('ND', '07/30'), ('UNA', 'ONA', '100')), 'LAA, SUN': (('05/30',), ('AA', 'SN', '07/29'), ('UAA', 'AAN'))} ```
Creating a dictionary from a list
[ "", "python", "list", "dictionary", "" ]
I need to update a table so that another column (UserID) is appended to the LastName column. So I have written a query like this ``` update User set LastName = +'LastName_'+ CONVERT (varchar(10),UserID) ``` and the result looks like this ``` UserID FirstName LastName 1 AALIYAH Bhatt_1 2 Mohan Kumar_2 3 varun ratna_3 4 suresh rania_4 5 AARON suresh_5 etc ...... 4500 Kalyan raju_4500 4501 raohan manish4501 ``` How can I get the last names zero-padded to the same width? Look at the last rows: UserID 4500 gives the last name raju\_4500, but UserID 1 gives the last name Bhatt\_1, whereas I would like Bhatt\_0001, like this: ``` UserID FirstName LastName 1 AALIYAH Bhatt_0001 2 Mohan Kumar_0002 3 varun ratna_0003 4 suresh rania_0004 5 AARON suresh_0005 etc ...... 4500 Kalyan raju_4500 4501 raohan manish4501 ``` Any suggestions?
This expression gives you the length of the maximum UserID: ``` select len(max(userid)) from User ``` And you can put leading zeros on a number like this: ``` select right('0000000000' + convert(varchar(10), /*number*/), /*length*/) ``` So, putting that together, I would do this: ``` declare @length int = (select len(max(userid)) from User) update User set LastName = 'LastName_' + right('0000000000' + convert(varchar(10), UserID), @length) ```
I think this will work ``` update User set LastName = +'LastName_'+ RIGHT('0000'+ CONVERT(varchar(10),UserID),4) ```
how to update a column with same sequence
[ "", "sql", "t-sql", "" ]
Python processing question -- strip out date-time patterns: I have some data from a GSM unit of the form: ``` +CMGL: 1,"REC READ","+111111111111","13/05/25,05:15:16+04",25-05-13,05:15:20, 0.668 +CMGL: 2,"REC READ","+111111111111","13/05/25,12:15:14+04",25-05-13,12:15:20, 0.875 +CMGL: 3,"REC READ","+111111111111","13/05/25,10:15:15+04",25-05-13,10:15:20, 0.679 ``` .. The data is retrieved as a single string-buffer so it's all on a single line initially. I can sort and strip data using DATA.replace(a,b), but I'm unable to delete the first 4 comma separated groups, i.e. ``` +CMGL: 1,"REC READ","+111111111111","YY/MM/DD,HH:MM:SS+DELTA" ``` My aim is to extract the data to look like this (I don't mind the wrong order of the date-time lines)- ``` 25-05-13, 05:15:20, 0.668 25-05-13, 12:15:20, 0.875 25-05-13, 10:15:20, 0.679 ``` .. Suggestions welcome
Something like this: ``` >>> strs = '+CMGL: 1,"REC READ","+111111111111","13/05/25,05:15:16+04",25-05-13,05:15:20, 0.668' >>> ", ".join( x for x in strs.split(",")[5:] ) '25-05-13, 05:15:20, 0.668' ``` or: ``` >>> ", ".join( strs.split(",",5)[-1].split(",") ) '25-05-13, 05:15:20, 0.668' ``` For multiple lines: ``` >>> strs = """+CMGL: 1,"REC READ","+111111111111","13/05/25,05:15:16+04",25-05-13,05:15:20, 0.668 +CMGL: 2,"REC READ","+111111111111","13/05/25,12:15:14+04",25-05-13,12:15:20, 0.875 +CMGL: 3,"REC READ","+111111111111","13/05/25,10:15:15+04",25-05-13,10:15:20, 0.679""" >>> >>> for line in strs.splitlines(): ... print ", ".join( line.split(",",5)[-1].split(",")) 25-05-13, 05:15:20, 0.668 25-05-13, 12:15:20, 0.875 25-05-13, 10:15:20, 0.679 ```
Use the `csv` module to process delimited files. **gsm.txt** ``` +CMGL: 1,"REC READ","+111111111111","13/05/25,05:15:16+04",25-05-13,05:15:20, 0.668 +CMGL: 2,"REC READ","+111111111111","13/05/25,12:15:14+04",25-05-13,12:15:20, 0.875 +CMGL: 3,"REC READ","+111111111111","13/05/25,10:15:15+04",25-05-13,10:15:20, 0.679 ``` the example code below ``` import csv gsm = open('gsm.txt') for row in csv.reader(gsm): print row[4:] ``` outputs ``` ['25-05-13', '05:15:20', ' 0.668'] ['25-05-13', '12:15:20', ' 0.875'] ['25-05-13', '10:15:20', ' 0.679'] ```
python string processing, strip out date-time patterns
[ "", "python", "design-patterns", "datetime", "" ]
So I have a 2D array of data taken from an excel spreadsheet which I'm currently sorting based on a column with data on criticality. ``` #rows contains my data that I'm sorting, it's a 2D array searchdict = dict(Critical=1, High=2, Medium=3, Low=4) rows.sort(key=lambda row: searchdict.get(row[11], 5)) ``` I'd like to sort based on another column if it's a tie for that column, anyone know how to approach this? Thanks in advance for any help. FYI: the other column contains numerical data
Use a tuple in your key. This method is generally considered more "pythonic" than doing two sorts in a row. ``` key=lambda row: (searchdict.get(row[11], 5), row[other_column]) ```
The best option would be to use key with python's tuple ordering. ``` #rows contains my data that I'm sorting, it's a 2D array searchdict = dict(Critical=1, High=2, Medium=3, Low=4) rows.sort(key=lambda row: (searchdict.get(row[11], 5), searchdict.get(row[<secondary column index here>], 5))) ``` This plays off of the fact that the leftmost element in a tuple is considered to be more significant during comparisons, demonstrated here: ``` >>> (6, 5) > (5, 6) True >>> (6, 5) > (6, 4) True >>> (6, 5) > (6, 6) False >>> (2, 1, 1) > (1, 1000, 1000) True ```
how to do hierarchical sorting in python?
[ "", "python", "sorting", "" ]
I'm not exactly new to Python, but I do still have trouble understanding what makes something "Pythonic" (and the converse). So forgive me if this is a stupid question, but why can't I get the size of a file by doing a len(file)? file.`__len__` is not even implemented, so it's not like it's needed for something else? Would it be confusing/inconsistent for some reason if it was implemented to return the file size?
file is an iterator. To find the number of *lines* you need to read the entire file ``` sum(1 for line in file) ``` if you want the number of *bytes* in a file, use `os.stat` eg ``` import os os.stat(filename).st_size ```
Files have a broader definition, especially in Unix, than you may be thinking. What is the length of a printer, for example? Or a CDROM drive? Both are files in /dev, and sort of in Windows. For what we normally think of as a file, what would its length be? The size of the variable? The size of the file in bytes? The latter makes more sense, but then it gets ickier. Should the size of the file's contents be listed, or its size on disk (modulus allocation unit size). The question arises again for sparse files (files that have large empty sections which take no space, but are part of the file's normally reported size, supported by some file systems like NTFS and XFS). Of course, the answer to all of those could be, "just pick one and document what you picked." Perhaps that is exactly what should be done, but to be Pythonic, something usually must be clear-cut without having to read a lot of docs. `len(string)` is mostly obvious (one may ask if bytes or characters are the return value), `len(array)` is obvious, `len(file)` maybe not quite enough.
Why no len(file) in Python?
[ "", "python", "" ]
I have a model like this: ``` class Hop(models.Model): migration = models.ForeignKey('Migration') host = models.ForeignKey(User, related_name='host_set') ``` How can I have the primary key be the combination of `migration` and `host`?
## Update Django 4.0 Django 4.0 documentation recommends using [UniqueConstraint](https://docs.djangoproject.com/en/4.0/ref/models/constraints/#uniqueconstraint) with the constraints option instead of [unique\_together](https://docs.djangoproject.com/en/4.0/ref/models/options/#unique-together). > Use **UniqueConstraint** with the constraints option instead. > > **UniqueConstraint** provides more functionality than **unique\_together**. **unique\_together** may be deprecated in the future. ``` class Hop(models.Model): migration = models.ForeignKey('Migration') host = models.ForeignKey(User, related_name='host_set') class Meta: constraints = [ models.UniqueConstraint( fields=['migration', 'host'], name='unique_migration_host_combination' ) ] ``` ## Original Answer I would implement this slightly differently. I would use a default primary key (auto field), and use the meta class property, `unique_together` ``` class Hop(models.Model): migration = models.ForeignKey('Migration') host = models.ForeignKey(User, related_name='host_set') class Meta: unique_together = (("migration", "host"),) ``` It would act as a "surrogate" primary key column. If you really want to create a multi-column primary key, look into [this app](https://github.com/simone/django-compositekey)
Currently, Django models only support a single-column primary key. If you don't specify `primary_key = True` for the field in your model, Django will automatically create a column `id` as a primary key. The attribute `unique_together` in class `Meta` is only a constraint for your data.
How can I set two primary key fields for my models in Django?
[ "", "python", "django", "django-models", "model", "primary-key", "" ]
I whipped up a query here that does something particular with retrieving results that do not match the join (as suggested by this SO question). ``` SELECT cf.f_id FROM comments_following AS cf INNER JOIN comments AS c ON cf.c_id = c.id WHERE NOT EXISTS ( SELECT 1 FROM follows WHERE f_id = cf.f_id ) ``` Any ideas on how to speed this up? There are anywhere from 30k-200k rows it's looking through and appears to be using indexes, but the query times out. EXPLAIN/DESCRIBE Info: ``` 1 PRIMARY c ALL PRIMARY NULL NULL NULL 39119 1 PRIMARY cf ref c_id, c_id_2 c_id 8 ...c.id 11 Using where; Using index 2 DEPENDENT SUBQUERY following index NULL PRIMARY 8 NULL 35612 Using where; Using index ```
The `comments` table isn't used explicitly in the query. Is it being used for filtering? If not, try: ``` SELECT cf.f_id FROM comments_following cf WHERE NOT EXISTS ( SELECT 1 FROM follows WHERE follows.f_id = cf.f_id ) ``` By the way, if this generates a syntax error (because follows.f\_id does not exist), then that is the problem. In that case, you would think you have a correlated subquery, but there is not really one. Or the `left outer join` version: ``` SELECT cf.f_id FROM comments_following cf left outer join follows f on f.f_id = cf.f_id where f.f_id is null ``` Having an index on `follows(f_id)` should make both these versions run faster.
A LEFT JOIN is sometimes faster than a WHERE NOT EXISTS subquery; try: ``` SELECT cf.f_id FROM comments_following AS cf INNER JOIN comments AS c ON cf.c_id = c.id LEFT JOIN follows AS f ON f.f_id = cf.f_id WHERE f.f_id IS NULL ```
MySQL Query Times out - Need to speed it up
[ "", "mysql", "sql", "" ]
I am new to python and programming, so apologies in advance. I know of remove(), append(), len(), and rand.rang (or whatever it is), and I believe I would need those tools, but it's not clear to me *how* to code it. What I would like to do is, while looping or otherwise accessing List\_A, randomly select an index within List\_A, remove the selected\_index from List\_A, and then append() the selected\_index to List\_B. I would like to randomly remove only up to a *certain percentage* (or real number if this is impossible) of items from List A. Any ideas?? Is what I'm describing possible?
If you don't care about the order of the input list, I'd shuffle it, then remove `n` items from that list, adding those to the other list: ``` from random import shuffle def remove_percentage(list_a, percentage): shuffle(list_a) count = int(len(list_a) * percentage) if not count: return [] # edge case, no elements removed list_a[-count:], list_b = [], list_a[-count:] return list_b ``` where `percentage` is a float value between `0.0` and `1.0`. Demo: ``` >>> list_a = range(100) >>> list_b = remove_percentage(list_a, 0.25) >>> len(list_a), len(list_b) (75, 25) >>> list_b [1, 94, 13, 81, 23, 84, 41, 92, 74, 82, 42, 28, 75, 33, 35, 62, 2, 58, 90, 52, 96, 68, 72, 73, 47] ```
If you can find a random index `i` of some element in `listA`, then you can easily move it from A to B using: ``` listB.append(listA.pop(i)) ```
Is it possible to randomly *remove* a percentage/number of items from a list & then *append* them to another list?
[ "", "python", "python-2.7", "random", "" ]
I have a MySQL table with the columns below ``` chat_id sender receiver msg msg_time ``` I need a query that will execute every day to delete all messages except the latest 20 messages received by each receiver. Is there a single query for this, or do I need to write some code in PHP or another programming language?
Try this query ``` select * from (select @rn:=if(@prv=receiver, @rn+1, 1) as rId, @prv:=receiver as receiver, chat_id, sender, msg, msg_time from tbl join (select @rn:=0, @prv:='')tmp order by receiver, msg_time desc)tmp where rId > 20; ``` This select query will return all the records other than the latest 20 for each user; you can use it accordingly in your delete statement: ``` delete a from tbl a inner join (select @rn:=if(@prv=receiver, @rn+1, 1) as rId, @prv:=receiver as receiver, chat_id, sender, msg, msg_time from tbl join (select @rn:=0, @prv:='')tmp order by receiver, msg_time desc )tmp on a.chat_id=tmp.chat_id where tmp.rId >20 ```
You can actually do it using correlated subquery and without using any user variables. ``` DELETE a FROM TableName a LEFT JOIN ( SELECT *, ( SELECT COUNT(*) FROM tableName c WHERE c.receiver = a.receiver AND c.msg_time >= a.msg_time) AS RowNumber FROM TableName a ) b ON a.receiver = b.receiver AND a.msg_time = b.msg_time AND b.RowNumber <= 3 -- <<== change this to your desired value WHERE b.receiver IS NULL ``` * [SQLFiddle Demo](http://www.sqlfiddle.com/#!2/8eba7/1) the current query will remove all the records except for the 3 latest record for each `receiver` based on `msg_time`. Just change `3` to `20` to fit your needs.
MySQL: delete all entries except the last 20 for each individual
[ "", "mysql", "sql", "" ]
I'd like to retrieve the list of distinct years between two dates. For example, from `25/12/2006` to `14/11/2013`, the result should look like: ``` 2006 2007 2008 2009 2010 2011 2012 2013 ``` Is this possible in SQL Server ?
A date like `1/2/2013` is ambiguous: depending on the regional setting, it can be either Feb 1st, or Jan 2nd. So it's a good idea to use the `YYYY-MM-DD` date format when talking to the datebase. You can generate a list of numbers using a recursive CTE: ``` ; with CTE as ( select datepart(year, '2006-12-25') as yr union all select yr + 1 from CTE where yr < datepart(year, '2013-11-14') ) select yr from CTE ``` [Example at SQL Fiddle.](http://sqlfiddle.com/#!6/d41d8/4472/0)
Tested on SQL Server 2008 ``` declare @smaller_date date = convert(date, '25/12/2006', 103) declare @larger_date date = convert(date, '14/11/2013', 103) declare @diff int select @diff = DATEDIFF(YY, @smaller_date, @larger_date) ;with sequencer(runner) as( select 0 union all select sequencer.runner + 1 from sequencer where runner < @diff ) select YEAR(@smaller_date) + runner from sequencer ```
List of distinct years between two dates
[ "", "sql", "sql-server", "" ]
I have 3 tables: `A(k1,A)`, `B(k1,k2,B)` and `C(k2,C)`. I want to filter all A that satisfy a condition on C.k2. In this example, I must go through table B: filter all B that have the same k1 attribute as A, and then filter all C on the k2 attribute against those B. I have an ugly way to do this: `select * from A where k1 in (select * .....)` // it looks ugly and is hard to trace. I have thought about using a `join`, but don't really know how to do this. Please tell me the best way to write this query. Thanks :)
Try this Query. ``` select * from A join b on a.k1 = b.k1 join c on c.k2 = b.k2 ``` **[Explanation for JOIN](http://www.codinghorror.com/blog/2007/10/a-visual-explanation-of-sql-joins.html)** ![enter image description here](https://i.stack.imgur.com/T4yzp.jpg)
It sounds pretty easy: ``` select * from A join B on B.k1 = A.k1 join C on C.k2 = B.k2 ```
Nice way to query for cross table
[ "", "sql", "" ]
I have the following data in an Oracle database: ``` Name Place Color ------- --------- -------- John Paris Blue John Miami Blue Ryan Boston Red Ryan Boston Red Jim Miami Blue Ryan Oslo Red Jason Rome Green Jim Paris Blue Jason Rome Green Jim Paris Blue Ryan Boston Red ``` I need to dedup if two columns are the same (name/place) and then count by a third column. I am trying to group by twice with a nested select but I keep getting an ORA-00933 error. ``` select Color, count(Color) from ( select TO_CHAR(Name)||'-'||(Place) as unique_ident from mytable group by TO_CHAR(Name)||'-'||(Place) ) as inline group by Color ``` I would like to return something like ``` Blue 4 Red 2 Green 1 ``` Any help is appreciated.
I think you're looking for something like this: ``` SELECT Color, COUNT(DISTINCT name || '-' || place) ColorCnt FROM yourtable GROUP BY Color ``` * [SQL Fiddle Demo](http://sqlfiddle.com/#!4/334d6/1) Resulting In: ``` COLOR ColorCnt ----------------- Green 1 Blue 4 Red 2 ```
Doesn't this give you the results you want? ``` select TO_CHAR(Name)||'-'||(Place) , color, count(*) as color_count from mytable group by name, place, color ```
nested Group by
[ "", "sql", "oracle", "" ]
I have two tables, each of which produces its own result set as a single row. I would like to join these result sets into one single row. For example: ``` SELECT * FROM Table 1 WHERE Year = 2012 AND Quarter = 1 ``` Results: ``` Year Quarter Name State Mail 2012 1 Bob NY bob@gmail ``` Query #2: ``` SELECT * FROM Table 2 WHERE Year = 2012 AND Quarter = 1 ``` Results: ``` Year Quarter Name State Mail 2012 1 Greg DC greg@gmail ``` Desired result-set: ``` SELECT * FROM Table 3 WHERE Year = 2012 AND Quarter = 1 Year Quarter T1Name T1State T1Mail T2Name T2State T2Mail 2012 1 Bob NY bob@gmail Greg DC greg@gmail ``` The results are joined/pivoted onto the combination of Year and Quarter, which will be fed into the query via parameters. Any assistance would be greatly appreciated. Thanks in advance!
Unless I am missing something, it looks like you can just join the tables on the `year`/`quarter` there doesn't seem to be a need to pivot the data: ``` select t1.year, t1.quarter, t1.name t1Name, t1.state t1State, t1.mail t1Mail, t2.name t2Name, t2.state t2State, t2.mail t2Mail from table1 t1 inner join table2 t2 on t1.year = t2.year and t1.quarter = t2.quarter where t1.year = 2012 and t1.quarter = 1; ``` See [SQL Fiddle with Demo](http://sqlfiddle.com/#!3/47e8e/2) Now if there is a question on whether or not the `year` and `quarter` will exist in both tables, then you could use a `FULL OUTER JOIN`: ``` select coalesce(t1.year, t2.year) year, coalesce(t1.quarter, t2.quarter) quarter, t1.name t1Name, t1.state t1State, t1.mail t1Mail, t2.name t2Name, t2.state t2State, t2.mail t2Mail from table1 t1 full outer join table2 t2 on t1.year = t2.year and t1.quarter = t2.quarter where (t1.year = 2012 and t1.quarter = 2) or (t2.year = 2012 and t2.quarter = 2) ``` See [SQL Fiddle with Demo](http://sqlfiddle.com/#!3/d026b/2)
All you need to do is to determine the join criteria, i.e. the columns that have to have the matching values in both tables. In your case it looks to be Year and Quarter, so you would write something like ``` select * from Table_1 A Join Table_2 B ON A.year=B.year and A.quarter=B.quarter ```
T-SQL: Joining result-sets horizontally
[ "", "sql", "sql-server", "t-sql", "" ]
I am trying to step up my knowledge of MySQL and how a relationship can work between a few different tables in a database. I understand primary and foreign keys and how those work within tables. What I have is a test database I am playing with full of `people`, `hats` and the relationship between different hats and people. Here is what I have so far in my test database: ``` | Tables_in_test | +-----------------+ | hats | | hats_collection | | people | +-----------------+ ``` My `people` table looks like this: ``` +-----------+---------+ | person_id | fname | +-----------+---------+ | 1 | Sethen | | 2 | Michael | | 3 | Jazmine | +-----------+---------+ ``` My `hats` table looks like this: ``` +--------+----------+ | hat_id | hat | +--------+----------+ | 1 | Awesome | | 2 | Kick Ass | | 3 | Bowler | | 4 | Fedora | +--------+----------+ ``` And finally my `hats_collection` table looks like this: ``` +-----------+--------+ | person_id | hat_id | +-----------+--------+ | 2 | 1 | | 2 | 4 | | 3 | 2 | +-----------+--------+ ``` Essentially, I am storing all of the different hats that could exist in the hats table and holding their relationship to different people in the `hats_collection` table which just uses the `person_id` and `hat_id`. While I am sure how to set up all of these tables correctly and insert the data, I am unsure on how to write the query to display the data from the `hats_collection` table to display a users name and which hat they have. Here is what I have so far: ``` SELECT people.fname, hats.hat FROM people INNER JOIN hats INNER JOIN hats_collection ON hats_collection.person_id = people.person_id AND hats_collection.hat_id = hats.hat_id WHERE people.person_id = 1; ``` Which is returning an empty set. My questions are these: 1. How would I change my query to get the correct results of which people have which hats? 2. Is this the way I would set up the tables for something like this? Is there an easier/more efficient way? 
**EDIT** My apologies, the query I posted works. It was returning an empty set because the person with the id of 1 doesn't have any hats, but this has spawned a new question: 1. Is this the best way to write this query? My question about being the most efficient way to do this still stands.
The ON criteria should be on each join; otherwise you get what's called a cross join (all records in people against all records in hats), which isn't what you wanted. ``` SELECT people.fname, hats.hat FROM people INNER JOIN hats_collection ON hats_collection.person_id = people.person_id INNER JOIN hats ON hats_collection.hat_id = hats.hat_id WHERE people.person_id = 1; ``` The tables are designed following normal/common practice. Now if you want all people regardless of whether they have hats or not, use `LEFT JOIN` instead of the INNER joins above.
Person with id=1 does not have any hats so this is fine. If you want to display persons even when they don't have any hats, you have to use `LEFT JOIN` instead ``` SELECT people.fname, GROUP_CONCAT(hats.hat) AS hats FROM people LEFT JOIN hats_collection ON hats_collection.person_id = people.person_id LEFT JOIN hats ON hats_collection.hat_id = hats.hat_id GROUP BY people.person_id; ``` Notice how I reordered the joined tables logically and put each relevant join condition in its own join's `ON` part.
Relationship table in MySQL database
[ "", "mysql", "sql", "" ]
I've been searching around on the web and SO and can't find a straightforward answer to this issue - it seems like something really obvious but I'm having no luck. I have a SQL query which has the line: ``` WHERE Genre IN ('Rock', 'Popular', 'Classical') ``` However, this gives a long list of results - it acts as an OR - whereas what I want instead are results that match all 3. Thanks.
Here's an alternative approach, but I like Gordon's approach as well: ``` SELECT id FROM yourtable WHERE genre IN ('Rock', 'Popular', 'Classical') GROUP BY id HAVING COUNT(DISTINCT Genre) = 3 ``` This counts the distinct genres associated with an id and only returns the ones that match all 3.
This is a "set-within-sets" query. I advocate using aggregation with a `having` clause. Here is an approximation of such a query: ``` select id from t group by id having sum(case when genre = 'Rock' then 1 else 0 end) > 0 and -- has Rock sum(case when genre = 'Popular' then 1 else 0 end) > 0 and -- has Popular sum(case when genre = 'Classical' then 1 else 0 end) > 0 -- has Classical ``` The `having` clause may look a little complicated. But each clause is just counting the number of rows that match one of the values. If there are any, then that genre passes that test.
SQL Where IN but match all
[ "", "sql", "" ]
``` my_dict = {'a1':2,'a2':3,'b1':1,'b2':5,'b3':8} ``` I want to sum the products of labels 'a' and 'b' with the same numbers. ``` result = a1*b1 + a2*b2 ``` note: b3 is ignored, since it has no corresponding a3. my code: ``` result = sum (lambda x * y, if str(key)[0] == a and str(key)[0] ==b and str(key)[1] == str(key)[1], my_dict) ```
You have enough logic here that you should *probably* break out your filter into a separate function: ``` my_dict = {'a1':2,'a2':3,'b1':1,'b2':5,'b3':8} def filtered(d): for key in d: if not key.startswith('a'): continue b_key = "b" + key[1:] if b_key not in d: continue yield d[key], d[b_key] sum(a * b for a, b in filtered(my_dict)) ```
``` my_dict = {'a1':2,'a2':3,'b1':1,'b2':5,'b3':8} result = 0 for i in range(1, (len(my_dict) + 1) / 2): if not my_dict.has_key("a" + str(i)) \ or not my_dict.has_key("b" + str(i)): break result += my_dict["a" + str(i)] * my_dict["b" + str(i)] ```
lambda function to sum products of values from a dictionary
[ "", "python", "dictionary", "lambda", "" ]
I have an array where discreet sinewave values are recorded and stored. I want to find the max and min of the waveform. Since the sinewave data is recorded voltages using a DAQ, there will be some noise, so I want to do a weighted average. Assuming self.yArray contains my sinewave values, here is my code so far: ``` filterarray = [] filtersize = 2 length = len(self.yArray) for x in range (0, length-(filtersize+1)): for y in range (0,filtersize): summation = sum(self.yArray[x+y]) ave = summation/filtersize filterarray.append(ave) ``` My issue seems to be in the second for loop, where depending on my averaging window size (filtersize), I want to sum up the values in the window to take the average of them. I receive an error saying: ``` summation = sum(self.yArray[x+y]) TypeError: 'float' object is not iterable ``` I am an EE with very little experience in programming, so any help would be greatly appreciated!
`self.yArray[x+y]` is returning a single item out of the `self.yArray` list. If you are trying to get a subset of the `yArray`, you can use the slice operator instead: ``` summation = sum(self.yArray[x:x+filtersize]) ``` (which also replaces the inner loop) to return an iterable that the `sum` builtin can use. A bit more information about python slices can be found here (scroll down to the "Sequences" section): <http://docs.python.org/2/reference/datamodel.html#the-standard-type-hierarchy>
The other answers correctly describe your error, but this type of problem really calls out for using numpy. Numpy will [run faster, be more memory efficient, and is more expressive and convenient](https://stackoverflow.com/questions/993984/why-numpy-instead-of-python-lists) for this type of problem. Here's an example: ``` import numpy as np import matplotlib.pyplot as plt # make a sine wave with noise times = np.arange(0, 10*np.pi, .01) noise = .1*np.random.ranf(len(times)) wfm = np.sin(times) + noise # smoothing it with a running average in one line using a convolution # using a convolution, you could also easily smooth with other filters # like a Gaussian, etc. n_ave = 20 smoothed = np.convolve(wfm, np.ones(n_ave)/n_ave, mode='same') plt.plot(times, wfm, times, -.5+smoothed) plt.show() ``` ![enter image description here](https://i.stack.imgur.com/381Mv.png) If you don't want to use numpy, it should also be noted that there's a logical error in your program that results in the `TypeError`. The problem is that in the line ``` summation = sum(self.yArray[x+y]) ``` you're using `sum` within the loop where you're also calculating the sum. So either you need to use `sum` without the loop, or loop through the array and add up all the elements, but not both (and it's doing both, *i.e.*, applying `sum` to the indexed array element, that leads to the error in the first place). That is, here are two solutions: ``` filterarray = [] filtersize = 2 length = len(self.yArray) for x in range (0, length-(filtersize+1)): summation = sum(self.yArray[x:x+filtersize]) # sum over section of array ave = summation/filtersize filterarray.append(ave) ``` or ``` filterarray = [] filtersize = 2 length = len(self.yArray) for x in range (0, length-(filtersize+1)): summation = 0. for y in range (0,filtersize): summation += self.yArray[x+y] ave = summation/filtersize filterarray.append(ave) ```
Moving average of an array in Python
[ "", "python", "python-2.7", "moving-average", "" ]
In SQL Server 2008 R2 using this table: ``` CREATE TABLE mytest( id [bigint] IDENTITY(1,1) NOT NULL, code [bigint] NULL, sequence_number [int] NULL ); ``` This query: ``` SELECT id, code, sequence_number FROM mytable ``` returns this data: ``` id code sequence_number 1 381 0 2 381 1 3 382 0 4 382 1 5 383 0 6 383 1 7 383 1 8 384 0 9 384 1 10 385 0 11 385 1 12 385 2 13 386 0 14 386 1 15 386 1 16 386 2 17 387 0 18 387 1 19 387 1 20 387 2 21 387 3 22 387 3 23 388 0 24 388 1 25 388 1 26 388 2 27 388 2 28 389 0 29 389 1 ``` How do I `SELECT` just these rows: ``` 7 383 1 15 386 1 19 387 1 22 387 3 25 388 1 27 388 2 ``` These are the `MAX(id)` rows where there is more than one record with the same sequence\_number. I want all the highest ids for each unique combination of code and sequence\_number. So, in effect I want to select just the "duplicate" sequence\_number records. How do I do this?
Use a self-join that pairs each row with a different row having the same code and sequence\_number; only duplicated combinations survive the join: ``` SELECT MAX(t1.Id) as ID, t1.code, t1.sequence_number FROM mytest t1 INNER JOIN mytest t2 ON t1.id <> t2.id AND t1.code = t2.code AND t1.sequence_number = t2.sequence_number GROUP BY t1.code, t1.sequence_number ORDER BY ID ```
Try this :- ``` Select ID,code,sequence_number from ( Select ID, code, [rn] = row_number() over (partition by code,sequence_number order by id), sequence_number from yourTable )s where rn > 1 ``` Demo in [SQL FIDDLE](http://sqlfiddle.com/#!3/4793c/8)
Selecting rows with repeating values in one column
[ "", "sql", "sql-server", "" ]
If I have multiple WHEN MATCHED statements in a MERGE statement, do they all execute if they're true? My example: ``` DECLARE @X bit = NULL; --skipping the MERGE statement, straight to WHEN MATCHED WHEN MATCHED AND A = 1 @X = 0; WHEN MATCHED AND B = 1 @X = 1; ``` What is the state of X in each of the 4 possibilities? ``` A|B|X 0|0|? 0|1|? 1|0|? 1|1|? ``` Basically, I'm curious if there's an implicit BREAK after each WHEN MATCHED clause.
To answer your question, yes, it will only run a single match and then break. However, if you'd like to have logic to allow for conditional matching in the update, the `CASE` statement is rather useful for this. Something like this as an example: ``` MERGE INTO YourTable USING (VALUES (1, 1, NULL), (0, 0, NULL), (0, 1, NULL), (1, 0, NULL)) T2 (a2,b2,c2) ON a = a2 AND b = b2 WHEN MATCHED THEN UPDATE SET c = CASE WHEN a = 1 THEN 0 WHEN b = 1 THEN 1 ELSE NULL END WHEN NOT MATCHED THEN INSERT (a, b) VALUES (a2, b2); SELECT * FROM YourTable ORDER BY a,b; ``` * [SQL Fiddle Demo](http://sqlfiddle.com/#!3/1f31e/1) --- And the results: ``` A B C -------------- 0 0 (null) 0 1 1 1 0 0 1 1 0 ```
I found this in the [MSDN documentation](http://msdn.microsoft.com/en-us/library/bb510625.aspx): > WHEN MATCHED THEN > > Specifies that all rows of target\_table that match the rows returned by `<table_source>` ON `<merge_search_condition>`, and satisfy any additional search condition, are either updated or deleted according to the `<merge_matched>` clause. > > The MERGE statement can have at most two WHEN MATCHED clauses. If two clauses are specified, then the first clause must be accompanied by an AND `<clause_search_condition>` clause. For any given row, the second WHEN MATCHED clause is only applied if the first is not. If there are two WHEN MATCHED clauses, then one must specify an UPDATE action and one must specify a DELETE action. If UPDATE is specified in the `<merge_matched>` clause, and more than one row of `<table_source>` matches a row in target\_table based on `<merge_search_condition>`, SQL Server returns an error. The MERGE statement cannot update the same row more than once, or update and delete the same row. So it looks like only one of the clauses is executed, and they require a DELETE in one and an UPDATE in the other.
When using multiple WHEN MATCHED statements, do they all execute, or does only one get executed?
[ "", "sql", "sql-server", "database", "t-sql", "merge", "" ]
I have a general question about using WHERE clauses in SQLite. I have worked with SQLite some and know my way around. However, I'm having trouble with WHERE clauses. I have an Android application in which I need to do a few simple operations with the SQLite WHERE clause. However, despite experimenting with the WHERE clause all day, I've come up empty. I am trying to perform very simple SQLite commands (e.g. `SELECT name FROM tablename WHERE _id=2` or `DELETE FROM tablename WHERE name='somename'`). But, every time I use the where clause I either get 0 rows returned (in the case of a SELECT) or have 0 rows deleted. The table name is correct. The column names are correct. (I haven't had any trouble selecting or inserting as long as I don't specify a WHERE clause.) I made sure that the queries/statements were well formed. I've tried both raw queries/statements as well as the methods (I made sure to use the correct methods for SELECT and DELETE {e.g. `rawQuery()` or `query()` for SELECT}) provided by the `SQLiteDatabase` class in Android, but **nothing** has worked. All I need is to be able to perform simple queries/statements using the WHERE clause. Does anyone have any insight as to what's happening?
**Here's the code to create the table that I'm using:** ``` public static final String TABLE_WORKOUTS = "workouts"; public static final String W_ID = "_id"; public static final String W_WORKOUT_NAME = "workout_name"; public static final String W_EXERICSE_NAME = "exercise_name"; public static final String W_EXERCISE_SETS = "exercise_sets"; public static final String W_EXERCISE_FIRST_ATTRIBUTE = "first_attribute"; public static final String W_EXERCISE_SECOND_ATTRIBUTE = "second_attribute"; public static final String W_EXERCISE_TYPE = "exercise_type"; private static final String createTableWorkouts = "CREATE TABLE " + TABLE_WORKOUTS + " (" + W_ID + " INTEGER PRIMARY KEY AUTOINCREMENT, " + W_WORKOUT_NAME + " TEXT, " + W_EXERICSE_NAME + " TEXT, " + W_EXERCISE_SETS + " TEXT, " + W_EXERCISE_FIRST_ATTRIBUTE + " TEXT, " + W_EXERCISE_SECOND_ATTRIBUTE + " TEXT, " + W_EXERCISE_TYPE + " TEXT);"; ``` **Example query:** ``` String workoutName = "Some Workout"; Cursor cursor = datasource.executeRawQuery("SELECT * FROM " + WorkoutDatabase.TABLE_WORKOUTS + " WHERE " + WorkoutDatabase.W_EXERICSE_NAME + "='" + workoutName + "'", null); ``` **Example record (\_id, workout\_name, exercise\_name, sets, first\_attribute, second\_attribute, exercise\_type):** 23, Upper Body, Bench Press, 5, 160, 5, Weight
The value in the `exercise_name` column is `Bench Press`, but you are trying to match it with `Some Workout`.
This is a working example which retrieves a list of values from an SQLite database using a WHERE condition. It uses a cursor to read the values and add them to a POJO class. I think this code will help you a lot. ``` public List<RationNamesPOJO> fetchRationMembers(String rationNumber) { ArrayList<RationNamesPOJO> RationDet = new ArrayList<RationNamesPOJO>(); SQLiteDatabase db = this.getWritableDatabase(); Cursor cur = db.rawQuery("SELECT * FROM " + TABLE_NAME + " where " + PEOPLE_RATION_NUMBER + "='" + rationNumber + "'", null); if (cur.moveToFirst()) { do { RationNamesPOJO rationPojo = new RationNamesPOJO(); rationPojo.setPeople_rationNumber(cur.getString(cur.getColumnIndex(PEOPLE_RATION_NUMBER))); rationPojo.setPeople_aadhaarNumber(cur.getString(cur.getColumnIndex(PEOPLE_AADHAAR_NUMBER))); rationPojo.setPeopleName(cur.getString(cur.getColumnIndex(PEOPLE_NAME))); rationPojo.setPeopleBiometric(cur.getString(cur.getColumnIndex(PEOPLE_FINGER_IMAGE))); RationDet.add(rationPojo); } while (cur.moveToNext()); } return RationDet; } ```
Using the WHERE clause in SQLite
[ "", "android", "sql", "sqlite", "android-sqlite", "" ]
I have a table which captures user log-on and log-off times (the application they log on to is VB which communicates with a MySQL server). The table looks like the example: ``` idLoginLog | username | Time | Type | -------------------------------------------------------- 1 | pauljones | 2013-01-01 01:00:00 | 1 | 2 | mattblack | 2013-01-01 01:00:32 | 1 | 3 | jackblack | 2013-01-01 01:01:07 | 1 | 4 | mattblack | 2013-01-01 01:02:03 | 0 | 5 | pauljones | 2013-01-01 01:04:27 | 0 | 6 | sallycarr | 2013-01-01 01:06:49 | 1 | ``` So each time a user logs in it adds a new row to the table with their username and the time stamp. The Type is "1" for logging in. When they log out the same happens, only the Type is "0". There are slight issues whereby users will not ever appear to have logged out if they force quit the application, as this obviously bypasses the procedure that submits the logging-out query (type "0"). But please ignore that and assume I figure out a way out of that issue. I want to know what query (that I will run perhaps once weekly) I can use to calculate the most users that were ever logged in at any one time. Is this even possible? It seems like an immense mathematical/SQL challenge to me! The table currently has about 30k rows. --- Wow! Thank you all! I have adapted mifeet's answer to the shortest code that gets what I need done. Cannot believe I can get it done with just this code; I thought I'd have to brute force or redesign my db! ``` set @mx := 0; select time,(@mx := @mx + IF(type,1,-1)) as mu from log order by mu desc limit 1; ```
You can use MySQL variables to calculate the running sum of currently logged-in visitors and then get the maximum: ``` SET @logged := 0; SET @max := 0; SELECT idLoginLog, type, time, (@logged := @logged + IF(type, 1, -1)) as logged_users, (@max := GREATEST(@max, @logged)) FROM logs ORDER BY time; SELECT @max AS max_users_ever; ``` ([SQL Fiddle](http://sqlfiddle.com/#!2/9a114/3)) --- **Edit:** I also have a suggestion for how to deal with users not explicitly logged out. Say you consider a user automatically logged out after 30 minutes: ``` SET @logged := 0; SET @max := 0; SELECT -- Same as before idLoginLog, type, time, (@logged := @logged + IF(type, 1, -1)) AS logged_users, (@max := GREATEST(@max, @logged)) AS max_users FROM ( -- Select from union of logs and records added for users not explicitly logged out SELECT * from logs UNION SELECT 0 AS idLoginLog, l1.username, ADDTIME(l1.time, '0:30:0') AS time, 0 AS type FROM -- Join condition matches log-out records in l2 matching a log-in record in l1 logs AS l1 LEFT JOIN logs AS l2 ON (l1.username=l2.username AND l2.type=0 AND l2.time BETWEEN l1.time AND ADDTIME(l1.time, '0:30:0')) WHERE l1.type=1 AND l2.idLoginLog IS NULL -- This leaves only records which do not have a matching log-out record ) AS extended_logs ORDER BY time; SELECT @max AS max_users_ever; ``` ([Fiddle](http://sqlfiddle.com/#!2/9a114/34))
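The running-sum idea is easy to sanity-check outside the database before trusting the SQL. A small plain-Python sketch of the same logic; the event data here is invented for illustration:

```python
def max_concurrent(events):
    """events: (time, type) pairs, where type is 1 for login, 0 for logout."""
    logged = peak = 0
    for _, kind in sorted(events):  # order by time, as the SQL ORDER BY does
        logged += 1 if kind else -1
        peak = max(peak, logged)
    return peak

events = [("01:00:00", 1), ("01:00:32", 1), ("01:01:07", 1),
          ("01:02:03", 0), ("01:04:27", 0), ("01:06:49", 1)]
print(max_concurrent(events))  # 3
```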
If Type were +1 for login, and -1 for log out, and you add log out entries, you could do something like this: ``` CREATE TABLE usage SELECT a.Time AS Time, SUM(b.Type) AS Users FROM logons AS a, logons AS b WHERE b.Time < a.Time; ``` In some SQLs you have to break this up into multiple statements. I believe MySQL has `INSERT SELECT` not so sure about `CREATE... SELECT`. This table gives you the usage at any time. To get the max, [Bill suggested](https://stackoverflow.com/questions/16865771/calculate-most-users-ever-online-with-mysql/16865922#comment24328512_16865922): ``` SELECT * from Usage [WHERE if you want a time range] ORDER BY Users DESC LIMIT 1; ``` Note that MAXIMA are NOT UNIQUE, which raises various issues. There might be more than one time when you had the maximum number of users.... you can explore this by changing `LIMIT 1` to `LIMIT 5` or `LIMIT 10` and looking at the table.
calculate most users ever online with MySQL
[ "", "mysql", "sql", "" ]
From what I have read, there are two ways to debug code in Python: * With a traditional debugger such as `pdb` or `ipdb`. This supports commands such as `c` for `continue`, `n` for `step-over`, `s` for `step-into` etc.), but you don't have direct access to an IPython shell which can be extremely useful for object inspection. * Using **IPython** by [embedding](http://ipython.org/ipython-doc/dev/interactive/reference.html#embedding-ipython) an IPython shell in your code. You can do `from IPython import embed`, and then use `embed()` in your code. When your program/script hits an `embed()` statement, you are dropped into an IPython shell. This allows the full inspection of objects and testing of Python code using all the IPython goodies. However, when using `embed()` you can't **step-by-step** through the code anymore with handy keyboard shortcuts. Is there any way to combine the best of both worlds? I.e. 1. Be able to **step-by-step** through your code with handy pdb/ipdb keyboard shortcuts. 2. At any such step (e.g. on a given statement), have access to a full-fledged **IPython shell**. ### IPython debugging *as in* MATLAB: An example of this type of "enhanced debugging" can be found in MATLAB, where the user **always** has full access to the MATLAB engine/shell, and she can still **step-by-step** through her code, define conditional breakpoints, etc. From what I have discussed with other users, this is the debugging feature that people miss the most when moving from MATLAB to IPython. ### IPython debugging in Emacs and other editors: I don't want to make the question too specific, but I work mostly in Emacs, so I wonder if there is any way to bring this functionality into it. **Ideally**, Emacs (or the editor) would allow the programmer to set breakpoints anywhere on the code and communicate with the interpreter or debugger to have it stop in the location of your choice, and bring to a full IPython interpreter on that location.
You can use IPython's [`%pdb` magic](https://ipython.readthedocs.io/en/stable/interactive/magics.html#magic-pdb). Just call `%pdb` in IPython and when an error occurs, you're automatically dropped to `ipdb`. While you don't have the stepping immediately, you're in `ipdb` afterwards. This makes debugging individual functions easy, as you can just load a file with `%load` and then run a function. You could force an error with an `assert` at the right position. `%pdb` is a line magic. Call it as `%pdb on`, `%pdb 1`, `%pdb off` or `%pdb 0`. If called without argument it works as a toggle.
What about ipdb.set\_trace() ? In your code : `import ipdb; ipdb.set_trace()` **update**: now in Python 3.7, we can write `breakpoint()`. It works the same, but it also obeys to the `PYTHONBREAKPOINT` environment variable. This feature comes from [this PEP](https://www.python.org/dev/peps/pep-0553/). This allows for full inspection of your code, and you have access to commands such as `c` (continue), `n` (execute next line), `s` (step into the method at point) and so on. See the [ipdb repo](https://github.com/gotcha/ipdb) and [a list of commands](https://wangchuan.github.io/coding/2017/07/12/ipdb-cheat-sheet.html). [IPython](https://ipython.org/) is now called (edit: part of) [Jupyter](https://jupyter.org/). --- ps: note that an ipdb command takes precedence over python code. So in order to write `list(foo)` you'd need `print(list(foo))`, or `!list(foo)` . Also, if you like the ipython prompt (its emacs and vim modes, history, completions,…) it's easy to get the same for your project since it's based on the [python prompt toolkit](https://github.com/jonathanslenders/python-prompt-toolkit/).
Step-by-step debugging with IPython
[ "", "python", "debugging", "emacs", "ipython", "pdb", "" ]
`Android` saves its settings in a database file, which is `/data/data/com.android.providers.settings/databases/settings.db`. Android uses `sqlite3` as the database engine. We can use `adb` to manage the database file. I want to know if there is a way to run all these commands from a `perl/python` script to automate the whole query process. ``` $ adb shell $ sqlite3 /data/data/com.android.providers.settings/databases/settings.db ``` The above commands open the settings database and drop you into the sqlite3 command line. First, check which tables exist in the database. Here is the result: ``` sqlite> .tables android_metadata bookmarks gservices bluetooth_devices favorites system ``` The settings I want to get (such as volume\_alarm) are in the "system" table; a .dump command lists all items in the table. ``` sqlite> .dump system BEGIN TRANSACTION; CREATE TABLE system (_id INTEGER PRIMARY KEY AUTOINCREMENT,name TEXT UNIQUE ON CONFLICT REPLACE,value TEXT); INSERT INTO "system" VALUES(3,'volume_system','5'); INSERT INTO "system" VALUES(4,'volume_voice','4'); INSERT INTO "system" VALUES(5,'volume_alarm','6'); ..... sqlite> select value from system where name ='volume_alarm'; 6 sqlite> .quit ```
To query a specific value: ``` adb shell sqlite3 /data/data/com.android.providers.settings/databases/settings.db "select value from 'system' where name = 'volume_alarm';" ``` Or to pull all records from a table: ``` adb shell sqlite3 /data/data/com.android.providers.settings/databases/settings.db "select name, value from 'system';" ```
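To drive this from Python, as the question asks, one option is `subprocess`. This is only a sketch, untested against a real device; it assumes `adb` is on your PATH, and quoting may need extra care when the SQL passes through `adb shell`:

```python
import subprocess

SETTINGS_DB = "/data/data/com.android.providers.settings/databases/settings.db"

def build_query_cmd(sql, db=SETTINGS_DB):
    """Build the argv list for querying an on-device sqlite3 database."""
    return ["adb", "shell", "sqlite3", db, sql]

def query(sql):
    # Requires adb on PATH and an attached (rooted) device.
    return subprocess.check_output(build_query_cmd(sql)).strip()

print(" ".join(build_query_cmd("select value from system where name = 'volume_alarm';")))
```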
This one works: ``` adb shell sqlite3 /data/data/com.android.providers.settings/databases/settings.db "\"select name, value from 'system';\"" ```
How to use a script to query android sqlite database
[ "", "android", "python", "perl", "sqlite", "adb", "" ]
I can run: ``` import chef chef.autoconfigure() for node in chef.Node.list(): if "auto" in node.lower(): print "deleting node " + node nodeObj = chef.Node(node) nodeObj.delete() ``` in a console directly, but when I try to run it as a script: `python2.7 test.py` I get the following error: ``` Traceback (most recent call last): File "test.py", line 38, in <module> for node in chef.Node.list(): File "/usr/local/lib/python2.7/site-packages/chef/base.py", line 86, in list names = [name for name, url in api[cls.url].iteritems()] TypeError: 'NoneType' object has no attribute '__getitem__' ``` I used the console to verify that ``` >>> chef.__path__ ['/usr/local/lib/python2.7/site-packages/chef'] ``` So, the machine is the same, the version of python is the same, and the module is the same. Why could this possibly happen?
I figured out that when run as a script, pyChef does not correctly identify the knife.rb file during the autoconfigure step. This is what got it working instead: ``` with chef.ChefAPI('http://example.com:4000', '/root/.chef/client.pem', 'client'): for node in chef.Node.list(): if "auto" in node.lower(): print "deleting node " + node nodeObj = chef.Node(node) nodeObj.delete() ``` Note that I do not know why it was unable to use the knife.rb file correctly in one case and not the other (I verified that the same cwd was being used in both cases; I even tried pointing autoconfigure('/folder/of/knife.rb') at the file, with no luck).
While I don't know why that ChefAPI object does not persist in a script, I found that I must pass my ChefAPI object to my Search object, [as seen as a keyword argument in the signature here](http://pychef.readthedocs.org/en/latest/api.html#nodes). And like in your case, this wasn't necessary when testing my code in the console. In my case, I generate the ChefAPI object with `from_config_file()`, and pass it to my Search object like this: ``` import chef chefapiobject = chef.ChefAPI.from_config_file('knife.rb') nodes = chef.Search('node', 'roles:worker', api=chefapiobject) ``` In the console, this works without passing `api=chefapiobject`
python chef works in console not in script
[ "", "python", "chef-infra", "" ]
I am asked to implement a recursive function that takes a nonnegative integer n as input and returns turtle instructions encoded with the letters L, R and F, where L means rotate left 45 degrees, R means rotate right 45 degrees and F means go forward. Additional information I have: for every nonnegative integer n>0, the Levy curve `L(n)` can be defined in terms of the Levy curve `L(n-1)`; the Levy curve `L(0)` is just a straight line. ``` usage: >>> lev(0) 'F' >>> lev(1) 'LFRRFL' ``` I am very new to this and I am not sure how to start; so far I only have: ``` from turtle import Screen, Turtle def lev(n): # base case if n ==0: return 'F' # recursive case else: return lev(n-1) ``` I need some good pointers here, please.
Since [Levy C](https://en.wikipedia.org/wiki/L%C3%A9vy_C_curve)'s [L system](http://en.wikipedia.org/wiki/Lindenmayer_system) only has a single rule, it's simple to build the resulting string using a single [replace](http://docs.python.org/3/library/stdtypes.html?highlight=format#str.replace) method. ``` def lev(n): if n == 0: return "F" else: symbols = lev(n-1) return symbols.replace("F", "LFRRFL") for i in range(4): print lev(i) ``` Result: ``` F LFRRFL LLFRRFLRRLFRRFLL LLLFRRFLRRLFRRFLLRRLLFRRFLRRLFRRFLLL ``` You can visualize this replacement by imagining each straight line in the figure being replaced by two smaller lines connected at a ninety degree angle. Like so: ![enter image description here](https://i.stack.imgur.com/3NeSE.gif)
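A quick property check on this construction: each replacement turns every F into a string containing two Fs, so lev(n) should contain exactly 2\*\*n forward moves. Here is an iterative restatement of the same replace-based function with that check (the loop form is equivalent to the recursion above):

```python
def lev(n):
    symbols = "F"
    for _ in range(n):
        symbols = symbols.replace("F", "LFRRFL")
    return symbols

for n in range(5):
    assert lev(n).count("F") == 2 ** n

print(lev(2))  # LLFRRFLRRLFRRFLL
```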
First, in case this is the problem: A large Levy curve (recursive case) is constructed by arranging two smaller ones facing each other 'across the room', with two more 'on the floor' facing up, in between. A small Levy curve (base case) is just a straight line. So indeed, the base case is: ``` def lev(n): if n == 0: return 'F' else: # Recursive case here ``` But for the recursive case, you just have it call lev(n-1). You are right that you will need to do this, but you will need to do it four times, and rotate in between. This will create the desired 'two smaller curves facing each other, with two in between'. Inspecting the curve carefully (here: <https://en.wikipedia.org/wiki/File:Levy_C_construction.png>), we see that we will need to draw one curve, then turn right, then draw another, then turn completely around, then draw a third curve, and finally, turn right and draw the final curve. This can be done fairly simply: ``` def lev(n): if n == 0: # Base case return 'F' else: # Recursive case # Calculate the smaller curve smaller = lev(n-1) # Add in the turning in between the smaller curves final = smaller # First curve if n%2 == 0: # Even depths require right turns final += 'RR' # Rotate 90 degrees final += smaller # Second curve final += 'RRRR' # Rotate 180 degrees final += smaller # Third curve final += 'RR' # Rotate 90 degrees final += smaller # Final curve else: # Odd depths require left turns final += 'LL' # Rotate 90 degrees final += smaller # Second curve # (No full rotation in odd depths) final += smaller # Third curve final += 'LL' # Rotate 90 degrees final += smaller # Final curve return final # Done! ```
Trying to understand the Levy Curve (fractal)
[ "", "python", "fractals", "" ]
I have a `product` table and a `product_attributes` table. I want to filter products by a set of required attributes; here is my SQL: ``` SELECT * FROM product p INNER JOIN product_attributes p2 ON p.id = p2.product_id WHERE p2.attribute_id IN (637, 638, 629) ``` But it gives me all products that have even one of the attributes (637, for example), while I need only the products that have all of the given attributes (637, 638, 629).
There's a fairly standard approach: ``` select * from product where id in ( SELECT id FROM product p JOIN product_attributes p2 ON p.id = p2.product_id AND p2.attribute_id IN (637, 638, 629) GROUP BY id HAVING COUNT(distinct attribute_id) = 3) ``` The HAVING clause ensures there were 3 different attribute ids (i.e. they were all found). This can be expressed as a straight join (rather than the ID IN(...)), but this form is simpler to read and should perform OK as written. Of slight interest is the move of the attribute id condition into the JOIN's ON condition.
This is an example of a "set-within-sets" subquery. I like to solve these with aggregation and the `having` clause, because this is the most flexible solution: ``` SELECT p.* FROM product p join product_attributes pa on p.id = pa.product_id group by p.id having sum(pa.attribute_id = 637) > 0 and sum(pa.attribute_id = 638) > 0 and sum(pa.attribute_id = 629) > 0 ``` An alternative `having` clause is: ``` having count(distinct case when pa.attribute_id IN (637, 638, 629) then pa.attribute_id end) = 3 ```
Join with where in condition difficulties
[ "", "mysql", "sql", "join", "filter", "" ]
For the life of me I can't figure out why my IF statement is not getting hit. There are plenty of cases where the remainder of n divided by the last number put in the results list is 0. ``` n = 100 numbers = range(2, n) results = [] results.append(numbers.pop(0)) print numbers for n in numbers: if n % results[-1] == 0: print "If statement", numbers numbers.remove(n) else: print "Else statement", numbers numbers.remove(n) ```
Problem is you're modifying the list while iterating over it, so all the even numbers are getting skipped. Hence the `if` condition is always `False`. The `for` loop keeps track of index, so when you remove an item at index `i`, the next item at `i+1`th position shifts to the current index(`i`) and hence in the next iteration you'll actually pick the `i+2`th item. ``` for n in numbers[:]: #iterate over a shallow copy of list if n % results[-1] == 0: print "If statement", numbers numbers.remove(n) else: print "Else statement", numbers numbers.remove(n) ``` Example: ``` >>> lis = range(3,15) >>> for x in lis: ... print x ... lis.remove(x) ... 3 5 7 9 11 13 ```
Don't loop over a list you are removing items from. The `for` loop creates a list iterator that keeps track of the current item by incrementing a counter. But a shrinking list means that the counter will, from loop iteration to loop iteration, point at the wrong item: ``` >>> lst = range(5) >>> for i in lst: ... lst.remove(i) ... print i ... 0 2 4 >>> lst [1, 3] ``` What happens is that as you remove `0` from the list `[0, 1, 2, 3, 4]`, the counter increments to item `1`, which in the now-altered list `[1, 2, 3, 4]` points to the value `2`. Removing `2` from the list, the iterator count increments to `2` and in the altered list `[1, 3, 4]` that means the next value in the loop is `4`, after which the iterator counter has counted beyond the end and the loop terminates. If you are going to remove *all* items from the list, use a while loop: ``` while numbers: n = numbers.pop() # do something with `n` ``` If you are removing *some* items, another option would be to create a shallow copy: ``` for n in numbers[:]: # .... ``` Now you can alter `numbers` to your hearts content, as the `for` loop is iterating over a copy instead.
if statement in loop not working
[ "", "python", "" ]
I have one table like: ``` CREATE TABLE table_name ( P_Id int, amount varchar(50) ) ``` with data like: ``` Id amount ---------- 1 2340 2 4568 3 10000 ``` Now I want to sort the table by amount, but the problem is that amount is a varchar, so it sorts like this: ``` Id amount ---------- 3 10000 1 2340 2 4568 ``` but I want the result like this: ``` Id amount ---------- 3 10000 2 4568 1 2340 ``` What should I do?
Cast the amount column to Numeric in the `ORDER BY` clause while selecting: ``` SELECT * FROM MyTable ORDER BY CAST(amount AS Numeric(10,0)) DESC ``` You can change the decimal precision according to your requirements to get a more precise result: ``` SELECT * FROM MyTable ORDER BY CAST(amount AS Numeric(10,2)) DESC ^ ``` Result: | Id | amount | | --- | --- | | 3 | 10000 | | 2 | 4568 | | 1 | 2340 | See [this dbfiddle](https://dbfiddle.uk/?rdbms=sqlserver_2017&fiddle=01f9f8fd40c834a50baed70edd10dfd9) **Note**: As @Ice suggested, this will fail if the amount field contains non-numeric data like `ab123` or `xyz` (obviously).
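The same trick is easy to sanity-check in SQLite from a scratch Python script. SQLite spells the target type INTEGER rather than Numeric(10,0), but the idea is identical; the table and rows mirror the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table_name (P_Id int, amount varchar(50))")
conn.executemany("INSERT INTO table_name VALUES (?, ?)",
                 [(1, "2340"), (2, "4568"), (3, "10000")])
rows = conn.execute(
    "SELECT P_Id, amount FROM table_name "
    "ORDER BY CAST(amount AS INTEGER) DESC").fetchall()
print(rows)  # [(3, '10000'), (2, '4568'), (1, '2340')]
```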
Try ABS(): ``` SELECT * FROM MyTable ORDER BY ABS(MyCol) DESC; ``` **[SQL Fiddle](http://sqlfiddle.com/#!3/ae482/3)**
Sql Server query varchar data sort like int
[ "", "sql", "sql-server", "sql-order-by", "" ]
I have the following piece of code where I take in an integer n from stdin, convert it to binary, reverse the binary string, then convert back to integer and output it. ``` import sys def reversebinary(): n = str(raw_input()) bin_n = bin(n)[2:] revbin = "".join(list(reversed(bin_n))) return int(str(revbin),2) reversebinary() ``` However, I'm getting this error: ``` Traceback (most recent call last): File "reversebinary.py", line 18, in <module> reversebinary() File "reversebinary.py", line 14, in reversebinary bin_n = bin(n)[2:] TypeError: 'str' object cannot be interpreted as an index ``` I'm unsure what the problem is.
You are passing a string to the `bin()` function: ``` >>> bin('10') Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: 'str' object cannot be interpreted as an index ``` Give it a integer instead: ``` >>> bin(10) '0b1010' ``` by turning the `raw_input()` result to `int()`: ``` n = int(raw_input()) ``` Tip: you can easily reverse a string by giving it a negative slice stride: ``` >>> 'forward'[::-1] 'drawrof' ``` so you can simplify your function to: ``` def reversebinary(): n = int(raw_input()) bin_n = bin(n)[2:] revbin = bin_n[::-1] return int(revbin, 2) ``` or even: ``` def reversebinary(): n = int(raw_input()) return int(bin(n)[:1:-1], 2) ```
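If you want to exercise the logic without typing into stdin each run, it helps to separate the computation from the I/O. A small sketch with a hypothetical helper name, using the same slicing as above:

```python
def reverse_binary(n):
    """Reverse the binary digits of a non-negative integer."""
    return int(bin(n)[:1:-1], 2)

print(reverse_binary(6))   # 0b110 reversed is 0b011, i.e. 3
print(reverse_binary(13))  # 0b1101 reversed is 0b1011, i.e. 11
```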
You want to convert the input to an integer not a string - it's already a string. So this line: ``` n = str(raw_input()) ``` should be something like this: ``` n = int(raw_input()) ```
Reading in integer from stdin in Python
[ "", "python", "python-2.7", "stdin", "" ]
I have the following sed script: ``` cat foo.txt | sed -e "s/.*\[\([^]]*\)\].*/\1/g" -e "s/ //g" -e "s/'//g" ``` Which can be translated into three expressions: 1. Captures all text between `[...]` 2. Removes white-spaces 3. Removes all single quotes What's a neat way to do something similar with a text file in python?
You could do it all with regular expressions (`re.sub()`) but this does it mostly with plain Python, just using regular expressions for the initial capture. ``` import re s = "some string ['foo'] [b a r] [baz] [] extra stuff" pat0 = re.compile(r'\[([^]]*)\]') lst0 = pat0.findall(s) lst1 = [s.replace(' ', '') for s in lst0] lst2 = [s.replace("'", '') for s in lst1] print(lst2) # prints: ['foo', 'bar', 'baz', ''] ```
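The three sed expressions can also be chained line-by-line in one small function: one `re.sub` for the bracket capture, plus two plain `replace` calls for the spaces and quotes. A sketch assuming the same patterns as the sed script:

```python
import re

def clean_line(line):
    line = re.sub(r".*\[([^]]*)\].*", r"\1", line)  # keep text between [...]
    return line.replace(" ", "").replace("'", "")   # drop spaces and quotes

print(clean_line("junk ['f o o'] junk"))  # foo
```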
``` import re with open('foo.txt', 'r') as f: read_data = f.readlines() out_data = [] for line in read_data: out_line = re.sub(r".*\[([^]]*)\].*", r"\1", line) out_line = re.sub(r" ", r"", out_line) out_line = re.sub(r"'", r"", out_line) out_data.append(out_line) # do whatever you want with out_data here ```
Convert sed with multiple expressions to python
[ "", "python", "regex", "replace", "sed", "" ]
I have a stored procedure which returns category-wise data, grouped by a Group column. The problem is that I need to sum the category-wise data for each group and insert that total row at the beginning of the group. For example: ``` Group Category X1 X2 X3 X4 ======================================================== A A1 1 1 1 1 ---------------------------------------------------- A A2 1 1 1 1 --------------------------------------------------- B B1 1 1 1 1 ---------------------------------------------------- B B2 1 1 1 1 ---------------------------------------------------- ``` should be displayed as: ``` Group Category X1 X2 X3 X4 ======================================================= A A 2 2 2 2 ---------------------------------------------------- A A1 1 1 1 1 ---------------------------------------------------- A A2 1 1 1 1 ---------------------------------------------------- B B 2 2 2 2 ---------------------------------------------------- B B1 1 1 1 1 ---------------------------------------------------- B B2 1 1 1 1 ---------------------------------------------------- ``` All this needs to be done in a single procedure. I cannot use a temp table.
I'm guessing that you just want something like ``` SELECT * FROM ( SELECT group, category, x1, x2, x3, x4 FROM your_table UNION ALL SELECT group, group, sum(x1), sum(x2), sum(x3), sum(x4) FROM your_table GROUP BY group ) ORDER BY group, category ``` Of course, you'd need to use your actual column names; `GROUP` is a reserved word, so you cannot have a column named `GROUP`.
You can perform multiple levels of aggregation with a GROUP BY ROLLUP(), including no aggregation at all: ``` select grp, coalesce(cat,grp) cat, sum(x1), sum(x2), sum(x3), sum(x4) from my_table group by grp, rollup(cat) order by grp, cat nulls first; ``` <http://sqlfiddle.com/#!4/4d9c4/8>
Oracle row-wise summing
[ "", "sql", "oracle", "aggregate-functions", "" ]
I have the table `EMPLOYEE` with 3 fields: ``` EMPLOYEE(ROLE SMALLINT, RATING INTEGER, NAME VARCHAR) ``` I need to get only 3 rows from this table: one row for each role, the one with the highest rating within that role. The ROLE field is a discriminant that defines the specific role from a list of three values: DEVELOPER(1), TESTER(2), MANAGER(3). So the value of the ROLE field can be either 1, 2 or 3.
Postgres specific short query: ``` SELECT DISTINCT ON (role) role, name, rating FROM employee ORDER BY role, rating DESC ``` If 2 employee have same role and rating - one of them will be picked at random.
This is a basic aggregation query (I'm assuming you are new to SQL): ``` select role, max(rating) from employee group by role ``` In response to the comment (I can see how the question is ambiguous). The right way to do this in Postgres is using a window function: ``` select role, rating, name from (select e.*, row_number() over (partition by role order by rating desc) as seqnum from employee e ) e where seqnum = 1 ``` This version returns only one row, even if there are duplicates. If you want all rows when there are multiples with the same max, then use `rank()` instead of `row_number()`.
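If you want to experiment without a Postgres instance, a window-free variant of the same greatest-per-role idea (aggregate, then join back) runs on plain SQLite; the sample rows here are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (role SMALLINT, rating INTEGER, name VARCHAR)")
conn.executemany("INSERT INTO employee VALUES (?, ?, ?)",
                 [(1, 5, "dev_a"), (1, 9, "dev_b"),
                  (2, 7, "tester_a"), (3, 4, "manager_a")])
rows = conn.execute("""
    SELECT e.role, e.rating, e.name
    FROM employee e
    JOIN (SELECT role, MAX(rating) AS mr FROM employee GROUP BY role) m
      ON e.role = m.role AND e.rating = m.mr
    ORDER BY e.role""").fetchall()
print(rows)  # [(1, 9, 'dev_b'), (2, 7, 'tester_a'), (3, 4, 'manager_a')]
```

Note that, unlike `row_number()`, this variant returns multiple rows for a role if two employees tie for the top rating.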
Quite a tricky SQL query
[ "", "sql", "postgresql", "" ]
I'm new to SQL, and haven't been able to get this SQL query right yet. I currently have: ``` SELECT * FROM tableA LEFT OUTER JOIN tableB ON tableA.`full_name` = tableB.`full_name` WHERE tableB.`id` IS NULL ``` Both tables have records of people, complete with names and addresses. I need to get ALL the records for those who are in tableA, but not tableB. The diagram below is basically what I need: ![Venn Diagram](https://i.stack.imgur.com/PHEwf.png) The problem is that two people may have the same name, but different addresses. So ultimately, I need to get the records of ALL the people that are in tableA, excluding the duplicates that have duplicate names AND addresses. Each table has columns as follows: ``` id,full_name,first_name,last_name,title,phone,address,city,state,postal_code ```
The following query will give you all the ids of people in tableA that are not in tableB based on full name and address: ``` SELECT tableA.id FROM tableA LEFT OUTER JOIN tableB -- people are the same if full name and address match ON tableA.`full_name` = tableB.`full_name` AND tableA.address = tableB.address -- filter people that are in tableA only WHERE tableB.`id` IS NULL -- filter duplicates GROUP BY tableA.id ``` You can easily edit this select to include whatever information you need from tableA.
Since you're joining on two fields, your options are an ANTI-JOIN [(Friederike S' answer)](https://stackoverflow.com/a/16844314/119477) or `NOT EXISTS`: ``` SELECT DISTINCT tablea.* FROM tablea WHERE NOT EXISTS (SELECT * FROM tableb WHERE tablea.`full_name` = tableb.`full_name` AND tableA.address = tableB.address) ``` [DEMO](http://sqlfiddle.com/#!2/17aa6/4) You can also use `NOT IN`; see [Christian Ammer's](https://stackoverflow.com/a/16846165/119477) answer. Another more obscure solution is to use the `ALL` keyword. It's very similar to `NOT IN`: ``` SELECT DISTINCT tablea.* FROM tablea WHERE ( tablea.`full_name` , tableA.address) != ALL (SELECT tableb.`full_name`, tableB.address FROM tableb) ``` [DEMO](http://sqlfiddle.com/#!2/17aa6/6)
Get records from Table A, that are not in Table B
[ "", "mysql", "sql", "logic", "" ]
I am attempting to combine a collection of 600 text files, each line looks like > `Measurement title Measurement #1` > > `ebv-miR-BART1-3p 4.60618701` > `....` > `evb-miR-BART1-200 12.8327289` with 250 or so rows in each file. Each file is formatted that way, with the same data headers. What I would like to do is combine the files such that it looks like this > `Measurement title Measurement #1 Measurement #2` > > `ebv-miR-BART1-3p 4.60618701 4.110878867` > `....` > `evb-miR-BART1-200 12.8327289 6.813287556` I was wondering if there is an easy way in python to strip out the second column of each file, then append it to a master file? I was planning on pulling each line out, then using regular expressions to look for the second column, and appending it to the corresponding line in the master file. Is there something more efficient?
It is a small amount of data for today's desktop computers (around 150000 measurements) - so keeping everything in memory, and dumping to a single file will be easier than another strategy. If it would not fit in RAM, maybe using SQL would be a nice approach there - but as it is, you can create a single default dictionary, where each element is a list - read all your files and collect the measurements to this dictionary, and dump it to disk - ``` # create default list dictionary: >>> from collections import defaultdict >>> data = defaultdict(list) # Read your data into it: >>> from glob import glob >>> import csv >>> for filename in glob("my_directory/*csv"): ... reader = csv.reader(open(filename)) ... # throw away header row: ... next(reader) ... for name, value in reader: ... data[name].append(value) ... >>> # and record everything down in another file: ... >>> mydata = open("mydata.csv", "wt") >>> writer = csv.writer(mydata) >>> for name, values in sorted(data.items()): ... writer.writerow([name] + values) ... >>> mydata.close() >>> ```
Use the csv module to read the files in, create a dictionary of the measurement names, and make the values in the dictionary a list of the values from the file.
Combining files in python using
[ "", "python", "regex", "file-io", "" ]
What would be a slick way to join many lists together, using list comprehension or otherwise ``` >>> a = [1,2,3] >>> b = [4,5,6] >>> c = [7,8,9] >>> d = [x for x in a or b or c] >>> d [1, 2, 3] ``` This doesn't quite work...
``` >>> a + b + c [1, 2, 3, 4, 5, 6, 7, 8, 9] ``` Using a list comprehension: ``` >>> [y for x in (a,b,c) for y in x] [1, 2, 3, 4, 5, 6, 7, 8, 9] ``` Using [`itertools.chain`](http://docs.python.org/2/library/itertools.html#itertools.chain): ``` >>> from itertools import chain >>> list(chain(a,b,c)) [1, 2, 3, 4, 5, 6, 7, 8, 9] ```
Just another solution: ``` >>> sum([a,b,c],[]) [1, 2, 3, 4, 5, 6, 7, 8, 9] ```
Join many lists using list comprehension
[ "", "python", "list", "list-comprehension", "" ]
I want to make a Python script that creates footnotes. The idea is to find all strings of the sort `"Some body text.{^}{Some footnote text.}"` and replace them with `"Some body text.^#"`, where `"^#"` is the proper footnote number. (A different part of my script deals with actually printing out the footnotes at the bottom of the file.) The current code I'm using for this is: ``` pattern = r"\{\^\}\{(.*?)\}" i = 0 def create_footnote_numbers(match): global i i += 1 return "<sup>"+str(i)+"</sup>" new_body_text = re.sub(pattern, create_footnote_numbers, text) ``` This works fine, but it seems weird to have to declare a variable (`i`) outside the `create_footnote_numbers` function and then have to call it inside that function. I would have thought there'd be something inside `re` that would return the number of the match.
Any callable can be used, so you could use a class to track the numbering: ``` class FootnoteNumbers(object): def __init__(self, start=1): self.count = start - 1 def __call__(self, match): self.count += 1 return "<sup>{}</sup>".format(self.count) new_body_text = re.sub(pattern, FootnoteNumbers(), text) ``` Now the counter state is contained in the `FootnoteNumbers()` instance, and `self.count` will be set anew each time you start a `re.sub()` run.
It seems like a good fit for [a closure](https://stackoverflow.com/q/13857/4279): ``` def make_footnote_counter(start=1): count = [start - 1] # emulate nonlocal keyword def footnote_counter(match): count[0] += 1 return "<sup>%d</sup>" % count[0] return footnote_counter new_body_text = re.sub(pattern, make_footnote_counter(), text) ```
How to properly iterate with re.sub() in Python
[ "", "python", "regex", "" ]
Simple question: Why does the following query not output `perday`? ``` SELECT FROM_UNIXTIME(`date`,"%Y-%m-%d") AS `perday`, COUNT(*) AS `count` FROM `data` WHERE `group` = 1 GROUP BY `perday` ``` Count gets outputted correctly, but `perday` stays empty. The `data` table is like: ``` | id | group | date | ------------------------------------------------ | 1 | 1 | 2013-04-13 06:01:02 | | 2 | 1 | 2013-04-13 14:24:18 | | 3 | 2 | 2012-01-21 21:33:03 | Etc. ``` Thanks! EDIT: Expected output: ``` | perday | -------------- | 2013-04-13 | | 2012-01-21 | ```
remove `WHERE` clause, ``` SELECT FROM_UNIXTIME(date,'%Y-%m-%d') AS perday, SUM(`group` = 1) AS `count` FROM data GROUP BY FROM_UNIXTIME(date,'%Y-%m-%d') ``` if date is formatted as `2013-04-13 06:01:02`, then why use `FROM_UNIXTIME`? Isn't it `DATE_FORMAT` instead? ``` SELECT DATE_FORMAT(date, '%Y-%m-%d') AS perday, SUM(`group` = 1) AS `count` FROM data GROUP BY DATE(date) ORDER BY date ``` This will display all available dates in the table. * [SQLFiddle Demo](http://www.sqlfiddle.com/#!2/f89bb/2) But if you want the selected `group` only, ``` SELECT DATE_FORMAT(date, '%Y-%m-%d') AS perday, COUNT(*) AS `count` FROM data WHERE `group` = 1 GROUP BY DATE(date) ORDER BY date ```
you may be looking for this ``` SELECT DATE_FORMAT(`date`,"%Y-%m-%d") AS `perday`, COUNT(*) AS `count` FROM `data` WHERE `group` = 1 GROUP BY `perday` ``` [**DEMO HERE**](http://sqlfiddle.com/#!2/3f1dd/4)
Mysql query extracting date + group by day
[ "", "mysql", "sql", "" ]
I've got a large iterable, in fact, a large iterable given by: ``` itertools.permutations(range(10)) ``` I would like to access the millionth element. I already have the problem solved in some different ways. 1. Casting the iterable to a list and getting the 1000000th element: ``` return list(permutations(range(10)))[999999] ``` 2. Manually skipping elements till 999999: ``` p = permutations(range(10)) for i in xrange(999999): p.next() return p.next() ``` 3. Manually skipping elements v2: ``` p = permutations(range(10)) for i, element in enumerate(p): if i == 999999: return element ``` 4. Using islice from itertools: ``` return islice(permutations(range(10)), 999999, 1000000).next() ``` But I still don't feel like any of them is the elegant Python way to do that. The first option is just too expensive; it needs to compute the whole iterable just to access a single element. If I'm not wrong, islice internally does the same computation I just did in method 2, and is almost exactly like the 3rd, maybe with even more redundant operations. So, I'm just curious, wondering if there is some other way in Python to access a concrete element of an iterable, or at least to skip the first elements, in some more elegant way, or if I just need to use one of the above.
Use the [`itertools` recipe `consume`](http://docs.python.org/2/library/itertools.html) to skip `n` elements: ``` def consume(iterator, n): "Advance the iterator n-steps ahead. If n is none, consume entirely." # Use functions that consume iterators at C speed. if n is None: # feed the entire iterator into a zero-length deque collections.deque(iterator, maxlen=0) else: # advance to the empty slice starting at position n next(islice(iterator, n, n), None) ``` Note the `islice()` call there; it uses `n, n`, effectively not returning *anything*, and the `next()` function falls back to the default. Simplified to your example, where you want to skip 999999 elements, then return element 1000000: ``` return next(islice(permutations(range(10)), 999999, 1000000)) ``` `islice()` processes the iterator in C, something that Python loops cannot beat. To illustrate, here are the timings for just 10 repeats of each method: ``` >>> from itertools import islice, permutations >>> from timeit import timeit >>> def list_index(): ... return list(permutations(range(10)))[999999] ... >>> def for_loop(): ... p = permutations(range(10)) ... for i in xrange(999999): p.next() ... return p.next() ... >>> def enumerate_loop(): ... p = permutations(range(10)) ... for i, element in enumerate(p): ... if i == 999999: ... return element ... >>> def islice_next(): ... return next(islice(permutations(range(10)), 999999, 1000000)) ... >>> timeit('f()', 'from __main__ import list_index as f', number=10) 5.550895929336548 >>> timeit('f()', 'from __main__ import for_loop as f', number=10) 1.6166789531707764 >>> timeit('f()', 'from __main__ import enumerate_loop as f', number=10) 1.2498459815979004 >>> timeit('f()', 'from __main__ import islice_next as f', number=10) 0.18969106674194336 ``` The `islice()` method is nearly 7 times faster than the next fastest method.
Finding the nth permutation may just be an example but if this is actually the problem you are trying to solve then there is a much better way to do this. Instead of skipping the elements of the iterable you can calculate the nth permutation directly. Borrowing the code from [another answer here](https://stackoverflow.com/a/6784359/505154): ``` import math def nthperm(li, n): li = list(li) n -= 1 s = len(li) res = [] if math.factorial(s) <= n: return None for x in range(s-1,-1,-1): f = math.factorial(x) d = n / f n -= d * f res.append(li[d]) del(li[d]) return res ``` Example and timing comparison: ``` In [4]: nthperm(range(10), 1000000) Out[4]: [2, 7, 8, 3, 9, 1, 5, 4, 6, 0] In [5]: next(islice(permutations(range(10)), 999999, 1000000)) Out[5]: (2, 7, 8, 3, 9, 1, 5, 4, 6, 0) In [6]: %timeit nthperm(range(10), 1000000) 100000 loops, best of 3: 9.01 us per loop In [7]: %timeit next(islice(permutations(range(10)), 999999, 1000000)) 10 loops, best of 3: 29.5 ms per loop ``` Same answer, over 3000 times faster. Note that I did make a slight modification to the original code so that it will no longer destroy the original list.
Elegant way to skip elements in an iterable
[ "", "python", "iterator", "iterable", "" ]
I need to return the top 10 products per year, how can I do this with my following query? ``` SELECT DP.ProductID , DP.Name , Year(FS.OrderDate) as TheYear , FS.OrderQty , FS.OrderAmount FROM dbo.DimProduct AS DP LEFT JOIN dbo.FactSales as FS on FS.ProductID = DP.ProductID ```
This should be easy if your RDBMS supports `Window Functions` ``` SELECT ProductID, Name, TheYear, OrderQty, OrderAmount FROM ( SELECT DP.ProductID ,DP.Name ,Year(FS.OrderDate) as TheYear ,FS.OrderQty ,FS.OrderAmount ,ROW_NUMBER() OVER (PARTITION BY Year(FS.OrderDate) ORDER BY FS.OrderQty DESC) rn FROM dbo.DimProduct AS DP LEFT JOIN dbo.FactSales as FS on FS.ProductID = DP.ProductID ) s WHERE rn <= 10 ORDER BY TheYear ``` * [TSQL Ranking Functions](http://msdn.microsoft.com/en-us/library/ms189798.aspx) The current query will give you `10` products for every `TheYear` based on `FS.OrderQty` since you have not mentioned the criteria on how the records will be sorted out. The `ROW_NUMBER()` (*a RANKING function*) will generate a sequence of numbers for each group, in this case `Year(FS.OrderDate)`, that is sorted out based on `FS.OrderQty`. The records will then be filtered out based on the value of the generated sequence. --- However, `ROW_NUMBER()` will not generate a tie on records having the same `FS.OrderQty`. If you want ties to be handled, use `DENSE_RANK()` instead of `ROW_NUMBER()`.
You want to use the function `row_number()` to get the top 10. This assumes that `OrderAmount` defines the top 10: ``` select t.* from (SELECT DP.ProductID, DP.Name, Year(FS.OrderDate) as TheYear, FS.OrderQty, FS.OrderAmount, row_number() over (partition by Year(FS.OrderDate) order by fs.OrderAmount desc ) as seqnum FROM dbo.DimProduct DP LEFT JOIN dbo.FactSales FS on FS.ProductID = DP.ProductID ) t where seqnum <= 10; ``` The function `row_number()` enumerates rows, starting with 1. It starts over within each group, as defined by the `partition by` clause (in your case, the year). The ordering of the numbers is based on the `order by` clause (in your case, `fs.OrderAmount desc`). So, the ten best products in each year will have the numbers 1-10 and the `where` clause just chooses them.
Return top 10 per year
[ "", "sql", "t-sql", "greatest-n-per-group", "" ]
I'm having a problem with this SQL query. The query is self-explanatory, so looking at it will tell you what problem I'm having: ``` SELECT customer.customerid, paymentdata.paidamount, paymentdata.balanceamount, sum(paymentreceipt.paidamount) AS Expr1 FROM customer INNER JOIN paymentdata ON customer.customerid = paymentdata.customerid INNER JOIN paymentreceipt ON customer.customerid = paymentreceipt.customerid GROUP BY customer.customerid ```
So how do you want to calculate the other columns? You are grouping only by `customerid`, but you also want to show `paymentdata.paidamount`, `paymentdata.balanceamount` and `paymentreceipt.paidamount`. Consider that the grouping returns multiple rows for each group, so SQL Server (or any other RDBMS except MySQL) doesn't know which row you want to show for the ungrouped columns. If you don't want to pick out a single row you have to aggregate the column, e.g. by using `SUM`, `COUNT`, `MAX` or `AVG`. For example: ``` SELECT customer.customerid, MAX(paymentdata.paidamount) AS MaxPaid, AVG(paymentdata.balanceamount) AS AverageBalance, SUM(paymentreceipt.paidamount) AS ReceiptPaidSum FROM customer INNER JOIN paymentdata ON customer.customerid = paymentdata.customerid INNER JOIN paymentreceipt ON customer.customerid = paymentreceipt.customerid GROUP BY customer.customerid ```
You are missing a few terms in your group by. All columns, other than aggregated ones, need to be included.
SQL Query with Group by clause, with sum aggregate
[ "", "sql", "" ]
I have the following piece of code inside a function: ``` try: PLACES.append(self.name) except NameError: global PLACES PLACES = [self.name] ``` Which causes `from <file containing that code> import *` to return ``` SyntaxWarning: name 'PLACES' is used prior to global declaration global PLACES ``` So I was wondering if it is considered bad practice to do such a thing, and if so, what is the correct way of doing it? I'm a noob btw.
The first problem is you shouldn't do `from foo import *`, this is just bad practice and will cause namespace collisions (without any warnings, by the way), and will cause you headaches later on. If you need to share a global storage space between two modules; consider pickling the object and unpickling it where required; or a k/v store, cache or other external store. If you need to store rich data, a database might be ideal. Checking if a name points to a object is usually a sign of bad design somewhere. You also shouldn't assume to pollute the global namespace if a name doesn't exist - how do you know `PLACES` wasn't deleted intentionally?
Yes, it is considered a bad practice. Just make sure the variable is defined. Virtually always, this is as simple as a module-level assignment with a reasonable default value: ``` Places = [] ``` When the default value should not be instantiated at import time (e.g. if it is very costly, or has some side effect), you can at least initialize it to `None` and check whether `the_thing is None` when it's needed, initializing it if it's still `None`.
Is using "try" to see if a variable is defined considered bad practice in Python?
[ "", "python", "variables", "" ]
I have 2 lists, for example: ``` a = ['a','b','c','d','e'] b = ['c','a','dog'] ``` I would like to sort the common elements in list b by the order of list a, to get something like this: ``` ['a','c','dog'] ``` I have read similar questions using `sorted()`, however I cannot make it work when the lists do not contain the same elements (i.e. `'dog'` in list `b`).
I'd turn `a` into dictionary: ``` a_dict = dict((v, i) for i, v in enumerate(a)) ``` and use `float('inf')` to indicate values to be sorted at the end: ``` sorted(b, key=lambda v: a_dict.get(v, float('inf'))) ``` Demo: ``` >>> a = ['a','b','c','d','e'] >>> b = ['c','a','dog'] >>> a_dict = dict((v, i) for i, v in enumerate(a)) >>> sorted(b, key=lambda v: a_dict.get(v, float('inf'))) ['a', 'c', 'dog'] ``` This has the advantage of speed; `dict` lookups are O(1) versus list `.index()` lookups having a `O(n)` cost. You'll notice this more as `a` and `b` grow in size. The disadvantage is that duplicate values in `a` are handled differently; the `dict` approach picks the last index versus `.index()` picking the first.
``` >>> a = ['a','b','c','d','e'] >>> b = ['c','a','dog'] >>> def func(x): ... try: ... return a.index(x) ... except ValueError: ... return float("inf") ... >>> sorted(b, key = func) ['a', 'c', 'dog'] ```
Sort python list with order of another, larger list
[ "", "python", "list", "sorting", "" ]
What is the best way to find and update duplicate records only from a table. For instance, the below records are considered duplicates with only the ID making them unique. I need to update records 2 and 3 active field to 0 instead of 1 and set the date to getdate(). I need the first instance of these duplicate (ID 1) to not be updated. I have a table that has thousands of this scenario and need to deactivate the duplicate records. Any ideas? acct\_plan table: ``` ID act plan active date 1 123 blue 1 NULL 2 123 blue 1 NULL 3 123 blue 1 NULL ``` Thanks in advance for any help! :)
I like to use the following CTE for this type of problem: ``` with toupdate as ( select ap.*, row_number() over (partition by act, [plan], active, date order by id ) as seqnum from acct_plan ap ) update toupdate set date = getdate(), active = 0 where seqnum > 1; ``` This is syntax that SQL Server supports, but doesn't generally work in other databases.
``` update t1 set t1.active=0, t1.date = GETDATE() from acct_plan t1 where t1.ID NOT in (select MIN(t2.Id) from acct_plan t2 GROUP BY t2.act, t2.[plan], t2.active, t2.date); ``` [SqlFiddle](http://sqlfiddle.com/#!3/6f91a/1) By the way, `plan` is a [reserved keyword](http://msdn.microsoft.com/en-us/library/ms189822%28v=sql.90%29.aspx) in sql server 2005. Always a bad idea to use reserved keywords for object names...
SQL UPDATE fields on only the duplicate records, initial instance must not change
[ "", "sql", "sql-server-2005", "" ]
I have a long list (`nextWordIndices`) of indices, of size 7000. I want to get a list of values from another list to match those indices. I can do this, but it takes lots of time ``` nextWord = [] for i in nextWordIndices: nextWord.append(allWords[i]) ``` Is there any optimized way?
If the indices are frequently the same, you can use `operator.itemgetter`: ``` word_getter = operator.itemgetter(*nextWordIndices) nextWord = word_getter(allWords) ``` If you can use `word_getter` multiple times, and `tuple`s are OK for output, you might see a speed-up compared to a list-comprehension. Timings: ``` python -m timeit -s "allWords = range(7000); nextWordIndices = range(7000)" "[allWords[i] for i in nextWordIndices]" 1000 loops, best of 3: 415 usec per loop python -m timeit -s "allWords = range(7000); nextWordIndices = range(7000)" "map(allWords.__getitem__, nextWordIndices)" 1000 loops, best of 3: 614 usec per loop python -m timeit -s "allWords = range(7000); nextWordIndices = range(7000); from operator import itemgetter" "itemgetter(*nextWordIndices)(allWords)" 1000 loops, best of 3: 292 usec per loop ```
Using a list comp: ``` nextWord = [allWords[i] for i in nextWordIndices] ``` ~~Actually this might be faster~~ (will have to `timeit`) ``` map(allWords.__getitem__, nextWordIndices) ```
optimized way to get values from list when list of indices given
[ "", "python", "list", "loops", "" ]
The question is the inverse of [this question](https://stackoverflow.com/q/16856788/1214214). I'm looking for a generic method to from the original big array from small arrays: ``` array([[[ 0, 1, 2], [ 6, 7, 8]], [[ 3, 4, 5], [ 9, 10, 11]], [[12, 13, 14], [18, 19, 20]], [[15, 16, 17], [21, 22, 23]]]) -> array([[ 0, 1, 2, 3, 4, 5], [ 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17], [18, 19, 20, 21, 22, 23]]) ``` I am currently developing a solution, will post it when it's done, would however like to see other (better) ways.
``` import numpy as np def blockshaped(arr, nrows, ncols): """ Return an array of shape (n, nrows, ncols) where n * nrows * ncols = arr.size If arr is a 2D array, the returned array looks like n subblocks with each subblock preserving the "physical" layout of arr. """ h, w = arr.shape return (arr.reshape(h//nrows, nrows, -1, ncols) .swapaxes(1,2) .reshape(-1, nrows, ncols)) def unblockshaped(arr, h, w): """ Return an array of shape (h, w) where h * w = arr.size If arr is of shape (n, nrows, ncols), n subblocks of shape (nrows, ncols), then the returned array preserves the "physical" layout of the subblocks. """ n, nrows, ncols = arr.shape return (arr.reshape(h//nrows, -1, nrows, ncols) .swapaxes(1,2) .reshape(h, w)) ``` For example, ``` c = np.arange(24).reshape((4,6)) print(c) # [[ 0 1 2 3 4 5] # [ 6 7 8 9 10 11] # [12 13 14 15 16 17] # [18 19 20 21 22 23]] print(blockshaped(c, 2, 3)) # [[[ 0 1 2] # [ 6 7 8]] # [[ 3 4 5] # [ 9 10 11]] # [[12 13 14] # [18 19 20]] # [[15 16 17] # [21 22 23]]] print(unblockshaped(blockshaped(c, 2, 3), 4, 6)) # [[ 0 1 2 3 4 5] # [ 6 7 8 9 10 11] # [12 13 14 15 16 17] # [18 19 20 21 22 23]] ``` --- Note that there is also [superbatfish's `blockwise_view`](https://stackoverflow.com/a/28207538/190597). It arranges the blocks in a different format (using more axes) but it has the advantage of (1) always returning a view and (2) being capable of handling arrays of any dimension.
Yet another (simple) approach: ``` threedarray = ... twodarray = np.array(map(lambda x: x.flatten(), threedarray)) print(twodarray.shape) ```
Form a big 2d array from multiple smaller 2d arrays
[ "", "python", "numpy", "" ]
I'm new to Python and found a couple of suggestions for finding the longest WORD in a string, but none which accounted for a string with a number of words which match the longest length. After playing around, I settled on this: ``` inputsentence = raw_input("Write a sentence: ").split() longestwords = [] for word in inputsentence: if len(word) == len(max(inputsentence, key=len)): longestwords.append(word) ``` That way I have a list of the longest words that I can do something with. Is there any better way of doing this? **NB:** Assume `inputsentence` contains no integers or punctuation, just a series of words.
If you'll be doing this with short amounts of text only, there's no need to worry about runtime efficiency: **Programming efficiency,** *in coding, reviewing and debugging,* is far more important. So the solution you have is fine, since it's clear and sufficiently efficient for even thousands of words. (However, you ought to calculate `len(max(inputsentence, key=len))` just once, before the `for` loop.) But suppose you do want to do this with a large corpus, which might possibly be several gigabytes long? Here's how to do it **in one pass,** without ever storing every word in memory (note that `inputcorpus` might be an iterator or function that reads the corpus in stages): Save all the *longest* words only. If you see a word that's longer than the current maximum, it's obviously the first one at this length, so you can start a fresh list. ``` maxlength = 0 maxwords = [ ] # unnecessary: will be re-initialized below for word in inputcorpus: if len(word) > maxlength: maxlength = len(word) maxwords = [ word ] elif len(word) == maxlength: maxwords.append(word) ``` If a certain word of maximal length repeats, you'll end up with several copies. To avoid that, just use `set( )` instead of a list (and adjust initializing and extending).
How about this: ``` from itertools import groupby as gb inputsentence = raw_input("Write a sentence: ").split() lwords = list(next(gb(sorted(inputsentence, key=len, reverse=True), key=len))[1]) ```
Is this an efficient way of finding the longest words (plural) in a string?
[ "", "python", "string", "python-2.7", "words", "" ]
I have a dictionary where the values are list elements: ``` dict1={'A':[9,0,8],'B':[6,5,4],'C':[]} ``` Now I have to process this dictionary to check if all the values in the dictionary are empty lists; and if so, collect the keys. Is there a better option than simply iterating through the entries?
Once you have a dictionary, you can't avoid having to iterate over the items in order to filter it: ``` keys = [key for key, value in dict1.items() if not value] if len(keys) != len(dict1): # Do something about the mis-match ```
``` if not any(dict1.values()): res = dict1.keys() ```
Iterating a dictionary
[ "", "python", "" ]
I am trying to exclude cases from a comprehension which return `None`. The following achieves it but results in calling the function twice!!! ``` >>> def func(x): ... if x>10: ... return None ... else: ... return x*2 ... >>> result1=[func(x) for x in l if func(x) is not None] >>> result1 [4, 8, 14, 6] ``` What would be the way out?
``` result1[:] = [x for x in map(func, l) if x is not None] ``` On Python 2.x use `itertools.imap` **Note:** `filter(None, map(func, l))` would usually work but that would also remove `0` (since it is considered empty)
What about this code : ``` print([x*2 for x in l if x <= 10]) ``` **Edit** Generally it can take form of : ``` print([process(x) for x in l if is_valid(x)] ```
How to exclude cases from comprehension
[ "", "python", "" ]
I am using oracle's SQL Developer. To begin with, I have this table: ``` Name Null Type -------------- -------- ------------ EMPLOYEE_ID NOT NULL NUMBER(6) FIRST_NAME VARCHAR2(20) LAST_NAME NOT NULL VARCHAR2(25) DEPARTMENT_ID NUMBER(4) ``` I would like for each employee to show his name and the number of colleagues from his department. This is what I got so far: ``` select first_name, department_id, count(employee_id) from employees group by department_id; ``` This generates an error: ``` ORA-00979: not a GROUP BY expression 00979. 00000 - "not a GROUP BY expression" ``` I would really need some help. I am a total beginner so any suggestion is welcome. UPDATE: So, for each Employee, I want to show the number of his colleagues from the same department, and his name. I have updated the question.
The query you posted would not generate the error you indicate ``` SQL> create table employees( 2 employee_id number primary key, 3 first_name varchar2(20), 4 last_name varchar2(25), 5 department_id number 6 ); Table created. SQL> select first_name, department_id, count(employee_id) 2 from employees 3 group by first_name, department_id; no rows selected ``` However, it would also not produce the results that you seem to indicate that you want. From your description, it appears that you want something like ``` select first_name, last_name, count(*) over (partition by department_id) - 1 num_colleagues_in_department from employees ```
You can do it, among other ways, using a subquery (as many other answers) or a `LEFT JOIN` like this one; ``` SELECT u.employee_id, u.first_name, u.last_name, u.department_id, COUNT(c.employee_id)-1 colleagues FROM employees u LEFT JOIN employees c ON u.department_id=c.department_id GROUP BY u.employee_id, u.first_name, u.last_name, u.department_id ORDER BY employee_id ``` A normal `JOIN` or `INNER JOIN` would only return users that actually have at least one colleague, a `LEFT JOIN` returns users even if they don't have a colleague so we can count them. [An SQLfiddle to test with](http://sqlfiddle.com/#!4/26399/2).
oracle sql select query
[ "", "sql", "oracle", "select", "group-by", "" ]