repo_name stringlengths 6 67 | path stringlengths 5 185 | copies stringlengths 1 3 | size stringlengths 4 6 | content stringlengths 1.02k 962k | license stringclasses 15 values |
|---|---|---|---|---|---|
jmorris0x0/CFDscraper | CFDscraper.py | 1 | 31725 | #! /usr/bin/env python3
# -*- coding: utf-8 -*-
"""
A module to scrape financial data from web tables and write to MySQL.
Usage: python CFDscraper.py ./config1.cfg
First and only arg is optional path to a config file.
One of the items is a list of lists with table info in it that seems like
a headache to parse with configparser so this module simply exec()s a text
file excerpted from the config section below. (Yes, yes, I know using exec()
like this is frowned upon.)
TODO:
Issue #1:
Find a way to detect when the phantomjs driver becomes
inactive. For some reason, the page at investing.com stops updating. On
inspecting, I can see that network GETs are still occurring. How then to detect
this? How about running two different instances of the page at a time and
comparing them? When there is a mismatch, do a reload on one or both.
Make this a switch in the config. You will have to rewrite things to be a
little more OO to make it work. Might actually be fun to implement.
Make it so that any number of browsers can be called for purposes of
redundancy.
You might think about putting a lock on the database when you check it and
then remove the lock when you are done. This way, you could have multiple
instances of each program running concurrently. They could even be running on
different computers in different locations.
Issue #2:
Move away from SQLalchemy connectionless execution and add
rollback. In order to implement this properly I'll need to have something in
memory to fill and empty in the order it was filled. I've already implemented
this in the dbbuffer class I wrote. Move it over if needed.
Issue #3:
Move logging options to a command line switch.
Figure out why the SQL server stops updating, which forces a restart every week.
Make the scraper deal gracefully with the SQL server going away.
(Wait loop with stored data. Write rows to a flat file perhaps.)
Go through every sys.exit() below and make it enter a wait loop.
Issue #4:
Test scraper for recovery with a restart of the SQL database.
Issue #5:
After "loading webpage" I need to check for the page actually being loaded.
I have had a couple errors where I got a "Couldn't close popup" and the
screenshot was just blank. There is no way I should be getting that far. Some
element needs to be checked for. The title?
Issue #6:
Now spawning zombie or orphan processes.
/usr/sbin/mysqld
Nope, this is normal behavior for mysql. It does this to improve performance.
Also phantomjs
This does not look normal.
Perhaps this with 1.9.1:
https://github.com/Obvious/phantomjs/issues/71
I'm on 1.9.2 right now on my mac and 1.9.0 on linux.
current is 1.9.7
I uninstalled with apt and put the 1.9.7 executable into /usr/bin
This executes just fine but I'm still getting 17 processes.
I wonder if instead of launching phantomjs multiple times, I'm supposed to
launch multiple windows?
This may have nothing to do with phantomjs. It may be selenium or ghostdriver
that is messing up.
Upgraded selenium as well to 2.39 and no luck.
This is a problem. I've reached my memory (but not CPU) limit for running
CFDscraper at seven instances. That's probably 49 instances of phantomjs at
133M each. (6.5GB) At this rate, going back to chrome would be much better.
Next try calling the browser manually while closely watching top to see
where things go south. Then go deeper and see if the same behavior arises in
pure phantomjs. Not sure how to do this. Do I need to do it in a js
command line?
Next try running the same with chrome. Seven chrome browsers isn't too bad.
You will need to make sure chromedriver is installed in the right place.
Also move chromedriver to the correct place for osx so you don't have to
specify in the configs.
Ultimately, set something to do unix ps and count zombies. I can't have this
happen again.
Other Issues:
Look for chrome where it belongs for each OS.
Check for function on linux and windows.
If the database is not available, write rows to an object that can be "emptied"
later.
Include hours of operation in the config file and don't scrape at these times.
Make a way to scrape a page with just one data point.
Turn the database writer into a class in order to do away with pesky globals.
Move classes into a separate file.
"""
import sys
from time import sleep, time
import datetime
##### For scraping ######
from selenium import webdriver
from selenium.common.exceptions import NoSuchWindowException
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities
from bs4 import BeautifulSoup
from sqlalchemy import (create_engine, MetaData, Table, Column,
Integer, DateTime, Float)
from dateutil.parser import parse
import pandas as pd
##### Logging ############
import logging
import logging.handlers
import uuid # For creating unique name for screenshots.
###### For timeout ########
from functools import wraps
import errno
import os
import signal
###### Some globals #########################################################
# Put these into a class at some point.
total_rows_scraped = 0 # Don't change this. It's just a counter.
last_write_time = time() # Also a counter.
##############################################################################
###### Default Configuration Data ############################################
###### Copy this section to a config file and load it with a CL argument #####
dataname = 'bondCFD'
logpath = dataname + '_scrape.log'
chromepath = '/Users/jmorris/Code/chromedriver'
browser_choice = "phantomjs" # Choose chrome, firefox, or phantomjs
phantom_log_path = dataname + '_phantomjs.log'
# Database info:
db_host = 'dataserve.local'
db_user = 'j'
db_pass = ''
db_name = 'mydb'
db_dialect = 'mysql+pymysql'
# Page info:
page_source_timeout = 5 # In seconds. Must be an integer.
browser_lifetime = 1680 # In seconds. 14400 is four hours.
base_url = 'http://www.investing.com'
url_string = base_url + '/rates-bonds/government-bond-spreads'
web_tz = 'GMT'
# Table info:
attribute = {'id': 'bonds'}
time_col = "UTCTime"
row_title_column = 'Country' # Need this to know index column.
refresh_rate = 10.5 # Minimum number of seconds between scrapes.
# Table form:
# bootstrap = (db_table_name,
# ((db_column1_name, web_row_string, web_col_string),
# (db_column2_name, web_row_string, web_col_string)))
# Timestamp column name is special and will be made primary key
# All others default to float.
# The timestamp column, whatever its dtype, must be the first for
# everything to work.
# It can be just one big list of lists. I just thought the format below
# would be more readable and less prone to making typos.
###### Tables list #########
bootstrap_list = []
bootstrap1 = ("German10yrbond",
(("UTCTime", "Germany", "Time"),
("Value", "Germany", "Yield")))
bootstrap_list.append(bootstrap1)
bootstrap_list.sort()
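The nested-tuple layout above can be unpacked with plain indexing; a standalone sketch (reusing the bootstrap entry from this config, touching nothing else) of how the table name, DB column names, and web (row, column) lookups fall out:

```python
# Standalone sketch: how one bootstrap entry decomposes. The entry below
# mirrors bootstrap1 from this config; nothing here touches the database.
bootstrap = ("German10yrbond",
             (("UTCTime", "Germany", "Time"),
              ("Value", "Germany", "Yield")))

table_name = bootstrap[0]
column_names = [col[0] for col in bootstrap[1]]           # DB column names
web_lookups = [(col[1], col[2]) for col in bootstrap[1]]  # (web row, web col)

print(table_name)    # German10yrbond
print(column_names)  # ['UTCTime', 'Value']
print(web_lookups)   # [('Germany', 'Time'), ('Germany', 'Yield')]
```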
###############################################################################
###############################################################################
def import_config():
"""
"""
if len(sys.argv) > 1:
filename = sys.argv[1]
print("loading config file:" + sys.argv[1])
else:
filename = './CFDscraper.cfg'
exec(compile(open(filename, "rb").read(), filename, 'exec'),
globals(),
globals()) # Force import to global namespace.
import_config() # This needs to happen before the logger gets set up.
######## Set up logging ######################################################
logger = logging.getLogger('CFDscraper') # Or __name__
logger.setLevel(logging.DEBUG)
# Create file handler which logs even debug messages.
file_hand = logging.handlers.RotatingFileHandler(logpath,
maxBytes=10000,
backupCount=2)
file_hand.setLevel(logging.ERROR) # Set logging level here.
# Create console handler with a higher log level.
console_hand = logging.StreamHandler()
console_hand.setLevel(logging.ERROR) # Set logging level here. Normally INFO
# Create formatter and add it to the handlers.
form_string = '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
formatter = logging.Formatter(form_string)
formatter2 = logging.Formatter('%(message)s')
console_hand.setFormatter(formatter2)
file_hand.setFormatter(formatter)
# Add the handlers to logger.
logger.addHandler(console_hand)
logger.addHandler(file_hand)
###### Make timeout wrapper for pageloads and such ############################
class TimeoutError(Exception):
pass
def timeout(seconds=10, error_message=os.strerror(errno.ETIME)):
"""
Timeout wrapper.
From:
http://stackoverflow.com/questions/2281850/
timeout-function-if-it-takes-too-long-to-finish?lq=1
"""
def decorator(func):
def _handle_timeout(signum, frame):
raise TimeoutError(error_message)
#@wraps(func)
def wrapper(*args, **kwargs):
signal.signal(signal.SIGALRM, _handle_timeout)
signal.alarm(seconds)
try:
result = func(*args, **kwargs)
finally:
signal.alarm(0)
return result
return wraps(func)(wrapper)
return decorator
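A self-contained usage sketch of this alarm-based pattern (Unix-only, main thread only; the copy below uses Python 3's built-in TimeoutError rather than the module's own class, and the two decorated functions are invented for illustration):

```python
# Self-contained copy of the alarm-based timeout pattern above, with a
# toy usage. Unix-only: signal.alarm/SIGALRM, main thread only.
import signal
from functools import wraps
from time import sleep

def timeout(seconds=10, error_message="timed out"):
    def decorator(func):
        def _handle_timeout(signum, frame):
            raise TimeoutError(error_message)
        def wrapper(*args, **kwargs):
            signal.signal(signal.SIGALRM, _handle_timeout)
            signal.alarm(seconds)           # Arm the alarm.
            try:
                result = func(*args, **kwargs)
            finally:
                signal.alarm(0)             # Disarm, even on failure.
            return result
        return wraps(func)(wrapper)
    return decorator

@timeout(seconds=1)
def hung_call():
    sleep(2)                # Simulates webdriver.page_source hanging.
    return "unreachable"

@timeout(seconds=3)
def quick_call():
    return "page html"

print(quick_call())         # -> page html
try:
    hung_call()
except TimeoutError as err:
    print("caught:", err)   # -> caught: timed out
```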
###############################################################################
######## Open database and check that it can be reached #######################
def db_setup():
"""
Connects to the database using SQLalchemy-core. This is the only
function that is called outside of main().
"""
# print("Enter password for " + db_user + "@" + db_host + ":")
logger.info('Connecting to database.')
connect_string = (db_dialect + '://' +
db_user + ':' +
db_pass + '@' +
db_host + '/' +
db_name)
try:
engine = create_engine(connect_string,
echo=False,
pool_recycle=3600)
metadata = MetaData(bind=engine)
conn = engine.connect()
except:
logger.error('ERROR: Database not reachable. Exiting', exc_info=1)
sys.exit()
return engine, metadata, conn
########## Webdrivers class ###################################################
class Browser(object):
"""
Wrapper class for webdriver.
Usage:
browser = Browser()
browser = Browser("phantomjs") # Default is chrome, also firefox.
browser.refresh()
browser.quit()
browser.age()
browser.type()
browser.source()
TODO:
Move popup close and url load to separate functions.
Make internal methods "private".
"""
def __init__(self, browser_type="chrome"):
self.browser_type = browser_type.lower()
self.driver = self.new_driver(self.browser_type)
self.start_time = time()
def new_driver(self, browser_type):
if browser_type == "chrome":
driver = self.new_chrome_driver()
elif browser_type == "firefox":
driver = self.new_firefox_driver()
elif browser_type == "phantomjs":
driver = self.new_phantomjs_driver()
else:
logger.critical("Invalid browser choice. Exiting")
clean_up(self)
self.start_time = time()
return driver
def new_chrome_driver(self):
"""
Opens a Chrome webdriver instance.
Options:
http://peter.sh/experiments/chromium-command-line-switches/
"""
try:
options = webdriver.ChromeOptions()
options.add_argument('--disable-bundled-ppapi-flash')
options.add_argument('--disable-pepper-3d')
options.add_argument('--disable-internal-flash')
options.add_argument('--disable-flash-3d')
options.add_argument('--disable-flash-stage3d')
options.add_argument('--disable-core-animation-plugins')
options.add_argument('--disable-plugins')
options.add_argument('--views-corewm-window-animations-disabled')
# options.add_argument('--disable-images')
# options.add_argument('--disable-javascript') # bad idea
# list of switches: print(options.arguments)
logger.info("Loading Chrome webdriver.")
driver = webdriver.Chrome(executable_path=chromepath,
chrome_options=options)
logger.info("Loading webpage.")
except:
logger.error("Can't open webdriver.", exc_info=1)
attempts = 0
while attempts < 10:
try:
logger.info("Loading webpage: " + url_string)
driver.get(url_string)
break
except:
attempts += 1
logger.error("Page load failed. Retrying.")
sleep(2)
# The tracebacks generated here are of little use.
# All the good stuff is inside phantomjs.
# logger.critical("Can't load webpage.", exc_info=1)
# clean_up(self)
if attempts == 10:
logger.critical("Page load re-try limit exceeded.")
clean_up(self)
try:
# browser.find_element_by_class_name("popupAdCloseIcon").click()
driver.find_element_by_partial_link_text("Continue").click()
except:
logger.error("ERROR: Can't close the popup.")
pass
return driver
def new_firefox_driver(self):
"""
Opens a Firefox browser and closes the popup.
I switched to Firefox from Chrome because for some reason lxml doesn't
work with Chrome and Python 3.3. (Because unicode from Chrome was being
ignored by lxml.)
"""
## Firefox profile object
try:
firefox_profile = webdriver.FirefoxProfile()
# Disable images
# firefox_profile.set_preference('permissions.default.image', 2)
# Diasble flash
firefox_profile.set_preference(
'dom.ipc.plugins.enabled.libflashplayer.so', 'false')
# (try to) Disable popups
firefox_profile.set_preference('network.http.prompt-temp-redirect',
'false')
# browser.browserHandle = webdriver.Firefox(firefox_profile)
firefox_profile.set_preference('plugin.state.flash', 0)
logger.info("Loading FireFox webdriver.")
driver = webdriver.Firefox(firefox_profile)
except:
logger.critical("ERROR: Can't open browser.", exc_info=1)
clean_up(self)
attempts = 0
while attempts < 10:
try:
logger.info("Loading webpage: " + url_string)
driver.get(url_string)
break
except:
attempts += 1
logger.error("Page load failed. Retrying.")
sleep(2)
# logger.critical("Can't load webpage.", exc_info=1)
# clean_up(self)
try:
driver.find_element_by_partial_link_text("Continue").click()
except:
logger.error("ERROR: Can't close popup.")
pass
return driver
def new_phantomjs_driver(self):
"""
Opens a PhantomJS webdriver (driven via GhostDriver).
For OSX:
brew install phantomjs
If not using brew:
Install NodeJS
Using Node's package manager install phantomjs:
npm -g install phantomjs
Install selenium (in virtualenv, if using.)
For others:
http://phantomjs.org/download.html
"""
# PhantomJS args:
# service_args : A List of command line arguments to pass to PhantomJS
# service_log_path: Path for phantomjs service to log to.
# Command line:
# github.com/ariya/phantomjs/wiki/API-Reference#command-line-options
# PhantomJS user agent out of the box:
# "Mozilla/5.0 (Macintosh; PPC Mac OS X) AppleWebKit/534.34
# (KHTML, like Gecko) PhantomJS/1.9.2 Safari/534.34"
# https://github.com/ariya/phantomjs/issues/11156
# Set the user agent string to something less robotronic:
user_agent = ("Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_1) " +
"AppleWebKit/534.34 (KHTML, like Gecko) " +
"Chrome/31.0.1650.63 Safari/534.34")
dcap = dict(DesiredCapabilities.PHANTOMJS)
dcap["phantomjs.page.settings.userAgent"] = user_agent
service_args = ['--debug=false',
'--ignore-ssl-errors=true'
] # Set phantomjs command line options here.
try:
logger.info("Loading PhantomJS webdriver.")
driver = webdriver.PhantomJS(executable_path="phantomjs",
desired_capabilities=dcap,
service_log_path=phantom_log_path,
service_args=service_args)
except:
logger.critical("ERROR: Can't open browser.", exc_info=1)
clean_up(self)
driver.set_window_size(1024, 768)
attempts = 0
while attempts < 10:
try:
logger.info("Loading webpage: " + url_string)
driver.get(url_string)
break
except:
attempts += 1
logger.error("Page load failed. Retrying.")
sleep(2)
# logger.critical("Can't load webpage.", exc_info=1)
# clean_up(self)
try:
driver.find_element_by_partial_link_text("Continue").click()
except:
logger.error("ERROR: Can't close popup.")
tempname = str(uuid.uuid4()) + '.png'
driver.save_screenshot(tempname)
logger.error("Screenshot: " + tempname)
return driver
def refresh(self):
""" """
try:
self.driver.quit()
except:
logger.error("ERROR: Browser process won't die.", exc_info=1)
self.driver = self.new_driver(self.browser_type)
def type(self):
return self.browser_type
def age(self):
self.browser_age = (time() - self.start_time)
return self.browser_age
def quit(self):
self.driver.quit()
return
def source(self):
logger.debug("Browser.source() called.")
try:
self.html_source = self.source_inner()
except NoSuchWindowException:
logger.error("Window missing.")
self.refresh()
try:
self.html_source = self.source_inner()
except:
logger.critical("2nd try on source load failed.", exc_info=1)
clean_up(self)
except TimeoutError:
logger.error("Time limit exceeded for webdriver.page_source.")
logger.error("Refreshing webdriver.")
self.refresh()
try:
self.html_source = self.source_inner()
except:
logger.critical("2nd try on source load failed.", exc_info=1)
clean_up(self)
return self.html_source
@timeout(page_source_timeout)
def source_inner(self):
"""
Wrapper for browser.page_source so that it can be timed out if hung.
"""
return self.driver.page_source # Must be unbound method.
###############################################################################
def setup_tables(bootstrap_list, metadata):
"""
Creates needed tables in the database using bootstrap_list as guide.
TODO:
Autoincrement isn't being set on the integer column.
"Setting the autoincrement field has no effect for columns that are
not part of the primary key."
There are ways around this but they seem like hacks that will not
be portable to another database.
Update: Now have two primary keys. Problem? Not sure.
"""
logger.info("Setting up database tables.")
for entry in bootstrap_list:
column_list = [row[0] for row in entry[1]]
Table(entry[0], metadata,
Column('id', Integer(),
nullable=False,
autoincrement=True,
primary_key=True),
*((Column(time_col, DateTime(),
primary_key=True,
autoincrement=False,
nullable=False))
if colname == time_col
else (Column(colname, Float(), nullable=False))
for colname in column_list))
metadata.create_all()
def get_last_row_dict(table_title):
"""
Gets the last entry in the table, to see whether the web entry is
new enough to warrant an update.
"""
sql_table = Table(table_title, metadata, autoload=True)
query = sql_table.select().order_by('-id').limit(1)
result_set = query.execute()
keys = result_set.keys()
values = result_set.fetchone()
if values is None:
values = len(keys) * [None]
data_dict = dict(zip(keys, values))
return data_dict
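The keys/values-to-dict step at the end generalizes beyond SQLAlchemy; a stdlib-only sketch of the same "last row as a dict" pattern, using sqlite3 and a hypothetical in-memory table (the table name and rows are invented for illustration):

```python
# Stdlib sketch of the "last row as a dict" pattern, with sqlite3
# standing in for SQLAlchemy. Table and data are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE German10yrbond "
             "(id INTEGER PRIMARY KEY, UTCTime TEXT, Value REAL)")
conn.execute("INSERT INTO German10yrbond (UTCTime, Value) "
             "VALUES ('2014-01-02 09:00:00', 1.93)")
conn.execute("INSERT INTO German10yrbond (UTCTime, Value) "
             "VALUES ('2014-01-02 09:00:10', 1.94)")

cur = conn.execute("SELECT * FROM German10yrbond ORDER BY id DESC LIMIT 1")
keys = [d[0] for d in cur.description]
values = cur.fetchone()
if values is None:                   # Empty table: same shape, all None.
    values = len(keys) * [None]
row_dict = dict(zip(keys, values))
print(row_dict)  # {'id': 2, 'UTCTime': '2014-01-02 09:00:10', 'Value': 1.94}
```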
def fill_from_db(bootstrap_list, conn):
"""
Using bootstrap_list as guide, creates list_of_rows and fills from last
entry in the db.
"""
logger.info("Loading last database rows.")
list_of_rows = []
for entry in bootstrap_list:
# print("bootstrap row: ", entry[0])
row_dict = get_last_row_dict(entry[0])
# print("row dict: ", row_dict)
col_list = []
for column in entry[1]:
# print (column[0])
col = [column[0], row_dict[column[0]]]
col_list.append(col)
# print("col_list: ", col_list)
row = [entry[0], col_list]
logger.debug("Load db: %s", str(row))
list_of_rows.append(row)
return list_of_rows
def browser2dframe(browser, attribute):
"""
Makes a dataframe from a webdriver instance given a table
attribute: {'id':'bonds'}.
TODO:
Exhibits a strange bug where after 15-30 calls the time for execution
grows from ~ 0.290s to 8 seconds and then to 20. Why?
The culprit is browser.page_source()
Fixed! Moved from Firefox to phantomjs. Works much, much faster and
with much less memory.
Stupid lxml is causing me stress. ["lxml", "xml"] is best for
Firefox but phantomjs and Chrome work only with html5lib so that
is what I'm going with. The difference is only 330 milliseconds
so that's fine for now. Write an lxml-based custom parser later.
(a lot later.)
(I gained around that much when I switched to phantomjs so that
is also fine.)
"""
profiler = []
start1 = time()
logger.debug("Getting source in browser2dframe.")
html_source = browser.source()
end_time1 = time() - start1
profiler.append("html_source = browser.page_source: " + str(end_time1))
start2 = time()
logger.debug("Parsing source in browser2dframe.")
soup = BeautifulSoup(html_source, "html5lib") # Parser important.
end_time2 = time() - start2
profiler.append("BeautifulSoup(html_source, ...): " + str(end_time2))
start3 = time()
table = soup.find('table', attribute)
if table is None:
logger.critical("Can't find the table. Is the attribute correct?")
clean_up(browser)
try:
header = [th.text for th in table.find('thead').select('th')]
except AttributeError:
logger.critical("Can't find the table head!")
clean_up(browser)
body = [[td.text for td in row.select('td')]
for row in table.findAll('tr')]
body2 = [x for x in body if x != []] # Must remove empty rows.
cols = zip(*body2) # Turn it into tuples.
tbl_d = {name: col for name, col in zip(header, cols)}
end_time3 = time() - start3
profiler.append("Body of function: " + str(end_time3))
start4 = time()
logger.debug("Creating Dataframe in browser2dframe.")
result = pd.DataFrame(tbl_d, columns=header)
end_time4 = time() - start4
profiler.append("pd.DataFrame(tbl_d, columns=header): " + str(end_time4))
total_time = time() - start1
if total_time > 3:
logger.error("Page source time exceeded!")
logger.error(profiler[0])
logger.error(profiler[1])
logger.error(profiler[2])
logger.error(profiler[3])
browser.refresh()
return result
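The header/body reshaping inside browser2dframe does not depend on BeautifulSoup; a toy sketch with invented rows standing in for the parsed td text shows the empty-row filter and the row-major to column-major flip:

```python
# Toy illustration of the zip-based reshaping in browser2dframe. The
# header/body lists below are invented stand-ins for the parsed table.
header = ["Country", "Yield", "Time"]
body = [[],  # Header <tr> rows select no <td>, so they come back empty.
        ["Germany", "1.94", "09:00:10"],
        ["France", "2.41", "09:00:08"]]

body2 = [row for row in body if row != []]  # Must remove empty rows.
cols = zip(*body2)                          # Row-major -> column-major.
tbl_d = {name: col for name, col in zip(header, cols)}
print(tbl_d["Yield"])  # ('1.94', '2.41')
```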
def fill_from_web(browser, attribute):
"""
Loads the table of interest into a pandas Dataframe for easy lookup
by row and column.
"""
logger.debug("Calling browser2dframe in fill_from_web.")
table_df = browser2dframe(browser, attribute)
logger.debug("Setting index in fill_from_web.")
table_df = table_df.set_index(row_title_column)
logger.debug("Iterating bootstrap_list in fill_from_web.")
list_of_rows = []
for entry in bootstrap_list:
# logger.debug("tablename: %s", entry[0])
col_list = []
for column in entry[1]:
table_value = table_df.loc[column[1], column[2]]
# logger.debug(table_value)
if column[0] == time_col:
table_value = custom_date_parser(table_value, browser)
else:
table_value = table_value.replace(',', '')
table_value = float(table_value)
col = [column[0], table_value]
col_list.append(col)
row = [entry[0], col_list]
logger.debug("Load web: %s", str(row))
list_of_rows.append(row)
return list_of_rows
def custom_date_parser(date_string, browser):
"""
Date parser for the oddball date format. Also attempts to handle
the difference between the page date time and the system datetime.
This is especially an issue around midnight when the two times might
be in different days.
"""
good_time = ':' in date_string
if good_time is False:
return None
good_len = (len(date_string) == 7) or (len(date_string) == 8)
if good_len is False:
logger.critical("Unrecognized web source date format.%s", date_string)
clean_up(browser)
if (len(date_string) == 7):
date_string = '0' + date_string
if ((web_tz == 'GMT') or (web_tz == 'UTC')):
# Fancy stuff for when the web and utc date are not synced @ midnight.
current_utc = datetime.datetime.utcnow()
web_hour = int(date_string[0:2])
if current_utc.hour == 0:
if web_hour == 23:
one_day = datetime.timedelta(days=1)
current_utc = current_utc - one_day
return parse(date_string, default=(current_utc))
else:
logger.critical("Non GMT web dates not yet supported.")
clean_up(browser)
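The rollover rule itself can be expressed with the stdlib alone (no dateutil); a hedged sketch with an invented helper name, keeping only the "23:xx stamp just after 00:00 UTC belongs to yesterday" logic:

```python
# Stdlib-only sketch of the midnight rollover handled above. The helper
# name resolve_web_time is invented for this illustration.
import datetime

def resolve_web_time(date_string, current_utc):
    """Combine an 'HH:MM:SS' web timestamp with the current UTC date."""
    hour, minute, second = (int(p) for p in date_string.split(":"))
    day = current_utc.date()
    if current_utc.hour == 0 and hour == 23:
        # Host clock has ticked past midnight; the stamp is yesterday's.
        day = day - datetime.timedelta(days=1)
    return datetime.datetime.combine(day, datetime.time(hour, minute, second))

now = datetime.datetime(2014, 1, 2, 0, 0, 30)   # Just past midnight UTC.
print(resolve_web_time("23:59:55", now))        # -> 2014-01-01 23:59:55
print(resolve_web_time("00:00:10", now))        # -> 2014-01-02 00:00:10
```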
def compare_lists(old_list, new_list):
"""
Compare list_of_rows data structure row by row to determine what has
changed and must be written to the database.
"""
logger.debug("Comparing lists.")
differences = []
for entry in new_list:
if entry not in old_list:
differences.append(entry)
return differences
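A minimal standalone demonstration of this diffing (the function body is copied here so the snippet runs on its own; the sample rows are invented): only rows that changed survive into the list handed to write2db.

```python
# Standalone copy of the diffing above, with invented sample rows.
def compare_lists(old_list, new_list):
    return [entry for entry in new_list if entry not in old_list]

old = [["German10yrbond", [["UTCTime", "09:00:00"], ["Value", 1.93]]]]
new = [["German10yrbond", [["UTCTime", "09:00:10"], ["Value", 1.94]]]]

print(compare_lists(old, new))  # The changed row survives.
print(compare_lists(old, old))  # [] -- nothing to write.
```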
def write2db(changed_list):
"""
Writes rows to the database. Only does an update if the datetime
is not None.
I'm using pymysql as my underlying DBAPI and there is a bug that
allows a hang if the session is interrupted.
The last line of exception is in python3.3/socket.py
"return self._sock.recv_into(b)"
The bug is in 0.6.1
ref: https://github.com/PyMySQL/PyMySQL/issues/136
pip install --upgrade https://github.com/PyMySQL/PyMySQL/tarball/master
Hopefully this will not be needed after 0.6.1
TODO:
Using connectionless execution. Fix this.
Make the update happen en masse rather than one
table at a time. This could be faster.
Not sure if possible when updates are in different tables.
Put some error handling when you get back some errors.
"""
global total_rows_scraped
global last_write_time
for entry in changed_list:
null_date = (entry[1][0][1] is None)
if null_date:
pass
else:
logger.debug("Write db: %s", str(entry))
total_rows_scraped += 1
current_table = Table(entry[0], metadata)
inserter = current_table.insert()
insert_dict = dict(entry[1]) # keep this.
inserter.execute(insert_dict)
last_write_time = time()
logger.debug("Finished db insert.")
return
############ Shut down ########################################################
def clean_up(browser):
"""
Closes any webdriver instances and ends program.
"""
global metadata
logger.critical("Closing webdriver.")
try:
browser.quit()
except:
logger.critical("Browser process won't terminate.")
logger.critical("Exiting program.")
conn.close() # Close connection.
engine.dispose() # Actively close out connections.
metadata = None
sys.exit()
# Now: move db set up stuff inside of main() or at least inside of a function.
######### Main Function #######################################################
def main():
"""
TODO:
Not happy with the try...except capture of ^C as method to end while
loop.
However, being as I have searched for a way to do it a number of times
and I have always come up unsatisfied, I am giving up for now.
Note that this is going to cause some zombie browser processes to hang
around after ^C, if the ^C is not caught by the right exception handler.
In the future, look into UIs like pygame or Tkinter for this
function or get curses working.
Or, look into one of the solutions that uses threads. Though, if I use
threads here, I cannot use them for doing timeouts on page loads because
the signals might get crossed.
"""
global last_write_time # need to keep it global so I can reach it.
logger.info("CFDscraper by Jonathan Morris Copyright 2014")
global metadata
global engine
global conn
engine, metadata, conn = db_setup()
setup_tables(bootstrap_list, metadata)
browser = Browser(browser_choice)
module_start_time = time()
last_write_time = time()
old_list = fill_from_db(bootstrap_list, conn)
logger.info("Starting scraping loop.")
try:
while True:
cycle_start = time()
new_list = fill_from_web(browser, attribute)
changed_list = compare_lists(old_list, new_list)
write2db(changed_list)
old_list = new_list
if browser.age() > browser_lifetime:
logger.info("Lifetime exceeded. Refreshing.")
browser.refresh()
cycle_length = time() - cycle_start
sleep_time = refresh_rate - cycle_length
if sleep_time < 0:
sleep_time = 0
# Write some stuff to stdout so I know it is alive.
uptime = int(time() - module_start_time)
since_write = int(time() - last_write_time)
sys.stdout.write("\rRows: %d" % (total_rows_scraped))
sys.stdout.write(", Uptime: %ss" % str(uptime))
sys.stdout.write(", Since write: %ss" % str(since_write))
sys.stdout.write(", Sleeping: %.2fs" % sleep_time)
sys.stdout.flush()
sleep(sleep_time)
except KeyboardInterrupt:
logger.critical("^C from main loop.")
clean_up(browser)
if __name__ == "__main__":
main()
sys.exit()
| mit |
aminert/scikit-learn | benchmarks/bench_20newsgroups.py | 377 | 3555 | from __future__ import print_function, division
from time import time
import argparse
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.datasets import fetch_20newsgroups_vectorized
from sklearn.metrics import accuracy_score
from sklearn.utils.validation import check_array
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.ensemble import AdaBoostClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
ESTIMATORS = {
"dummy": DummyClassifier(),
"random_forest": RandomForestClassifier(n_estimators=100,
max_features="sqrt",
min_samples_split=10),
"extra_trees": ExtraTreesClassifier(n_estimators=100,
max_features="sqrt",
min_samples_split=10),
"logistic_regression": LogisticRegression(),
"naive_bayes": MultinomialNB(),
"adaboost": AdaBoostClassifier(n_estimators=10),
}
###############################################################################
# Data
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument('-e', '--estimators', nargs="+", required=True,
choices=ESTIMATORS)
args = vars(parser.parse_args())
data_train = fetch_20newsgroups_vectorized(subset="train")
data_test = fetch_20newsgroups_vectorized(subset="test")
X_train = check_array(data_train.data, dtype=np.float32,
accept_sparse="csc")
X_test = check_array(data_test.data, dtype=np.float32, accept_sparse="csr")
y_train = data_train.target
y_test = data_test.target
print("20 newsgroups")
print("=============")
print("X_train.shape = {0}".format(X_train.shape))
print("X_train.format = {0}".format(X_train.format))
print("X_train.dtype = {0}".format(X_train.dtype))
print("X_train density = {0}"
"".format(X_train.nnz / np.product(X_train.shape)))
print("y_train {0}".format(y_train.shape))
print("X_test {0}".format(X_test.shape))
print("X_test.format = {0}".format(X_test.format))
print("X_test.dtype = {0}".format(X_test.dtype))
print("y_test {0}".format(y_test.shape))
print()
print("Classifier Training")
print("===================")
accuracy, train_time, test_time = {}, {}, {}
for name in sorted(args["estimators"]):
clf = ESTIMATORS[name]
try:
clf.set_params(random_state=0)
except (TypeError, ValueError):
pass
print("Training %s ... " % name, end="")
t0 = time()
clf.fit(X_train, y_train)
train_time[name] = time() - t0
t0 = time()
y_pred = clf.predict(X_test)
test_time[name] = time() - t0
accuracy[name] = accuracy_score(y_test, y_pred)
print("done")
print()
print("Classification performance:")
print("===========================")
print()
print("%s %s %s %s" % ("Classifier ", "train-time", "test-time",
"Accuracy"))
print("-" * 44)
for name in sorted(accuracy, key=accuracy.get):
print("%s %s %s %s" % (name.ljust(16),
("%.4fs" % train_time[name]).center(10),
("%.4fs" % test_time[name]).center(10),
("%.4f" % accuracy[name]).center(10)))
print()
| bsd-3-clause |
RomainBrault/scikit-learn | examples/decomposition/plot_ica_blind_source_separation.py | 349 | 2228 | """
=====================================
Blind source separation using FastICA
=====================================
An example of estimating sources from noisy data.
:ref:`ICA` is used to estimate sources given noisy measurements.
Imagine 3 instruments playing simultaneously and 3 microphones
recording the mixed signals. ICA is used to recover the sources
i.e. what is played by each instrument. Importantly, PCA fails
at recovering our `instruments` since the related signals reflect
non-Gaussian processes.
"""
print(__doc__)
import numpy as np
import matplotlib.pyplot as plt
from scipy import signal
from sklearn.decomposition import FastICA, PCA
###############################################################################
# Generate sample data
np.random.seed(0)
n_samples = 2000
time = np.linspace(0, 8, n_samples)
s1 = np.sin(2 * time) # Signal 1 : sinusoidal signal
s2 = np.sign(np.sin(3 * time)) # Signal 2 : square signal
s3 = signal.sawtooth(2 * np.pi * time) # Signal 3: saw tooth signal
S = np.c_[s1, s2, s3]
S += 0.2 * np.random.normal(size=S.shape) # Add noise
S /= S.std(axis=0) # Standardize data
# Mix data
A = np.array([[1, 1, 1], [0.5, 2, 1.0], [1.5, 1.0, 2.0]]) # Mixing matrix
X = np.dot(S, A.T) # Generate observations
# Compute ICA
ica = FastICA(n_components=3)
S_ = ica.fit_transform(X) # Reconstruct signals
A_ = ica.mixing_ # Get estimated mixing matrix
# We can `prove` that the ICA model applies by reverting the unmixing.
assert np.allclose(X, np.dot(S_, A_.T) + ica.mean_)
# For comparison, compute PCA
pca = PCA(n_components=3)
H = pca.fit_transform(X) # Reconstruct signals based on orthogonal components
###############################################################################
# Plot results
plt.figure()
models = [X, S, S_, H]
names = ['Observations (mixed signal)',
'True Sources',
'ICA recovered signals',
'PCA recovered signals']
colors = ['red', 'steelblue', 'orange']
for ii, (model, name) in enumerate(zip(models, names), 1):
plt.subplot(4, 1, ii)
plt.title(name)
for sig, color in zip(model.T, colors):
plt.plot(sig, color=color)
plt.subplots_adjust(0.09, 0.04, 0.94, 0.94, 0.26, 0.46)
plt.show()
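The `assert` a few lines up checks the ICA model identity `X = S_ @ A_.T + ica.mean_`. A numpy-only sketch of the same idea, with no noise and the exact inverse of a known mixing matrix so recovery is exact up to floating point (an illustration of the mixing model, not the FastICA estimator):

```python
import numpy as np

rng = np.random.RandomState(0)
S = rng.laplace(size=(1000, 3))                            # non-Gaussian sources
A = np.array([[1, 1, 1], [0.5, 2, 1.0], [1.5, 1.0, 2.0]])  # mixing matrix
X = S @ A.T                                                # observed mixtures

W = np.linalg.inv(A)                                       # ideal unmixing matrix
S_rec = X @ W.T                                            # recovered sources
print(np.allclose(S, S_rec))                               # exact recovery with the true inverse
```

FastICA has to estimate `W` from `X` alone, so its recovered sources match only up to permutation and scaling of the rows.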
| bsd-3-clause |
parenthetical-e/wheelerexp | meta/kmeans_trialtime.py | 1 | 2374 | """
usage: python ./kmeans_trialtime.py name data roifile cond tr window [, filtfile]
"""
import sys, os
import numpy as np
import argparse
# from fmrilearn.analysis import fir
from fmrilearn.load import load_roifile
from sklearn.cluster import KMeans
from wheelerexp.base import Trialtime
from wheelerexp.base import DecomposeExp
from wheelerdata.load.meta import get_data
parser = argparse.ArgumentParser(
description="Apply PCA to trial-level data",
formatter_class=argparse.ArgumentDefaultsHelpFormatter
)
parser.add_argument(
"name",
help="Name of this exp"
)
parser.add_argument(
"data",
help="Name of the wheeerlab dataset"
)
parser.add_argument(
"roifile",
help="A text file with ROI names to iterate over, ':' separated"
)
parser.add_argument(
"cond",
help="Name of cond to use"
)
parser.add_argument(
"tr",
help="TR of the dataset",
type=float
)
parser.add_argument(
"window",
help="L of the trial window (in TRs)",
type=int
)
parser.add_argument(
"filtfile",
help="Name of this exp",
nargs='?',
default=None
)
args = parser.parse_args()
# ---------------------------------------------------------------------------
# Process argv
# ---------------------------------------------------------------------------
# Replace this with good arg processing....
# basename, dataname, rois, cond, tr, filtfile = process_exp_argv(sys.argv)
data = get_data(args.data)
_, rois = load_roifile(args.roifile) ## roifile
# ---------------------------------------------------------------------------
# Setup exp
# ---------------------------------------------------------------------------
spacetime = Trialtime(KMeans(4), mode="cluster")
exp = DecomposeExp(
spacetime, data, window=args.window, nsig=3, tr=args.tr
)
# ---------------------------------------------------------------------------
# And run each roi
# ---------------------------------------------------------------------------
for n, roi in enumerate(rois):
print("{3}: {0} ({1}/{2})".format(roi, n+1, len(rois), args.data))
exp.run(
args.name, roi, args.cond, smooth=False,
filtfile=args.filtfile, event=False
)
| bsd-2-clause |
michellab/Sire | wrapper/Tools/ap.py | 2 | 27471 | """
Package that allows you to plot simple graphs in ASCII, a la matplotlib.
This package is inspired by Imri Goldberg's ASCII-Plotter 1.0
(https://pypi.python.org/pypi/ASCII-Plotter/1.0)
At the time, I was annoyed by security not giving me direct access to my
computer, and thus, to quickly make figures from python, I looked at how I
could make quick and dirty ASCII figures. But if I were to develop something,
I wanted something that could be used with just python and possibly
standard-ish packages (numpy, scipy).
So I came up with this package after many iterations based on ASCII-plotter.
I added the feature to show multiple curves on one plot with different markers.
And I also made the usage, close to matplotlib, such that there is a plot,
hist, hist2d and imshow functions.
TODO:
imshow does not plot axis yet.
write proper documentation
"""
import math as _math
import numpy as np
__version__ = 0.9
__author__ = 'M. Fouesneau'
__all__ = ['markers', 'ACanvas', 'AData', 'AFigure',
'hist', 'hist2d', 'imshow', 'percentile_imshow',
'plot', 'stem', 'stemify', 'step', 'steppify',
'__version__', '__author__']
markers = { '-' : 'None' , # solid line style
',': '\u2219', # point marker
'.': '\u2218', # pixel marker
'.f': '\u2218', # pixel marker
'o': '\u25CB', # circle marker
'of': '\u25CF', # circle marker
'v': '\u25BD', # triangle_down marker
'vf': '\u25BC', # filler triangle_down marker
'^': '\u25B3', # triangle_up marker
'^f': '\u25B2', # filled triangle_up marker
'<': '\u25C1', # triangle_left marker
'<f': '\u25C0', # filled triangle_left marker
'>': '\u25B7', # triangle_right marker
'>f': '\u25B6', # filled triangle_right marker
's': '\u25FD', # square marker
'sf': '\u25FC', # square marker
'*': '\u2606', # star marker
'*f': '\u2605', # star marker
'+': '\u271A', # plus marker
'x': '\u274C', # x marker
'd': '\u25C7', # diamond marker
'df': '\u25C6' # filled diamond marker
}
def _sign(x):
""" Return the sign of x
INPUTS
------
x: number
value to get the sign of
OUTPUTS
-------
s: signed int
-1, 0 or 1 if negative, null or positive
"""
if (x > 0):
return 1
elif (x == 0):
return 0
else:
return -1
def _transpose(mat):
""" Transpose matrice made of lists
INPUTS
------
mat: iterable 2d list like
OUTPUTS
-------
r: list of list, 2d list like
transposed matrix
"""
r = [ [x[i] for x in mat] for i in range(len(mat[0])) ]
return r
def _y_reverse(mat):
""" Reverse the y axis of a 2d list-like
INPUTS
------
mat: list of lists
the matrix to reverse on axis 0
OUTPUTS
-------
r: list of lists
the reversed version
"""
r = [ list(reversed(mat_i)) for mat_i in mat ]
return r
class AData(object):
""" Data container for ascii AFigure """
def __init__(self, x, y, marker='_.', plot_slope=True):
""" Constructor
INPUTS
------
x: iterable
x values
y: iterable
y values
KEYWORDS
--------
marker: str
marker for the data.
if None or empty, the curve will be plotted
if the first character of the marker is '_' then unicode markers will be called:
marker repr description
======== =========== =============================
'-' u'None' solid line style
',' u'\\u2219' point marker
'.' u'\\u2218' pixel marker
'.f' u'\\u2218' pixel marker
'o' u'\\u25CB' circle marker
'of' u'\\u25CF' circle marker
'v' u'\\u25BD' triangle_down marker
'vf' u'\\u25BC' filler triangle_down marker
'^' u'\\u25B3' triangle_up marker
'^f' u'\\u25B2' filled triangle_up marker
'<' u'\\u25C1' triangle_left marker
'<f' u'\\u25C0' filled triangle_left marker
'>' u'\\u25B7' triangle_right marker
'>f' u'\\u25B6' filled triangle_right marker
's' u'\\u25FD' square marker
'sf' u'\\u25FC' square marker
'*' u'\\u2606' star marker
'*f' u'\\u2605' star marker
'+' u'\\u271A' plus marker
'x' u'\\u274C' x marker
'd' u'\\u25C7' diamond marker
'df' u'\\u25C6' filled diamond marker
plot_slope: bool
if set, the curve will be plotted
"""
self.x = x
self.y = y
self.plot_slope = plot_slope
self.set_marker(marker)
def set_marker(self, marker):
""" set the marker of the data
INPUTS
------
marker: str
marker for the data.
see constructor for marker descriptions
"""
if marker in [None, 'None', '']:
self.plot_slope = True
self.marker = ''
elif marker[0] == '_':
self.marker = markers[marker[1:]]
else:
self.marker = marker
def extent(self):
""" return the extention of the data
OUPUTS
------
e: list
[ min(x), max(x), min(y), max(y) ]
"""
return [min(self.x), max(self.x), min(self.y), max(self.y)]
def __repr__(self):
s = 'AData: %s\n' % object.__repr__(self)
return s
class ACanvas(object):
""" Canvas of a AFigure instance. A Canvas handles all transformations
between data space and figure space accounting for scaling and pixels
In general there is no need to access the canvas directly
"""
def __init__(self, shape=None, margins=None, xlim=None, ylim=None):
""" Constructor
KEYWORDS
--------
shape: tuple of 2 ints
shape of the canvas in number of characters: (width, height)
margins: tuple of 2 floats
fractional margins
xlim: tuple of 2 floats
limits of the xaxis
ylim: tuple of 2 floats
limits of the yaxis
"""
self.shape = shape or (50, 20)
self.margins = margins or (0.05, 0.1)
self._xlim = xlim or [0, 1]
self._ylim = ylim or [0, 1]
self.auto_adjust = True
self.margin_factor = 1
@property
def x_size(self):
""" return the width """
return self.shape[0]
@property
def y_size(self):
""" return the height """
return self.shape[1]
@property
def x_margin(self):
""" return the margin in x """
return self.margins[0]
@property
def y_margin(self):
""" return the margin in y """
return self.margins[1]
def xlim(self, vmin=None, vmax=None):
"""
Get or set the *x* limits of the current axes.
KEYWORDS
--------
vmin: float
lower limit
vmax: float
upper limit
xmin, xmax = xlim() # return the current xlim
xlim( (xmin, xmax) ) # set the xlim to xmin, xmax
xlim( xmin, xmax ) # set the xlim to xmin, xmax
"""
if vmin is None and vmax is None:
return self._xlim
elif hasattr(vmin, '__iter__'):
self._xlim = vmin[:2]
else:
self._xlim = [vmin, vmax]
if self._xlim[0] == self._xlim[1]:
self._xlim[1] += 1
self._xlim[0] -= self.x_mod
self._xlim[1] += self.x_mod
def ylim(self, vmin=None, vmax=None):
"""
Get or set the *y* limits of the current axes.
KEYWORDS
--------
vmin: float
lower limit
vmax: float
upper limit
ymin, ymax = ylim() # return the current ylim
ylim( (ymin, ymax) ) # set the ylim to ymin, ymax
ylim( ymin, ymax ) # set the ylim to ymin, ymax
"""
if vmin is None and vmax is None:
return self._ylim
elif hasattr(vmin, '__iter__'):
self._ylim = vmin[:2]
else:
self._ylim = [vmin, vmax]
if self._ylim[0] == self._ylim[1]:
self._ylim[1] += 1
self._ylim[0] -= self.y_mod
self._ylim[1] += self.y_mod
@property
def min_x(self):
""" return the lower x limit """
return self._xlim[0]
@property
def max_x(self):
""" return the upper x limit """
return self._xlim[1]
@property
def min_y(self):
""" return the lower y limit """
return self._ylim[0]
@property
def max_y(self):
""" return the upper y limit """
return self._ylim[1]
@property
def x_step(self):
return float(self.max_x - self.min_x) / float(self.x_size)
@property
def y_step(self):
return float(self.max_y - self.min_y) / float(self.y_size)
@property
def ratio(self):
return self.y_step / self.x_step
@property
def x_mod(self):
return (self.max_x - self.min_x) * self.x_margin
@property
def y_mod(self):
return (self.max_y - self.min_y) * self.y_margin
def extent(self, margin_factor=None):
margin_factor = margin_factor or self.margin_factor
min_x = (self.min_x + self.x_mod * margin_factor)
max_x = (self.max_x - self.x_mod * margin_factor)
min_y = (self.min_y + self.y_mod * margin_factor)
max_y = (self.max_y - self.y_mod * margin_factor)
return (min_x, max_x, min_y, max_y)
def extent_str(self, margin=None):
def transform(val, fmt):
if abs(val) < 1:
_str = "%+.2g" % val
elif fmt is not None:
_str = fmt % val
else:
_str = None
return _str
e = self.extent(margin)
xfmt = self.x_str()
yfmt = self.y_str()
return transform(e[0], xfmt), transform(e[1], xfmt), transform(e[2], yfmt), transform(e[3], yfmt)
def x_str(self):
if self.x_size < 16:
x_str = None
elif self.x_size < 23:
x_str = "%+.2g"
else:
x_str = "%+g"
return x_str
def y_str(self):
if self.x_size < 8:
y_str = None
elif self.x_size < 11:
y_str = "%+.2g"
else:
y_str = "%+g"
return y_str
def coords_inside_buffer(self, x, y):
return (0 <= x < self.x_size) and (0 < y < self.y_size)
def coords_inside_data(self, x, y):
""" return if (x,y) covered by the data box
x, y: float
coordinates to test
"""
return (self.min_x <= x < self.max_x) and (self.min_y <= y < self.max_y)
def _clip_line(self, line_pt_1, line_pt_2):
""" clip a line to the canvas """
e = self.extent()
x_min = min(line_pt_1[0], line_pt_2[0])
x_max = max(line_pt_1[0], line_pt_2[0])
y_min = min(line_pt_1[1], line_pt_2[1])
y_max = max(line_pt_1[1], line_pt_2[1])
if line_pt_1[0] == line_pt_2[0]:
return ( ( line_pt_1[0], max(y_min, e[1]) ),
( line_pt_1[0], min(y_max, e[3]) ))
if line_pt_1[1] == line_pt_2[1]:
return ( ( max(x_min, e[0]), line_pt_1[1] ),
( min(x_max, e[2]), line_pt_1[1] ))
if ( (e[0] <= line_pt_1[0] < e[2]) and
(e[1] <= line_pt_1[1] < e[3]) and
(e[0] <= line_pt_2[0] < e[2]) and
(e[1] <= line_pt_2[1] < e[3]) ):
return line_pt_1, line_pt_2
ts = [0.0,
1.0,
float(e[0] - line_pt_1[0]) / (line_pt_2[0] - line_pt_1[0]),
float(e[2] - line_pt_1[0]) / (line_pt_2[0] - line_pt_1[0]),
float(e[1] - line_pt_1[1]) / (line_pt_2[1] - line_pt_1[1]),
float(e[3] - line_pt_1[1]) / (line_pt_2[1] - line_pt_1[1])
]
ts.sort()
if (ts[2] < 0) or (ts[2] >= 1) or (ts[3] < 0) or (ts[3] >= 1):
return None
result = [(pt_1 + t * (pt_2 - pt_1)) for t in (ts[2], ts[3]) for (pt_1, pt_2) in zip(line_pt_1, line_pt_2)]
return ( result[:2], result[2:] )
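The parametric `ts` computation in `_clip_line` follows the Liang-Barsky idea: express points on the segment as `p1 + t * (p2 - p1)` and intersect the `t` intervals allowed by each box edge. A standalone sketch of that algorithm (the `clip_segment` helper is hypothetical, not part of this module):

```python
def clip_segment(p1, p2, box):
    """Clip segment p1-p2 to an axis-aligned box (xmin, ymin, xmax, ymax).

    Returns the clipped endpoints, or None if the segment misses the box.
    """
    xmin, ymin, xmax, ymax = box
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    t0, t1 = 0.0, 1.0                       # visible parameter interval
    for p, q in ((-dx, p1[0] - xmin), (dx, xmax - p1[0]),
                 (-dy, p1[1] - ymin), (dy, ymax - p1[1])):
        if p == 0:
            if q < 0:
                return None                 # parallel to this edge and outside
        else:
            t = q / p
            if p < 0:
                t0 = max(t0, t)             # entering the box
            else:
                t1 = min(t1, t)             # leaving the box
            if t0 > t1:
                return None                 # interval collapsed: no overlap
    return ((p1[0] + t0 * dx, p1[1] + t0 * dy),
            (p1[0] + t1 * dx, p1[1] + t1 * dy))

print(clip_segment((-1, 0.5), (2, 0.5), (0, 0, 1, 1)))  # clipped to the box
print(clip_segment((2, 2), (3, 3), (0, 0, 1, 1)))       # entirely outside: None
```

Unlike `_clip_line` above, which special-cases horizontal and vertical segments, the `p == 0` branch handles them uniformly.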
class AFigure(object):
def __init__(self, shape=(80, 20), margins=(0.05, 0.1), draw_axes=True, newline='\n',
plot_labels=True, xlim=None, ylim=None, **kwargs):
self.canvas = ACanvas(shape, margins=margins, xlim=xlim, ylim=ylim)
self.draw_axes = draw_axes
self.new_line = newline
self.plot_labels = plot_labels
self.output_buffer = None
self.tickSymbols = '\u253C' # "+"
self.x_axis_symbol = '\u2500' # u"\u23bc" # "-"
self.y_axis_symbol = '\u2502' # "|"
self.data = []
def xlim(self, vmin=None, vmax=None):
return self.canvas.xlim(vmin, vmax)
def ylim(self, vmin=None, vmax=None):
return self.canvas.ylim(vmin, vmax)
def get_coord(self, val, min, step, limits=None):
result = int((val - min) / step)
if limits is not None:
if result <= limits[0]:
result = limits[0]
elif result >= limits[1]:
result = limits[1] - 1
return result
def _draw_axes(self):
zero_x = self.get_coord(0, self.canvas.min_x, self.canvas.x_step, limits=[1, self.canvas.x_size])
if zero_x >= self.canvas.x_size:
zero_x = self.canvas.x_size - 1
for y in range(self.canvas.y_size):
self.output_buffer[zero_x][y] = self.y_axis_symbol
zero_y = self.get_coord(0, self.canvas.min_y, self.canvas.y_step, limits=[1, self.canvas.y_size])
if zero_y >= self.canvas.y_size:
zero_y = self.canvas.y_size - 1
for x in range(self.canvas.x_size):
self.output_buffer[x][zero_y] = self.x_axis_symbol # u'\u23bc'
self.output_buffer[zero_x][zero_y] = self.tickSymbols # "+"
def _get_symbol_by_slope(self, slope, default_symbol):
""" Return a line oriented directed approximatively along the slope value """
if slope > _math.tan(3 * _math.pi / 8):
draw_symbol = "|"
elif _math.tan(_math.pi / 8) < slope < _math.tan(3 * _math.pi / 8):
draw_symbol = '\u27cb' # "/"
elif abs(slope) < _math.tan(_math.pi / 8):
draw_symbol = "-"
elif slope < _math.tan(-_math.pi / 8) and slope > _math.tan(-3 * _math.pi / 8):
draw_symbol = '\u27CD' # "\\"
elif slope < _math.tan(-3 * _math.pi / 8):
draw_symbol = "|"
else:
draw_symbol = default_symbol
return draw_symbol
def _plot_labels(self):
if self.canvas.y_size < 2:
return
act_min_x, act_max_x, act_min_y, act_max_y = self.canvas.extent()
min_x_coord = self.get_coord(act_min_x, self.canvas.min_x, self.canvas.x_step, limits=[0, self.canvas.x_size])
max_x_coord = self.get_coord(act_max_x, self.canvas.min_x, self.canvas.x_step, limits=[0, self.canvas.x_size])
min_y_coord = self.get_coord(act_min_y, self.canvas.min_y, self.canvas.y_step, limits=[1, self.canvas.y_size])
max_y_coord = self.get_coord(act_max_y, self.canvas.min_y, self.canvas.y_step, limits=[1, self.canvas.y_size])
x_zero_coord = self.get_coord(0, self.canvas.min_x, self.canvas.x_step, limits=[0, self.canvas.x_size])
y_zero_coord = self.get_coord(0, self.canvas.min_y, self.canvas.y_step, limits=[1, self.canvas.y_size])
self.output_buffer[x_zero_coord][min_y_coord] = self.tickSymbols
self.output_buffer[x_zero_coord][max_y_coord] = self.tickSymbols
self.output_buffer[min_x_coord][y_zero_coord] = self.tickSymbols
self.output_buffer[max_x_coord][y_zero_coord] = self.tickSymbols
min_x_str, max_x_str, min_y_str, max_y_str = self.canvas.extent_str()
if (self.canvas.x_str() is not None):
for i, c in enumerate(min_x_str):
self.output_buffer[min_x_coord + i + 1][y_zero_coord - 1] = c
for i, c in enumerate(max_x_str):
self.output_buffer[max_x_coord + i - len(max_x_str)][y_zero_coord - 1] = c
if (self.canvas.y_str() is not None):
for i, c in enumerate(max_y_str):
self.output_buffer[x_zero_coord + i + 1][max_y_coord] = c
for i, c in enumerate(min_y_str):
self.output_buffer[x_zero_coord + i + 1][min_y_coord] = c
def _plot_line(self, start, end, data):
""" plot a line from start = (x0, y0) to end = (x1, y1) """
clipped_line = self.canvas._clip_line(start, end)
if clipped_line is None:
return False
start, end = clipped_line
x0 = self.get_coord(start[0], self.canvas.min_x, self.canvas.x_step)
y0 = self.get_coord(start[1], self.canvas.min_y, self.canvas.y_step)
x1 = self.get_coord(end[0], self.canvas.min_x, self.canvas.x_step)
y1 = self.get_coord(end[1], self.canvas.min_y, self.canvas.y_step)
if (x0, y0) == (x1, y1):
return True
#x_zero_coord = self.get_coord(0, self.canvas.min_x, self.canvas.x_step)
y_zero_coord = self.get_coord(0, self.canvas.min_y, self.canvas.y_step, limits=[1, self.canvas.y_size])
if start[0] - end[0] == 0:
draw_symbol = "|"
elif start[1] - end[1] == 0:
draw_symbol = '-'
else:
slope = (1.0 / self.canvas.ratio) * (end[1] - start[1]) / (end[0] - start[0])
draw_symbol = self._get_symbol_by_slope(slope, data.marker)
dx = x1 - x0
dy = y1 - y0
if abs(dx) > abs(dy):
s = _sign(dx)
slope = float(dy) / dx
for i in range(0, abs(int(dx))):
cur_draw_symbol = draw_symbol
x = i * s
cur_y = int(y0 + slope * x)
if (self.draw_axes) and (cur_y == y_zero_coord) and (draw_symbol == self.x_axis_symbol):
cur_draw_symbol = "-"
self.output_buffer[x0 + x][cur_y] = cur_draw_symbol
else:
s = _sign(dy)
slope = float(dx) / dy
for i in range(0, abs(int(dy))):
y = i * s
cur_draw_symbol = draw_symbol
cur_y = y0 + y
if (self.draw_axes) and (cur_y == y_zero_coord) and (draw_symbol == self.x_axis_symbol):
cur_draw_symbol = "-"
self.output_buffer[int(x0 + slope * y)][cur_y] = cur_draw_symbol
return False
def _plot_data_with_slope(self, data):
xy = list(zip(data.x, data.y))
#sort according to the x coord
xy.sort(key=lambda c: c[0])
prev_p = xy[0]
e_xy = enumerate(xy)
next(e_xy)
for i, (xi, yi) in e_xy:
line = self._plot_line(prev_p, (xi, yi), data)
prev_p = (xi, yi)
# if no line, then symbol
if not line and self.canvas.coords_inside_data(xi, yi):
draw_symbol = data.marker
px, py = xy[i - 1]
nx, ny = xy[i]
if abs(nx - px) > 0.000001:
slope = (1.0 / self.canvas.ratio) * (ny - py) / (nx - px)
draw_symbol = self._get_symbol_by_slope(slope, draw_symbol)
x_coord = self.get_coord(xi, self.canvas.min_x, self.canvas.x_step)
y_coord = self.get_coord(yi, self.canvas.min_y, self.canvas.y_step)
if self.canvas.coords_inside_buffer(x_coord, y_coord):
y0_coord = self.get_coord(0, self.canvas.min_y, self.canvas.y_step)
if self.draw_axes:
if (y_coord == y0_coord) and (draw_symbol == "\u23bc"):
draw_symbol = "="
self.output_buffer[x_coord][y_coord] = draw_symbol
def _plot_data(self, data):
if data.plot_slope:
self._plot_data_with_slope(data)
else:
for x, y in zip(data.x, data.y):
if self.canvas.coords_inside_data(x, y):
x_coord = self.get_coord(x, self.canvas.min_x, self.canvas.x_step)
y_coord = self.get_coord(y, self.canvas.min_y, self.canvas.y_step)
if self.canvas.coords_inside_buffer(x_coord, y_coord):
self.output_buffer[x_coord][y_coord] = data.marker
def auto_limits(self):
if self.canvas.auto_adjust is True:
min_x = 0.
max_x = 0.
min_y = 0.
max_y = 0.
for dk in self.data:
ek = dk.extent()
min_x = min(min_x, min(ek[:2]))
min_y = min(min_y, min(ek[2:]))
max_x = max(max_x, max(ek[:2]))
max_y = max(max_y, max(ek[2:]))
self.canvas.xlim(min_x, max_x)
self.canvas.ylim(min_y, max_y)
def append_data(self, data):
self.data.append(data)
self.auto_limits()
def plot(self, x_seq, y_seq=None, marker=None, plot_slope=False, xlim=None, ylim=None):
if y_seq is None:
y_seq = x_seq[:]
x_seq = list(range(len(y_seq)))
data = AData(x_seq, y_seq, marker=marker, plot_slope=plot_slope)
self.append_data(data)
if xlim is not None:
self.canvas.xlim(xlim)
if ylim is not None:
self.canvas.ylim(ylim)
return self.draw()
def draw(self):
self.output_buffer = [[" "] * self.canvas.y_size for i in range(self.canvas.x_size)]
if self.draw_axes:
self._draw_axes()
for dk in self.data:
self._plot_data(dk)
if self.plot_labels:
self._plot_labels()
trans_result = _transpose(_y_reverse(self.output_buffer))
result = self.new_line.join(["".join(row) for row in trans_result])
return result
def plot(x, y=None, marker=None, shape=(50, 20), draw_axes=True,
newline='\n', plot_slope=False, x_margin=0.05,
y_margin=0.1, plot_labels=True, xlim=None, ylim=None):
flags = {'shape': shape,
'draw_axes': draw_axes,
'newline': newline,
'marker': marker,
'plot_slope': plot_slope,
'margins': (x_margin, y_margin),
'plot_labels': plot_labels }
p = AFigure(**flags)
print(p.plot(x, y, marker=marker, plot_slope=plot_slope))
def steppify(x, y):
""" Steppify a curve (x,y). Useful for manually filling histograms """
dx = 0.5 * (x[1:] + x[:-1])
xx = np.zeros( 2 * len(dx), dtype=float)
yy = np.zeros( 2 * len(y), dtype=float)
xx[0::2], xx[1::2] = dx, dx
yy[0::2], yy[1::2] = y, y
xx = np.concatenate(([x[0] - (dx[0] - x[0])], xx, [x[-1] + (x[-1] - dx[-1])]))
return xx, yy
def stemify(x, y):
""" Steppify a curve (x,y). Useful for manually filling histograms """
xx = np.zeros( 3 * len(x), dtype=float)
yy = np.zeros( 3 * len(y), dtype=float)
xx[0::3], xx[1::3], xx[2::3] = x, x, x
yy[1::3] = y
return xx, yy
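The easiest way to see what `steppify` and `stemify` produce is on a tiny input; the snippet below restates both helpers so it runs on its own:

```python
import numpy as np

def steppify(x, y):
    # duplicate each y value and place the x breaks between bin centres -> staircase
    dx = 0.5 * (x[1:] + x[:-1])
    xx = np.zeros(2 * len(dx), dtype=float)
    yy = np.zeros(2 * len(y), dtype=float)
    xx[0::2], xx[1::2] = dx, dx
    yy[0::2], yy[1::2] = y, y
    xx = np.concatenate(([x[0] - (dx[0] - x[0])], xx, [x[-1] + (x[-1] - dx[-1])]))
    return xx, yy

def stemify(x, y):
    # triple every x and keep y only in the middle slot -> vertical stems from zero
    xx = np.zeros(3 * len(x), dtype=float)
    yy = np.zeros(3 * len(y), dtype=float)
    xx[0::3], xx[1::3], xx[2::3] = x, x, x
    yy[1::3] = y
    return xx, yy

x = np.array([0.0, 1.0, 2.0])
y = np.array([1.0, 3.0, 2.0])
print(steppify(x, y))  # x: [-0.5, 0.5, 0.5, 1.5, 1.5, 2.5], y: [1, 1, 3, 3, 2, 2]
print(stemify(x, y))   # x: [0, 0, 0, 1, 1, 1, 2, 2, 2],  y: [0, 1, 0, 0, 3, 0, 0, 2, 0]
```

Plotting the steppified pairs with a line marker traces the histogram outline; the stemified pairs trace a zero-up-zero spike at each x.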
def hist(x, bins=10, normed=False, weights=None, density=None, histtype='stem',
shape=(50, 20), draw_axes=True, newline='\n',
marker='_.', plot_slope=False, x_margin=0.05,
y_margin=0.1, plot_labels=True, xlim=None, ylim=None ):
from numpy import histogram
if histtype not in (None, 'None', 'stem', 'step'):
raise ValueError("histtype must be one of None, 'stem', 'step'")
n, b = histogram(x, bins=bins, range=xlim, normed=normed, weights=weights, density=density)
_x = 0.5 * ( b[:-1] + b[1:] )
if histtype == 'step':
step(_x, n.astype(float))
elif histtype == 'stem':
stem(_x, n.astype(float))
else:
_y = n.astype(float)
plot(_x, _y, shape=shape, draw_axes=draw_axes, newline=newline, marker=marker,
plot_slope=plot_slope, x_margin=x_margin, y_margin=y_margin,
plot_labels=plot_labels, xlim=xlim, ylim=ylim)
def step(x, y, shape=(50, 20), draw_axes=True,
newline='\n', marker='_.', plot_slope=True, x_margin=0.05,
y_margin=0.1, plot_labels=True, xlim=None, ylim=None ):
_x, _y = steppify(x, y)
plot(_x, _y, shape=shape, draw_axes=draw_axes, newline=newline, marker=marker,
plot_slope=plot_slope, x_margin=x_margin, y_margin=y_margin,
plot_labels=plot_labels, xlim=xlim, ylim=ylim)
def stem(x, y, shape=(50, 20), draw_axes=True,
newline='\n', marker='_.', plot_slope=True, x_margin=0.05,
y_margin=0.1, plot_labels=True, xlim=None, ylim=None ):
_x, _y = stemify(x, y)
plot(_x, _y, shape=shape, draw_axes=draw_axes, newline=newline, marker=marker,
plot_slope=plot_slope, x_margin=x_margin, y_margin=y_margin,
plot_labels=plot_labels, xlim=xlim, ylim=ylim)
def hist2d(x, y, bins=[50, 20], range=None, normed=False, weights=None, ncolors=16,
width=50, percentiles=None):
im, ex, ey = np.histogram2d(x, y, bins, range=range, normed=normed, weights=weights)
if percentiles is None:
imshow(im, extent=[min(ex), max(ex), min(ey), max(ey)],
ncolors=ncolors, width=width)
else:
percentile_imshow(im, levels=percentiles, extent=[min(ex), max(ex), min(ey), max(ey)],
width=width, ncolors=ncolors)
def percentile_imshow(im, levels=[68, 95, 99], extent=None, width=50, ncolors=16):
_im = im.astype(float)
_im -= im.min()
_im /= _im.max()
n = len(levels)
for e, lk in enumerate(sorted(levels)):
_im[ _im <= 0.01 * float(lk) ] = n - e
imshow(1. - _im, extent=extent, width=width, ncolors=ncolors)
def imshow(im, extent=None, width=50, ncolors=16):
from scipy import ndimage
width0 = im.shape[0]
_im = ndimage.zoom(im.astype(float), float(width) / float(width0) )
_im -= _im.min() # normalize after zoom, which can over/undershoot the input range
_im /= _im.max()
width, height = _im.shape[:2]
if len(im.shape) > 2:
_clr = True
else:
_clr = False
if ncolors == 16:
color = "MNHQ$OC?7>!:-;. "[::-1]
else:
color = '''$@B%8&WM#*oahkbdpqwmZO0QLCJUYXzcvunxrjft/\|()1{}[]?-_+~<>i!lI;:,"^`'. '''[::-1]
ncolors = len(color)
string = ""
if not _clr:
for h in range(height): # iterate over height first, otherwise the image would be rotated
for w in range(width):
string += color[int(_im[w, h] * (ncolors - 1) )]
string += "\n"
else:
for h in range(height): # iterate over height first, otherwise the image would be rotated
for w in range(width):
string += color[int(sum(_im[w, h]) * (ncolors - 1) )]
string += "\n"
print(string)
| gpl-2.0 |
nelson-liu/scikit-learn | examples/cluster/plot_face_segmentation.py | 71 | 2839 | """
===================================================
Segmenting the picture of a raccoon face in regions
===================================================
This example uses :ref:`spectral_clustering` on a graph created from
voxel-to-voxel difference on an image to break this image into multiple
partly-homogeneous regions.
This procedure (spectral clustering on an image) is an efficient
approximate solution for finding normalized graph cuts.
There are two options to assign labels:
* with 'kmeans' spectral clustering will cluster samples in the embedding space
using a kmeans algorithm
* whereas 'discretize' will iteratively search for the closest partition
space to the embedding space.
"""
print(__doc__)
# Author: Gael Varoquaux <gael.varoquaux@normalesup.org>, Brian Cheung
# License: BSD 3 clause
import time
import numpy as np
import scipy as sp
import matplotlib.pyplot as plt
from sklearn.feature_extraction import image
from sklearn.cluster import spectral_clustering
from sklearn.utils.testing import SkipTest
from sklearn.utils.fixes import sp_version
if sp_version < (0, 12):
raise SkipTest("Skipping because SciPy version earlier than 0.12.0 and "
"thus does not include the scipy.misc.face() image.")
# load the raccoon face as a numpy array
try:
face = sp.face(gray=True)
except AttributeError:
# Newer versions of scipy have face in misc
from scipy import misc
face = misc.face(gray=True)
# Resize it to 10% of the original size to speed up the processing
face = sp.misc.imresize(face, 0.10) / 255.
# Convert the image into a graph with the value of the gradient on the
# edges.
graph = image.img_to_graph(face)
# Take a decreasing function of the gradient: an exponential
# The smaller beta is, the more independent the segmentation is of the
# actual image. For beta=1, the segmentation is close to a Voronoi tessellation.
beta = 5
eps = 1e-6
graph.data = np.exp(-beta * graph.data / graph.data.std()) + eps
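To see how `beta` controls the affinity contrast, a numpy-only sketch on a few hypothetical gradient magnitudes (not taken from the image):

```python
import numpy as np

grads = np.array([0.1, 0.5, 1.0])           # hypothetical gradient magnitudes
for beta in (1, 5, 20):
    w = np.exp(-beta * grads / grads.std())
    print(beta, np.round(w, 3))             # larger beta -> weights decay faster
```

With a large `beta`, only low-gradient (smooth) neighbours keep appreciable weight, so cuts follow strong edges in the image.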
# Apply spectral clustering (this step goes much faster if you have pyamg
# installed)
N_REGIONS = 25
#############################################################################
# Visualize the resulting regions
for assign_labels in ('kmeans', 'discretize'):
t0 = time.time()
labels = spectral_clustering(graph, n_clusters=N_REGIONS,
assign_labels=assign_labels, random_state=1)
t1 = time.time()
labels = labels.reshape(face.shape)
plt.figure(figsize=(5, 5))
plt.imshow(face, cmap=plt.cm.gray)
for l in range(N_REGIONS):
plt.contour(labels == l,
colors=[plt.cm.spectral(l / float(N_REGIONS))])
plt.xticks(())
plt.yticks(())
title = 'Spectral clustering: %s, %.2fs' % (assign_labels, (t1 - t0))
print(title)
plt.title(title)
plt.show()
| bsd-3-clause |
nesterione/scikit-learn | sklearn/ensemble/tests/test_weight_boosting.py | 35 | 16763 | """Testing for the boost module (sklearn.ensemble.boost)."""
import numpy as np
from sklearn.utils.testing import assert_array_equal, assert_array_less
from sklearn.utils.testing import assert_array_almost_equal
from sklearn.utils.testing import assert_equal, assert_true
from sklearn.utils.testing import assert_raises, assert_raises_regexp
from sklearn.cross_validation import train_test_split
from sklearn.grid_search import GridSearchCV
from sklearn.ensemble import AdaBoostClassifier
from sklearn.ensemble import AdaBoostRegressor
from sklearn.ensemble import weight_boosting
from scipy.sparse import csc_matrix
from scipy.sparse import csr_matrix
from scipy.sparse import coo_matrix
from scipy.sparse import dok_matrix
from scipy.sparse import lil_matrix
from sklearn.svm import SVC, SVR
from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor
from sklearn.utils import shuffle
from sklearn import datasets
# Common random state
rng = np.random.RandomState(0)
# Toy sample
X = [[-2, -1], [-1, -1], [-1, -2], [1, 1], [1, 2], [2, 1]]
y_class = ["foo", "foo", "foo", 1, 1, 1] # test string class labels
y_regr = [-1, -1, -1, 1, 1, 1]
T = [[-1, -1], [2, 2], [3, 2]]
y_t_class = ["foo", 1, 1]
y_t_regr = [-1, 1, 1]
# Load the iris dataset and randomly permute it
iris = datasets.load_iris()
perm = rng.permutation(iris.target.size)
iris.data, iris.target = shuffle(iris.data, iris.target, random_state=rng)
# Load the boston dataset and randomly permute it
boston = datasets.load_boston()
boston.data, boston.target = shuffle(boston.data, boston.target,
random_state=rng)
def test_samme_proba():
# Test the `_samme_proba` helper function.
# Define some example (bad) `predict_proba` output.
probs = np.array([[1, 1e-6, 0],
[0.19, 0.6, 0.2],
[-999, 0.51, 0.5],
[1e-6, 1, 1e-9]])
probs /= np.abs(probs.sum(axis=1))[:, np.newaxis]
# _samme_proba calls estimator.predict_proba.
# Make a mock object so I can control what gets returned.
class MockEstimator(object):
def predict_proba(self, X):
assert_array_equal(X.shape, probs.shape)
return probs
mock = MockEstimator()
samme_proba = weight_boosting._samme_proba(mock, 3, np.ones_like(probs))
assert_array_equal(samme_proba.shape, probs.shape)
assert_true(np.isfinite(samme_proba).all())
# Make sure that the correct elements come out as smallest --
# `_samme_proba` should preserve the ordering in each example.
assert_array_equal(np.argmin(samme_proba, axis=1), [2, 0, 0, 2])
assert_array_equal(np.argmax(samme_proba, axis=1), [0, 1, 1, 1])
def test_classification_toy():
# Check classification on a toy dataset.
for alg in ['SAMME', 'SAMME.R']:
clf = AdaBoostClassifier(algorithm=alg, random_state=0)
clf.fit(X, y_class)
assert_array_equal(clf.predict(T), y_t_class)
assert_array_equal(np.unique(np.asarray(y_t_class)), clf.classes_)
assert_equal(clf.predict_proba(T).shape, (len(T), 2))
assert_equal(clf.decision_function(T).shape, (len(T),))
def test_regression_toy():
# Check classification on a toy dataset.
clf = AdaBoostRegressor(random_state=0)
clf.fit(X, y_regr)
assert_array_equal(clf.predict(T), y_t_regr)
def test_iris():
# Check consistency on dataset iris.
classes = np.unique(iris.target)
clf_samme = prob_samme = None
for alg in ['SAMME', 'SAMME.R']:
clf = AdaBoostClassifier(algorithm=alg)
clf.fit(iris.data, iris.target)
assert_array_equal(classes, clf.classes_)
proba = clf.predict_proba(iris.data)
if alg == "SAMME":
clf_samme = clf
prob_samme = proba
assert_equal(proba.shape[1], len(classes))
assert_equal(clf.decision_function(iris.data).shape[1], len(classes))
score = clf.score(iris.data, iris.target)
assert score > 0.9, "Failed with algorithm %s and score = %f" % \
(alg, score)
# Somewhat hacky regression test: prior to
# ae7adc880d624615a34bafdb1d75ef67051b8200,
# predict_proba returned SAMME.R values for SAMME.
clf_samme.algorithm = "SAMME.R"
assert_array_less(0,
np.abs(clf_samme.predict_proba(iris.data) - prob_samme))
def test_boston():
# Check consistency on dataset boston house prices.
clf = AdaBoostRegressor(random_state=0)
clf.fit(boston.data, boston.target)
score = clf.score(boston.data, boston.target)
assert score > 0.85
def test_staged_predict():
# Check staged predictions.
rng = np.random.RandomState(0)
iris_weights = rng.randint(10, size=iris.target.shape)
boston_weights = rng.randint(10, size=boston.target.shape)
# AdaBoost classification
for alg in ['SAMME', 'SAMME.R']:
clf = AdaBoostClassifier(algorithm=alg, n_estimators=10)
clf.fit(iris.data, iris.target, sample_weight=iris_weights)
predictions = clf.predict(iris.data)
staged_predictions = [p for p in clf.staged_predict(iris.data)]
proba = clf.predict_proba(iris.data)
staged_probas = [p for p in clf.staged_predict_proba(iris.data)]
score = clf.score(iris.data, iris.target, sample_weight=iris_weights)
staged_scores = [
s for s in clf.staged_score(
iris.data, iris.target, sample_weight=iris_weights)]
assert_equal(len(staged_predictions), 10)
assert_array_almost_equal(predictions, staged_predictions[-1])
assert_equal(len(staged_probas), 10)
assert_array_almost_equal(proba, staged_probas[-1])
assert_equal(len(staged_scores), 10)
assert_array_almost_equal(score, staged_scores[-1])
# AdaBoost regression
clf = AdaBoostRegressor(n_estimators=10, random_state=0)
clf.fit(boston.data, boston.target, sample_weight=boston_weights)
predictions = clf.predict(boston.data)
staged_predictions = [p for p in clf.staged_predict(boston.data)]
score = clf.score(boston.data, boston.target, sample_weight=boston_weights)
staged_scores = [
s for s in clf.staged_score(
boston.data, boston.target, sample_weight=boston_weights)]
assert_equal(len(staged_predictions), 10)
assert_array_almost_equal(predictions, staged_predictions[-1])
assert_equal(len(staged_scores), 10)
assert_array_almost_equal(score, staged_scores[-1])
def test_gridsearch():
# Check that base trees can be grid-searched.
# AdaBoost classification
boost = AdaBoostClassifier(base_estimator=DecisionTreeClassifier())
parameters = {'n_estimators': (1, 2),
'base_estimator__max_depth': (1, 2),
'algorithm': ('SAMME', 'SAMME.R')}
clf = GridSearchCV(boost, parameters)
clf.fit(iris.data, iris.target)
# AdaBoost regression
boost = AdaBoostRegressor(base_estimator=DecisionTreeRegressor(),
random_state=0)
parameters = {'n_estimators': (1, 2),
'base_estimator__max_depth': (1, 2)}
clf = GridSearchCV(boost, parameters)
clf.fit(boston.data, boston.target)
def test_pickle():
# Check pickability.
import pickle
# Adaboost classifier
for alg in ['SAMME', 'SAMME.R']:
obj = AdaBoostClassifier(algorithm=alg)
obj.fit(iris.data, iris.target)
score = obj.score(iris.data, iris.target)
s = pickle.dumps(obj)
obj2 = pickle.loads(s)
assert_equal(type(obj2), obj.__class__)
score2 = obj2.score(iris.data, iris.target)
assert_equal(score, score2)
# Adaboost regressor
obj = AdaBoostRegressor(random_state=0)
obj.fit(boston.data, boston.target)
score = obj.score(boston.data, boston.target)
s = pickle.dumps(obj)
obj2 = pickle.loads(s)
assert_equal(type(obj2), obj.__class__)
score2 = obj2.score(boston.data, boston.target)
assert_equal(score, score2)
def test_importances():
# Check variable importances.
X, y = datasets.make_classification(n_samples=2000,
n_features=10,
n_informative=3,
n_redundant=0,
n_repeated=0,
shuffle=False,
random_state=1)
for alg in ['SAMME', 'SAMME.R']:
clf = AdaBoostClassifier(algorithm=alg)
clf.fit(X, y)
importances = clf.feature_importances_
assert_equal(importances.shape[0], 10)
assert_equal((importances[:3, np.newaxis] >= importances[3:]).all(),
True)
def test_error():
# Test that it gives proper exception on deficient input.
assert_raises(ValueError,
AdaBoostClassifier(learning_rate=-1).fit,
X, y_class)
assert_raises(ValueError,
AdaBoostClassifier(algorithm="foo").fit,
X, y_class)
assert_raises(ValueError,
AdaBoostClassifier().fit,
X, y_class, sample_weight=np.asarray([-1]))
def test_base_estimator():
# Test different base estimators.
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
# XXX doesn't work with y_class because RF doesn't support classes_
# Shouldn't AdaBoost run a LabelBinarizer?
clf = AdaBoostClassifier(RandomForestClassifier())
clf.fit(X, y_regr)
clf = AdaBoostClassifier(SVC(), algorithm="SAMME")
clf.fit(X, y_class)
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR
clf = AdaBoostRegressor(RandomForestRegressor(), random_state=0)
clf.fit(X, y_regr)
clf = AdaBoostRegressor(SVR(), random_state=0)
clf.fit(X, y_regr)
# Check that an empty discrete ensemble fails in fit, not predict.
X_fail = [[1, 1], [1, 1], [1, 1], [1, 1]]
y_fail = ["foo", "bar", 1, 2]
clf = AdaBoostClassifier(SVC(), algorithm="SAMME")
assert_raises_regexp(ValueError, "worse than random",
clf.fit, X_fail, y_fail)
def test_sample_weight_missing():
from sklearn.linear_model import LinearRegression
from sklearn.cluster import KMeans
clf = AdaBoostClassifier(LinearRegression(), algorithm="SAMME")
assert_raises(ValueError, clf.fit, X, y_regr)
clf = AdaBoostRegressor(LinearRegression())
assert_raises(ValueError, clf.fit, X, y_regr)
clf = AdaBoostClassifier(KMeans(), algorithm="SAMME")
assert_raises(ValueError, clf.fit, X, y_regr)
clf = AdaBoostRegressor(KMeans())
assert_raises(ValueError, clf.fit, X, y_regr)
def test_sparse_classification():
# Check classification with sparse input.
class CustomSVC(SVC):
"""SVC variant that records the nature of the training set."""
def fit(self, X, y, sample_weight=None):
"""Modification on fit caries data type for later verification."""
super(CustomSVC, self).fit(X, y, sample_weight=sample_weight)
self.data_type_ = type(X)
return self
X, y = datasets.make_multilabel_classification(n_classes=1, n_samples=15,
n_features=5,
random_state=42)
# Flatten y to a 1d array
y = np.ravel(y)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
for sparse_format in [csc_matrix, csr_matrix, lil_matrix, coo_matrix,
dok_matrix]:
X_train_sparse = sparse_format(X_train)
X_test_sparse = sparse_format(X_test)
# Trained on sparse format
sparse_classifier = AdaBoostClassifier(
base_estimator=CustomSVC(probability=True),
random_state=1,
algorithm="SAMME"
).fit(X_train_sparse, y_train)
# Trained on dense format
dense_classifier = AdaBoostClassifier(
base_estimator=CustomSVC(probability=True),
random_state=1,
algorithm="SAMME"
).fit(X_train, y_train)
# predict
sparse_results = sparse_classifier.predict(X_test_sparse)
dense_results = dense_classifier.predict(X_test)
assert_array_equal(sparse_results, dense_results)
# decision_function
sparse_results = sparse_classifier.decision_function(X_test_sparse)
dense_results = dense_classifier.decision_function(X_test)
assert_array_equal(sparse_results, dense_results)
# predict_log_proba
sparse_results = sparse_classifier.predict_log_proba(X_test_sparse)
dense_results = dense_classifier.predict_log_proba(X_test)
assert_array_equal(sparse_results, dense_results)
# predict_proba
sparse_results = sparse_classifier.predict_proba(X_test_sparse)
dense_results = dense_classifier.predict_proba(X_test)
assert_array_equal(sparse_results, dense_results)
# score
sparse_results = sparse_classifier.score(X_test_sparse, y_test)
dense_results = dense_classifier.score(X_test, y_test)
assert_array_equal(sparse_results, dense_results)
# staged_decision_function
sparse_results = sparse_classifier.staged_decision_function(
X_test_sparse)
dense_results = dense_classifier.staged_decision_function(X_test)
for sparse_res, dense_res in zip(sparse_results, dense_results):
    assert_array_equal(sparse_res, dense_res)
# staged_predict
sparse_results = sparse_classifier.staged_predict(X_test_sparse)
dense_results = dense_classifier.staged_predict(X_test)
for sparse_res, dense_res in zip(sparse_results, dense_results):
    assert_array_equal(sparse_res, dense_res)
# staged_predict_proba
sparse_results = sparse_classifier.staged_predict_proba(X_test_sparse)
dense_results = dense_classifier.staged_predict_proba(X_test)
for sparse_res, dense_res in zip(sparse_results, dense_results):
    assert_array_equal(sparse_res, dense_res)
# staged_score
sparse_results = sparse_classifier.staged_score(X_test_sparse,
y_test)
dense_results = dense_classifier.staged_score(X_test, y_test)
for sparse_res, dense_res in zip(sparse_results, dense_results):
    assert_array_equal(sparse_res, dense_res)
# Verify sparsity of data is maintained during training
types = [i.data_type_ for i in sparse_classifier.estimators_]
assert all([(t == csc_matrix or t == csr_matrix)
for t in types])
def test_sparse_regression():
# Check regression with sparse input.
class CustomSVR(SVR):
"""SVR variant that records the nature of the training set."""
def fit(self, X, y, sample_weight=None):
"""Modification on fit caries data type for later verification."""
super(CustomSVR, self).fit(X, y, sample_weight=sample_weight)
self.data_type_ = type(X)
return self
X, y = datasets.make_regression(n_samples=15, n_features=50, n_targets=1,
random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
for sparse_format in [csc_matrix, csr_matrix, lil_matrix, coo_matrix,
dok_matrix]:
X_train_sparse = sparse_format(X_train)
X_test_sparse = sparse_format(X_test)
# Trained on sparse format
sparse_classifier = AdaBoostRegressor(
base_estimator=CustomSVR(),
random_state=1
).fit(X_train_sparse, y_train)
# Trained on dense format
dense_classifier = AdaBoostRegressor(
base_estimator=CustomSVR(),
random_state=1
).fit(X_train, y_train)
# predict
sparse_results = sparse_classifier.predict(X_test_sparse)
dense_results = dense_classifier.predict(X_test)
assert_array_equal(sparse_results, dense_results)
# staged_predict
sparse_results = sparse_classifier.staged_predict(X_test_sparse)
dense_results = dense_classifier.staged_predict(X_test)
for sparse_res, dense_res in zip(sparse_results, dense_results):
    assert_array_equal(sparse_res, dense_res)
types = [i.data_type_ for i in sparse_classifier.estimators_]
assert all([(t == csc_matrix or t == csr_matrix)
for t in types])
# license: bsd-3-clause
# huaxz1986/git_book | chapters/PreProcessing/feature_selection_filter.py
# -*- coding: utf-8 -*-
"""
数据预处理
~~~~~~~~~~~~~~~~
过滤式特征选择
:copyright: (c) 2016 by the huaxz1986.
:license: lgpl-3.0, see LICENSE for more details.
"""
from sklearn.feature_selection import VarianceThreshold,SelectKBest,f_classif
def test_VarianceThreshold():
'''
Demonstrate the usage of VarianceThreshold
:return: None
'''
X=[[100,1,2,3],
[100,4,5,6],
[100,7,8,9],
[101,11,12,13]]
selector=VarianceThreshold(1)
selector.fit(X)
print("Variances is %s"%selector.variances_)
print("After transform is %s"%selector.transform(X))
print("The surport is %s"%selector.get_support(True))
print("After reverse transform is %s"%
selector.inverse_transform(selector.transform(X)))
def test_SelectKBest():
'''
Demonstrate the usage of SelectKBest, scoring features with f_classif
:return: None
'''
X=[ [1,2,3,4,5],
[5,4,3,2,1],
[3,3,3,3,3,],
[1,1,1,1,1] ]
y=[0,1,0,1]
print("before transform:",X)
selector=SelectKBest(score_func=f_classif,k=3)
selector.fit(X,y)
print("scores_:",selector.scores_)
print("pvalues_:",selector.pvalues_)
print("selected index:",selector.get_support(True))
print("after transform:",selector.transform(X))
if __name__=='__main__':
    test_VarianceThreshold() # run test_VarianceThreshold
    # test_SelectKBest() # run test_SelectKBest
# license: gpl-3.0
# AshleySetter/optoanalysis | PotentialComparisonMass.py
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import least_squares, curve_fit
def steady_state_potential(xdata,HistBins=100):
"""
Calculates the steady state potential.
Parameters
----------
xdata : ndarray
Position data for a degree of freedom
HistBins : int
Number of bins to use for histogram
of xdata. Number of position points
at which the potential is calculated.
Returns
-------
position : ndarray
positions at which potential has been
calculated
potential : ndarray
value of potential at the positions above
"""
import numpy as np
pops, bins = np.histogram(xdata, HistBins)
bins=bins[0:-1]
bins=bins+np.mean(np.diff(bins))
#normalise pops
pops=pops/float(np.sum(pops))
return bins,-np.log(pops)
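The Boltzmann inversion above can be sanity-checked on synthetic data: positions of a particle in a harmonic trap at thermal equilibrium are Gaussian, so `-log(p(x))` should come back quadratic. A minimal sketch (made-up data, positions in units of the trap's standard deviation, not a real trace):

```python
import numpy as np

rng = np.random.RandomState(0)
x = rng.normal(0.0, 1.0, size=200000)       # simulated positions, sigma = 1

pops, edges = np.histogram(x, 100)
centres = 0.5 * (edges[:-1] + edges[1:])    # bin centres
pops = pops / float(pops.sum())

with np.errstate(divide="ignore"):          # empty tail bins give inf; masked below
    potential = -np.log(pops)

# Fit a parabola near the well bottom; for p(x) ~ exp(-x^2/2) the quadratic
# coefficient of -log(p) should be close to 1/(2*sigma^2) = 0.5.
mask = (np.abs(centres) < 2.0) & np.isfinite(potential)
quad_coeff = np.polyfit(centres[mask], potential[mask], 2)[0]
```

With enough samples the recovered curvature pins down the trap stiffness, which is the idea the radius fit below relies on.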
def dynamical_potential(xdata, dt, order=3):
"""
Computes potential from spring function
Parameters
----------
xdata : ndarray
Position data for a degree of freedom,
at which to calculate potential
dt : float
time between measurements
order : int
order of polynomial to fit
Returns
-------
Potential : ndarray
valued of potential at positions in
xdata
"""
import numpy as np
adata = CalcAcceleration(xdata, dt)
xdata = xdata[2:] # differencing twice shortens the array by 2 samples,
# so acceleration[n] here lines up with position[n+2] of the raw trace
z=np.polyfit(xdata,adata,order)
p=np.poly1d(z)
spring_pot=np.polyint(p)
return -spring_pot
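A quick check of the fit-then-integrate step above (synthetic values, not from a real trace): for an ideal spring with `a(x) = -omega^2 * x`, the recovered per-unit-mass potential should be `omega^2 * x^2 / 2`.

```python
import numpy as np

x = np.linspace(-1.0, 1.0, 101)
a = -4.0 * x                          # ideal spring, omega^2 = 4

p = np.poly1d(np.polyfit(x, a, 3))    # same order-3 fit as dynamical_potential
potential = -np.polyint(p)            # U(x) = -integral of a(x) dx

# Expect U(1) = omega^2 / 2 = 2.0 (integration constant is zero)
u1 = float(potential(1.0))
```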
def CalcAcceleration(xdata, dt):
"""
Calculates the acceleration from the position
Parameters
----------
xdata : ndarray
Position data
dt : float
time between measurements
Returns
-------
acceleration : ndarray
values of acceleration from position
2 to N.
"""
acceleration = np.diff(np.diff(xdata))/dt**2
return acceleration
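The double-difference scheme is exact for a constant-acceleration trajectory, which makes a convenient self-test (trajectory below is invented):

```python
import numpy as np

dt = 1e-3
t = np.arange(0.0, 1.0, dt)
x = 0.5 * 3.0 * t**2                        # x(t) = a*t^2/2 with a = 3.0

acceleration = np.diff(np.diff(x)) / dt**2  # same scheme as CalcAcceleration
```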
import scipy.constants
def FitRadius(z, SampleFreq, Damping, HistBins=100):
"""
Fits the dynamical potential to the Steady
State Potential by varying the Radius.
z : ndarray
Position data
SampleFreq : float
frequency at which the position data was
sampled
Damping : float
value of damping (in radians/second)
HistBins : int
number of values at which to evaluate
the steady state potential / perform
the fitting to the dynamical potential
Returns
-------
Radius : float
Radius of the nanoparticle
RadiusError : float
One Standard Deviation Error in the Radius from the Fit
(doesn't take into account possible error in damping)
"""
dt = 1/SampleFreq
boltzmann=scipy.constants.Boltzmann
temp=300 # bath temperature in Kelvin
density=1800
SteadyStatePotnl = list(steady_state_potential(z, HistBins=HistBins))
yoffset=min(SteadyStatePotnl[1])
SteadyStatePotnl[1] -= yoffset
SpringPotnlFunc = dynamical_potential(z, dt)
SpringPotnl = SpringPotnlFunc(z)
kBT_Gamma = temp*boltzmann*1/Damping
#FitSoln = least_squares(GetResiduals, 50, args=(SteadyStatePotnl, SpringPotnlFunc, kBT_Gamma), full_output=True)
#print(FitSoln)
#RADIUS = FitSoln['x'][0]
DynamicPotentialFunc = MakeDynamicPotentialFunc(kBT_Gamma, density, SpringPotnlFunc)
FitSoln = curve_fit(DynamicPotentialFunc, SteadyStatePotnl[0], SteadyStatePotnl[1], p0 = 50)
print(FitSoln)
popt, pcov = FitSoln
perr = np.sqrt(np.diag(pcov))
Radius, RadiusError = popt[0], perr[0]
mass=((4/3)*np.pi*((Radius*10**-9)**3))*density
yfit=(kBT_Gamma/mass)
Y = yfit*SpringPotnl
fig, ax = plt.subplots()
ax.plot(SteadyStatePotnl[0], SteadyStatePotnl[1], 'bo', label="Steady State Potential")
plt.plot(z,Y, 'r-', label="Dynamical Potential")
ax.legend(loc='best')
ax.set_ylabel('U ($k_{B} T $ Joules)')
ax.set_xlabel('Distance (mV)')
plt.tight_layout()
plt.show()
return Radius, RadiusError
def GetResiduals(Radius, SteadyStatePotnl, SpringPotnlFunc, kBT_Gamma):
density=1800
mass = ((4/3)*np.pi*((Radius*10**-9)**3))*density
yfit=(kBT_Gamma/mass)
ZSteadyState = SteadyStatePotnl[0]
Y = yfit*SpringPotnlFunc(ZSteadyState)
Residuals = SteadyStatePotnl[1] - Y
return Residuals
def MakeDynamicPotentialFunc(kBT_Gamma, density, SpringPotnlFunc):
"""
Creates the function that calculates the potential given
the position (in volts) and the radius of the particle.
Parameters
----------
kBT_Gamma : float
Value of kB*T/Gamma
density : float
density of the nanoparticle
SpringPotnlFunc : function
Function which takes the value of position (in volts)
and returns the spring potential
Returns
-------
PotentialFunc : function
function that calculates the potential given
the position (in volts) and the radius of the
particle.
"""
def PotentialFunc(xdata, Radius):
"""
calculates the potential given the position (in volts)
and the radius of the particle.
Parameters
----------
xdata : ndarray
Positon data (in volts)
Radius : float
Radius in units of nm
Returns
-------
Potential : ndarray
Dynamical Spring Potential at positions given by xdata
"""
mass = ((4/3)*np.pi*((Radius*10**-9)**3))*density
yfit=(kBT_Gamma/mass)
Y = yfit*SpringPotnlFunc(xdata)
return Y
return PotentialFunc
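The factory above exists so that `curve_fit` sees a plain two-argument function `f(xdata, Radius)` with the physics constants baked into a closure. A stripped-down sketch of the same pattern (all numbers invented):

```python
import numpy as np

def make_potential_func(kBT_Gamma, density, spring_func):
    """Close over the fixed constants; return f(xdata, radius_nm)."""
    def potential(xdata, radius_nm):
        mass = (4 / 3) * np.pi * (radius_nm * 1e-9) ** 3 * density
        return (kBT_Gamma / mass) * spring_func(xdata)
    return potential

f = make_potential_func(kBT_Gamma=1e-20, density=1800.0,
                        spring_func=lambda x: x ** 2)
y = f(np.array([1.0, 2.0]), 50.0)   # only (xdata, radius) vary during fitting
```

Because the radius enters only through the mass prefactor, the shape of the curve is fixed by `spring_func` and the fit just rescales it.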
import optoanalysis as oa
dat = oa.load_data('testData.raw')
w0, A, G, _, _ = dat.get_fit_auto(70e3)
gamma = G.n
print(gamma)
z, t, _, _ = dat.filter_data(w0.n/(2*np.pi), 3, 20e3)
SampleFreq = dat.SampleFreq/3
R = FitRadius(z, SampleFreq, Damping=gamma, HistBins=120)
print(R)
# license: mit
# surhudm/scipy | scipy/spatial/_plotutils.py
from __future__ import division, print_function, absolute_import
import numpy as np
from scipy._lib.decorator import decorator as _decorator
__all__ = ['delaunay_plot_2d', 'convex_hull_plot_2d', 'voronoi_plot_2d']
@_decorator
def _held_figure(func, obj, ax=None, **kw):
import matplotlib.pyplot as plt
if ax is None:
fig = plt.figure()
ax = fig.gca()
was_held = ax.ishold()
try:
ax.hold(True)
return func(obj, ax=ax, **kw)
finally:
ax.hold(was_held)
def _adjust_bounds(ax, points):
ptp_bound = points.ptp(axis=0)
ax.set_xlim(points[:,0].min() - 0.1*ptp_bound[0],
points[:,0].max() + 0.1*ptp_bound[0])
ax.set_ylim(points[:,1].min() - 0.1*ptp_bound[1],
points[:,1].max() + 0.1*ptp_bound[1])
@_held_figure
def delaunay_plot_2d(tri, ax=None):
"""
Plot the given Delaunay triangulation in 2-D
Parameters
----------
tri : scipy.spatial.Delaunay instance
Triangulation to plot
ax : matplotlib.axes.Axes instance, optional
Axes to plot on
Returns
-------
fig : matplotlib.figure.Figure instance
Figure for the plot
See Also
--------
Delaunay
matplotlib.pyplot.triplot
Notes
-----
Requires Matplotlib.
"""
if tri.points.shape[1] != 2:
raise ValueError("Delaunay triangulation is not 2-D")
ax.plot(tri.points[:,0], tri.points[:,1], 'o')
ax.triplot(tri.points[:,0], tri.points[:,1], tri.simplices.copy())
_adjust_bounds(ax, tri.points)
return ax.figure
@_held_figure
def convex_hull_plot_2d(hull, ax=None):
"""
Plot the given convex hull diagram in 2-D
Parameters
----------
hull : scipy.spatial.ConvexHull instance
Convex hull to plot
ax : matplotlib.axes.Axes instance, optional
Axes to plot on
Returns
-------
fig : matplotlib.figure.Figure instance
Figure for the plot
See Also
--------
ConvexHull
Notes
-----
Requires Matplotlib.
"""
from matplotlib.collections import LineCollection
if hull.points.shape[1] != 2:
raise ValueError("Convex hull is not 2-D")
ax.plot(hull.points[:,0], hull.points[:,1], 'o')
line_segments = []
for simplex in hull.simplices:
line_segments.append([(x, y) for x, y in hull.points[simplex]])
ax.add_collection(LineCollection(line_segments,
colors='k',
linestyle='solid'))
_adjust_bounds(ax, hull.points)
return ax.figure
@_held_figure
def voronoi_plot_2d(vor, ax=None, **kw):
"""
Plot the given Voronoi diagram in 2-D
Parameters
----------
vor : scipy.spatial.Voronoi instance
Diagram to plot
ax : matplotlib.axes.Axes instance, optional
Axes to plot on
show_points: bool, optional
Add the Voronoi points to the plot.
show_vertices : bool, optional
Add the Voronoi vertices to the plot.
line_colors : string, optional
Specifies the line color for polygon boundaries
line_width : float, optional
Specifies the line width for polygon boundaries
line_alpha: float, optional
Specifies the line alpha for polygon boundaries
Returns
-------
fig : matplotlib.figure.Figure instance
Figure for the plot
See Also
--------
Voronoi
Notes
-----
Requires Matplotlib.
"""
from matplotlib.collections import LineCollection
if vor.points.shape[1] != 2:
raise ValueError("Voronoi diagram is not 2-D")
if kw.get('show_points', True):
ax.plot(vor.points[:,0], vor.points[:,1], '.')
if kw.get('show_vertices', True):
ax.plot(vor.vertices[:,0], vor.vertices[:,1], 'o')
line_colors = kw.get('line_colors', 'k')
line_width = kw.get('line_width', 1.0)
line_alpha = kw.get('line_alpha', 1.0)
line_segments = []
for simplex in vor.ridge_vertices:
simplex = np.asarray(simplex)
if np.all(simplex >= 0):
line_segments.append([(x, y) for x, y in vor.vertices[simplex]])
lc = LineCollection(line_segments,
colors=line_colors,
lw=line_width,
linestyle='solid')
lc.set_alpha(line_alpha)
ax.add_collection(lc)
ptp_bound = vor.points.ptp(axis=0)
line_segments = []
center = vor.points.mean(axis=0)
for pointidx, simplex in zip(vor.ridge_points, vor.ridge_vertices):
simplex = np.asarray(simplex)
if np.any(simplex < 0):
i = simplex[simplex >= 0][0] # finite end Voronoi vertex
t = vor.points[pointidx[1]] - vor.points[pointidx[0]] # tangent
t /= np.linalg.norm(t)
n = np.array([-t[1], t[0]]) # normal
midpoint = vor.points[pointidx].mean(axis=0)
direction = np.sign(np.dot(midpoint - center, n)) * n
far_point = vor.vertices[i] + direction * ptp_bound.max()
line_segments.append([(vor.vertices[i, 0], vor.vertices[i, 1]),
(far_point[0], far_point[1])])
lc = LineCollection(line_segments,
colors=line_colors,
lw=line_width,
linestyle='dashed')
lc.set_alpha(line_alpha)
ax.add_collection(lc)
_adjust_bounds(ax, vor.points)
return ax.figure
# license: bsd-3-clause
# UCBerkeleySETI/blimpy | blimpy/plotting/plot_spectrum.py
from .config import *
from ..utils import rebin, db
def plot_spectrum(wf, t=0, f_start=None, f_stop=None, logged=False, if_id=0, c=None, **kwargs):
""" Plot frequency spectrum of a given file
Args:
t (int): integration number to plot (0 -> len(data))
logged (bool): Plot in linear (False) or dB units (True)
if_id (int): IF identification (if multiple IF signals in file)
c: color for line
kwargs: keyword args to be passed to matplotlib plot()
"""
if wf.header['nbits'] <= 2:
logged = False
t = 'all'
ax = plt.gca()
plot_f, plot_data = wf.grab_data(f_start, f_stop, if_id)
# Using ascending frequency for all plots.
if wf.header['foff'] < 0:
plot_data = plot_data[..., ::-1] # Reverse data
plot_f = plot_f[::-1]
if isinstance(t, int):
print("extracting integration %i..." % t)
plot_data = plot_data[t]
elif t == 'all':
print("averaging along time axis...")
# Since the data has been squeezed, the axis for time goes away if only one bin, causing a bug with axis=1
if len(plot_data.shape) > 1:
plot_data = plot_data.mean(axis=0)
else:
plot_data = plot_data.mean()
else:
raise RuntimeError("Unknown integration %s" % t)
# Rebin to max number of points
dec_fac_x = 1
if plot_data.shape[0] > MAX_PLT_POINTS:
dec_fac_x = int(plot_data.shape[0] / MAX_PLT_POINTS)
plot_data = rebin(plot_data, dec_fac_x, 1)
plot_f = rebin(plot_f, dec_fac_x, 1)
if not c:
kwargs['c'] = '#333333'
if logged:
plt.plot(plot_f, db(plot_data), label='Stokes I', **kwargs)
plt.ylabel("Power [dB]")
else:
plt.plot(plot_f, plot_data, label='Stokes I', **kwargs)
plt.ylabel("Power [counts]")
plt.xlabel("Frequency [MHz]")
plt.legend()
try:
plt.title(wf.header['source_name'])
except KeyError:
plt.title(wf.filename)
plt.xlim(plot_f[0], plot_f[-1])
# license: bsd-3-clause
# dolaameng/keras | examples/mnist_sklearn_wrapper.py
'''Example of how to use sklearn wrapper
Builds simple CNN models on MNIST and uses sklearn's GridSearchCV to find the best model
'''
from __future__ import print_function
import numpy as np
np.random.seed(1337) # for reproducibility
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Convolution2D, MaxPooling2D
from keras.utils import np_utils
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.grid_search import GridSearchCV
nb_classes = 10
# input image dimensions
img_rows, img_cols = 28, 28
# load training data and do basic data normalization
(X_train, y_train), (X_test, y_test) = mnist.load_data()
X_train = X_train.reshape(X_train.shape[0], 1, img_rows, img_cols)
X_test = X_test.reshape(X_test.shape[0], 1, img_rows, img_cols)
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255
X_test /= 255
# convert class vectors to binary class matrices
y_train = np_utils.to_categorical(y_train, nb_classes)
y_test = np_utils.to_categorical(y_test, nb_classes)
def make_model(dense_layer_sizes, nb_filters, nb_conv, nb_pool):
'''Creates model comprised of 2 convolutional layers followed by dense layers
dense_layer_sizes: List of layer sizes. This list has one number for each layer
nb_filters: Number of convolutional filters in each convolutional layer
nb_conv: Convolutional kernel size
nb_pool: Size of pooling area for max pooling
'''
model = Sequential()
model.add(Convolution2D(nb_filters, nb_conv, nb_conv,
border_mode='valid',
input_shape=(1, img_rows, img_cols)))
model.add(Activation('relu'))
model.add(Convolution2D(nb_filters, nb_conv, nb_conv))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(nb_pool, nb_pool)))
model.add(Dropout(0.25))
model.add(Flatten())
for layer_size in dense_layer_sizes:
model.add(Dense(layer_size))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(nb_classes))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy',
optimizer='adadelta',
metrics=['accuracy'])
return model
dense_size_candidates = [[32], [64], [32, 32], [64, 64]]
my_classifier = KerasClassifier(make_model, batch_size=32)
validator = GridSearchCV(my_classifier,
param_grid={'dense_layer_sizes': dense_size_candidates,
# nb_epoch is available for tuning even when not
# an argument to model building function
'nb_epoch': [3, 6],
'nb_filters': [8],
'nb_conv': [3],
'nb_pool': [2]},
scoring='log_loss',
n_jobs=1)
validator.fit(X_train, y_train)
print('The parameters of the best model are: ')
print(validator.best_params_)
# validator.best_estimator_ returns sklearn-wrapped version of best model.
# validator.best_estimator_.model returns the (unwrapped) keras model
best_model = validator.best_estimator_.model
metric_names = best_model.metrics_names
metric_values = best_model.evaluate(X_test, y_test)
for metric, value in zip(metric_names, metric_values):
print(metric, ': ', value)
# license: mit
# adammenges/statsmodels | examples/python/regression_plots.py
## Regression Plots
from __future__ import print_function
from statsmodels.compat import lzip
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import statsmodels.api as sm
from statsmodels.formula.api import ols
### Duncan's Prestige Dataset
#### Load the Data
# We can use a utility function to load any R dataset available from the great <a href="http://vincentarelbundock.github.com/Rdatasets/">Rdatasets package</a>.
prestige = sm.datasets.get_rdataset("Duncan", "car", cache=True).data
prestige.head()
prestige_model = ols("prestige ~ income + education", data=prestige).fit()
print(prestige_model.summary())
#### Influence plots
# Influence plots show the (externally) studentized residuals vs. the leverage of each observation as measured by the hat matrix.
#
# Externally studentized residuals are residuals that are scaled by their standard deviation where
#
# $$var(\hat{\epsilon}_i)=\hat{\sigma}^2_i(1-h_{ii})$$
#
# with
#
# $$\hat{\sigma}^2_i=\frac{1}{n - p - 1}\sum_{j=1,\, j \neq i}^{n}\hat{\epsilon}_j^2$$
#
# $n$ is the number of observations and $p$ is the number of regressors. $h_{ii}$ is the $i$-th diagonal element of the hat matrix
#
# $$H=X(X^{\;\prime}X)^{-1}X^{\;\prime}$$
#
# The influence of each point can be visualized by the criterion keyword argument. Options are Cook's distance and DFFITS, two measures of influence.
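A small numerical aside (synthetic design matrix, not the prestige data): the leverages $h_{ii}$ on the influence plot's x-axis are the diagonal of the hat matrix, so each lies in $[0, 1]$ and together they sum to the number of regressors.

```python
import numpy as np

rng = np.random.RandomState(0)
X = np.column_stack([np.ones(45), rng.normal(size=(45, 2))])  # intercept + 2 regressors

H = X @ np.linalg.inv(X.T @ X) @ X.T   # hat matrix H = X (X'X)^-1 X'
h_ii = np.diag(H)                      # leverages; sum equals trace(H) = 3
```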
fig, ax = plt.subplots(figsize=(12,8))
fig = sm.graphics.influence_plot(prestige_model, ax=ax, criterion="cooks")
# As you can see there are a few worrisome observations. Both contractor and reporter have low leverage but a large residual. <br />
# RR.engineer has small residual and large leverage. Conductor and minister have both high leverage and large residuals, and, <br />
# therefore, large influence.
#### Partial Regression Plots
# Since we are doing multivariate regressions, we cannot just look at individual bivariate plots to discern relationships. <br />
# Instead, we want to look at the relationship of the dependent variable and independent variables conditional on the other <br />
# independent variables. We can do this through using partial regression plots, otherwise known as added variable plots. <br />
#
# In a partial regression plot, to discern the relationship between the response variable and the $k$-th variable, we compute <br />
# the residuals by regressing the response variable versus the independent variables excluding $X_k$. We can denote this by <br />
# $X_{\sim k}$. We then compute the residuals by regressing $X_k$ on $X_{\sim k}$. The partial regression plot is the plot <br />
# of the former versus the latter residuals. <br />
#
# The notable points of this plot are that the fitted line has slope $\beta_k$ and intercept zero. The residuals of this plot <br />
# are the same as those of the least squares fit of the original model with full $X$. You can discern the effects of the <br />
# individual data values on the estimation of a coefficient easily. If obs_labels is True, then these points are annotated <br />
# with their observation label. You can also see the violation of underlying assumptions such as homoskedasticity and <br />
# linearity.
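The slope claim above can be verified numerically on simulated data (this is the Frisch-Waugh-Lovell theorem; the numbers below are made up, not the prestige data):

```python
import numpy as np

rng = np.random.RandomState(0)
n = 500
others = np.column_stack([np.ones(n), rng.normal(size=n)])  # X_{~k}: intercept + 1 regressor
x_k = 0.5 * others[:, 1] + rng.normal(size=n)               # regressor of interest, correlated
X = np.column_stack([others, x_k])
y = X @ np.array([1.0, 2.0, 3.0]) + rng.normal(size=n)

beta_full = np.linalg.lstsq(X, y, rcond=None)[0]            # full multivariate OLS

# Residualize y and x_k against the remaining regressors, then regress
# residuals on residuals: the slope matches the full-model coefficient of x_k.
res_y = y - others @ np.linalg.lstsq(others, y, rcond=None)[0]
res_x = x_k - others @ np.linalg.lstsq(others, x_k, rcond=None)[0]
slope = (res_x @ res_y) / (res_x @ res_x)
```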
fig, ax = plt.subplots(figsize=(12,8))
fig = sm.graphics.plot_partregress("prestige", "income", ["income", "education"], data=prestige, ax=ax)
ax = fig.axes[0]
ax.set_xlim(-2e-15, 1e-14)
ax.set_ylim(-25, 30);
fix, ax = plt.subplots(figsize=(12,14))
fig = sm.graphics.plot_partregress("prestige", "income", ["education"], data=prestige, ax=ax)
# As you can see the partial regression plot confirms the influence of conductor, minister, and RR.engineer on the partial relationship between income and prestige. The cases greatly decrease the effect of income on prestige. Dropping these cases confirms this.
subset = ~prestige.index.isin(["conductor", "RR.engineer", "minister"])
prestige_model2 = ols("prestige ~ income + education", data=prestige, subset=subset).fit()
print(prestige_model2.summary())
# For a quick check of all the regressors, you can use plot_partregress_grid. These plots will not label the <br />
# points, but you can use them to identify problems and then use plot_partregress to get more information.
fig = plt.figure(figsize=(12,8))
fig = sm.graphics.plot_partregress_grid(prestige_model, fig=fig)
#### Component-Component plus Residual (CCPR) Plots
# The CCPR plot provides a way to judge the effect of one regressor on the <br />
# response variable by taking into account the effects of the other <br />
# independent variables. The partial residuals plot is defined as <br />
# $\text{Residuals} + B_iX_i \text{ }\text{ }$ versus $X_i$. The component adds $B_iX_i$ versus <br />
# $X_i$ to show where the fitted line would lie. Care should be taken if $X_i$ <br />
# is highly correlated with any of the other independent variables. If this <br />
# is the case, the variance evident in the plot will be an underestimate of <br />
# the true variance.
fig, ax = plt.subplots(figsize=(12, 8))
fig = sm.graphics.plot_ccpr(prestige_model, "education", ax=ax)
# As you can see the relationship between the variation in prestige explained by education conditional on income seems to be linear, though you can see there are some observations that are exerting considerable influence on the relationship. We can quickly look at more than one variable by using plot_ccpr_grid.
fig = plt.figure(figsize=(12, 8))
fig = sm.graphics.plot_ccpr_grid(prestige_model, fig=fig)
#### Regression Plots
# The plot_regress_exog function is a convenience function that gives a 2x2 plot containing the dependent variable and fitted values with confidence intervals vs. the independent variable chosen, the residuals of the model vs. the chosen independent variable, a partial regression plot, and a CCPR plot. This function can be used for quickly checking modeling assumptions with respect to a single regressor.
fig = plt.figure(figsize=(12,8))
fig = sm.graphics.plot_regress_exog(prestige_model, "education", fig=fig)
#### Fit Plot
# The plot_fit function plots the fitted values versus a chosen independent variable. It includes prediction confidence intervals and optionally plots the true dependent variable.
fig, ax = plt.subplots(figsize=(12, 8))
fig = sm.graphics.plot_fit(prestige_model, "education", ax=ax)
### Statewide Crime 2009 Dataset
# Compare the following to http://www.ats.ucla.edu/stat/stata/webbooks/reg/chapter4/statareg_self_assessment_answers4.htm
#
# Though the data here is not the same as in that example. You could run that example by uncommenting the necessary cells below.
#dta = pd.read_csv("http://www.stat.ufl.edu/~aa/social/csv_files/statewide-crime-2.csv")
#dta = dta.set_index("State", inplace=True).dropna()
#dta.rename(columns={"VR" : "crime",
# "MR" : "murder",
# "M" : "pctmetro",
# "W" : "pctwhite",
# "H" : "pcths",
# "P" : "poverty",
# "S" : "single"
# }, inplace=True)
#
#crime_model = ols("murder ~ pctmetro + poverty + pcths + single", data=dta).fit()
dta = sm.datasets.statecrime.load_pandas().data
crime_model = ols("murder ~ urban + poverty + hs_grad + single", data=dta).fit()
print(crime_model.summary())
#### Partial Regression Plots
fig = plt.figure(figsize=(12,8))
fig = sm.graphics.plot_partregress_grid(crime_model, fig=fig)
fig, ax = plt.subplots(figsize=(12,8))
fig = sm.graphics.plot_partregress("murder", "hs_grad", ["urban", "poverty", "single"], ax=ax, data=dta)
#### Leverage-Resid<sup>2</sup> Plot
# Closely related to the influence_plot is the leverage-resid<sup>2</sup> plot.
fig, ax = plt.subplots(figsize=(8,6))
fig = sm.graphics.plot_leverage_resid2(crime_model, ax=ax)
#### Influence Plot
fig, ax = plt.subplots(figsize=(8,6))
fig = sm.graphics.influence_plot(crime_model, ax=ax)
#### Using robust regression to correct for outliers.
# Part of the problem here in recreating the Stata results is that M-estimators are not robust to leverage points. MM-estimators should do better with this example.
from statsmodels.formula.api import rlm
rob_crime_model = rlm("murder ~ urban + poverty + hs_grad + single", data=dta,
M=sm.robust.norms.TukeyBiweight(3)).fit(conv="weights")
print(rob_crime_model.summary())
#rob_crime_model = rlm("murder ~ pctmetro + poverty + pcths + single", data=dta, M=sm.robust.norms.TukeyBiweight()).fit(conv="weights")
#print(rob_crime_model.summary())
# There are not yet influence diagnostics as part of RLM, but we can recreate them. (This depends on the status of [issue #888](https://github.com/statsmodels/statsmodels/issues/808))
weights = rob_crime_model.weights
idx = weights > 0
X = rob_crime_model.model.exog[idx]
ww = weights[idx] / weights[idx].mean()
hat_matrix_diag = ww*(X*np.linalg.pinv(X).T).sum(1)
resid = rob_crime_model.resid
resid2 = resid**2
resid2 /= resid2.sum()
nobs = int(idx.sum())
hm = hat_matrix_diag.mean()
rm = resid2.mean()
from statsmodels.graphics import utils
fig, ax = plt.subplots(figsize=(12,8))
ax.plot(resid2[idx], hat_matrix_diag, 'o')
ax = utils.annotate_axes(range(nobs), labels=rob_crime_model.model.data.row_labels[idx],
points=lzip(resid2[idx], hat_matrix_diag), offset_points=[(-5,5)]*nobs,
size="large", ax=ax)
ax.set_xlabel("resid2")
ax.set_ylabel("leverage")
ylim = ax.get_ylim()
ax.vlines(rm, *ylim)
xlim = ax.get_xlim()
ax.hlines(hm, *xlim)
ax.margins(0,0)
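As a quick sanity check on the leverage formula used above, the row-wise product `(X * pinv(X).T).sum(1)` should reproduce the diagonal of the full hat matrix H = X·pinv(X). The sketch below verifies this on a hypothetical random design matrix (the robust weights `ww` from the example are omitted):

```python
import numpy as np

rng = np.random.RandomState(0)
X = rng.randn(20, 3)  # hypothetical full-rank design matrix

# row-wise formula from the example above (without the robust weights ww):
# h_i = sum_j X[i, j] * pinv(X)[j, i]
hat_diag = (X * np.linalg.pinv(X).T).sum(1)

# compare against the diagonal of the full hat matrix H = X @ pinv(X)
H = np.dot(X, np.linalg.pinv(X))
assert np.allclose(hat_diag, np.diag(H))

# leverages of a full-rank design sum to its column count (trace of a projection)
assert np.isclose(hat_diag.sum(), 3.0)
```

The row-wise form avoids materializing the full n-by-n hat matrix, which matters once n is large.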
| bsd-3-clause |
bigbigdata/IEA-electricity-statistics | IEA_extract.py | 1 | 1607 | import xlrd
import pandas
import matplotlib.pyplot as plt
from pandas import DataFrame
OrgIndex=[] # index of organization
CombustibuleYear=[]
NuclearYear=[]
HydroYear=[]
GWSOYear=[] #Geothermal + wind + solar + other
CombustibuleMonth=[]
NuclearMonth=[]
HydroMonth=[]
GWSOMonth=[] #Geothermal + wind + solar + other
yrcol=[]
yr = range(2007,2014)
month = range(1,13)
for i in yr:
    for j in month[-1:]:
        if j < 10:
            date = str(i) + '0' + str(j)
        else:
            date = str(i) + str(j)
        filename = date + '.xls'
        wb = xlrd.open_workbook(filename)
        SheetNames = wb.sheet_names()
        for sheet_name in SheetNames:
            if sheet_name[0:5] == 'Table':
                sh = wb.sheet_by_name(sheet_name)
                # Extract yearly data first
                OrgIndex.append(sh.cell_value(3,0).encode('ascii','ignore'))
                yrcol.append(int(sh.cell_value(7,15)))
                CombustibuleYear.append(sh.cell_value(9,15))
                NuclearYear.append(sh.cell_value(10,15))
                HydroYear.append(sh.cell_value(11,15))
                GWSOYear.append(sh.cell_value(12,15))
df = DataFrame([OrgIndex,yrcol,CombustibuleYear,NuclearYear,HydroYear,GWSOYear])
df = df.transpose()
df.columns=['Org','date','Comb','Nuclear','Hydro','GWSO']
#print US as an example
df1 = df[df.Org=="UNITED STATES"]
df2 = df1.GWSO
df2.index=df1.date.values
#plot
fig=plt.figure(); ax=fig.add_subplot(1,1,1)
df2.plot(kind='bar')
ax.set_xlabel('Year')
ax.set_ylabel('GWSO Energy (GWh)')
plt.legend(loc='upper center',bbox_to_anchor=(0.5,-0.05),fancybox=True,shadow=True,ncol=6)
plt.savefig('IEA_US_GSWO_Annual.tiff')
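The `if j < 10` branch above builds a zero-padded `YYYYMM` string for the file name. A compact standalone equivalent (hypothetical year/month values) using printf-style padding:

```python
# equivalent to the if/else zero-padding in the loop above
def yyyymm(year, month):
    return "%d%02d" % (year, month)

assert yyyymm(2007, 1) == "200701"
assert yyyymm(2010, 9) == "201009"
assert yyyymm(2013, 12) == "201312"
```

`%02d` pads single-digit months with a leading zero, so the branch on `j < 10` becomes unnecessary.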
| mit |
SENeC-Initiative/PyNCulture | setup.py | 1 | 1509 | #!/usr/bin/env python
#-*- coding:utf-8 -*-
import os, errno
from setuptools import setup, find_packages
# create directory
directory = 'PyNCulture/'
try:
    os.makedirs(directory)
except OSError as e:
    if e.errno != errno.EEXIST:
        raise
# move the important files into the package directory so setuptools can find them
move = (
'__init__.py',
'LICENSE',
'dxf_import',
'examples',
'backup_shape.py',
'dxftools.py',
'geom_utils.py',
'plot.py',
'pync_log.py',
'shape.py',
'svgtools.py',
)
for fname in move:
    os.rename(fname, directory + fname)
from PyNCulture import __version__
try:
    # install
    setup(
        name = 'PyNCulture',
        version = __version__,
        description = ('Python module to describe neuronal cultures as '
                       'complex shapes.'),
        package_dir = {'': '.'},
        packages = find_packages('.'),
        # Requirements
        install_requires = ['numpy', 'scipy>=0.11'],
        extras_require = {
            'dxfgrabber': 'dxfgrabber',
            'matplotlib': 'matplotlib',
            'PyOpenGL': 'PyOpenGL',
            'shapely': 'shapely',
            'svg.path': 'svg.path',
        },
        # Metadata
        url = 'https://github.com/Silmathoron/PyNCulture',
        author = 'Tanguy Fardet, Samuel Bottani',
        author_email = 'tanguy.fardet@univ-paris-diderot.fr',
        license = 'GPL3',
        keywords = 'neuronal cultures geometry'
    )
finally:
    for fname in move:
        os.rename(directory + fname, fname)
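The setup script above uses a move-then-restore pattern: relocate source files into the package directory, run `setup()`, then move them back in a `finally` block so the original layout is restored even if installation fails. A minimal standalone sketch of that pattern (temporary files only, no real package involved):

```python
import os
import tempfile

workdir = tempfile.mkdtemp()
src = os.path.join(workdir, "module.py")
open(src, "w").close()

pkg = os.path.join(workdir, "pkg")
os.makedirs(pkg)
moved = os.path.join(pkg, "module.py")

# move the file into the package directory
os.rename(src, moved)
try:
    pass  # setup() would run here
finally:
    # restore the original layout regardless of success
    os.rename(moved, src)

assert os.path.exists(src)
assert not os.path.exists(moved)
```

The `finally` clause is what makes the restore unconditional; without it, a failed `setup()` would leave the repository checkout rearranged.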
| gpl-3.0 |
msarahan/bokeh | bokeh/charts/builders/bar_builder.py | 1 | 12402 | """This is the Bokeh charts interface. It gives you a high level API to build
complex plots in a simple way.
This is the Bar class which lets you build your Bar charts by simply passing
the arguments to the Chart class and calling the proper functions.
It also adds a new chained stacked method.
"""
# -----------------------------------------------------------------------------
# Copyright (c) 2012 - 2014, Continuum Analytics, Inc. All rights reserved.
#
# Powered by the Bokeh Development Team.
#
# The full license is in the file LICENSE.txt, distributed with this software.
# -----------------------------------------------------------------------------
# -----------------------------------------------------------------------------
# Imports
# -----------------------------------------------------------------------------
from __future__ import absolute_import, print_function, division
from ..builder import Builder, create_and_build
from ...models import FactorRange, Range1d
from ..glyphs import BarGlyph
from ...core.properties import Float, Enum, Bool, Override
from ..properties import Dimension
from ..attributes import ColorAttr, CatAttr
from ..operations import Stack, Dodge
from ...core.enums import Aggregation
from ..stats import stats
from ...models.sources import ColumnDataSource
from ..utils import help
# -----------------------------------------------------------------------------
# Classes and functions
# -----------------------------------------------------------------------------
class BarBuilder(Builder):
"""This is the Bar builder and it is in charge of plotting
Bar chart (grouped and stacked) in an easy and intuitive way.
Essentially, it utilizes a standardized way to ingest the data,
make the proper calculations and generate renderers. The renderers
reference the transformed data, which represent the groups of data
that were derived from the inputs. We additionally make calculations
for the ranges.
The x_range is categorical, and is made either from the label argument
or from the `pandas.DataFrame.index`. The y_range can be supplied as the
parameter continuous_range, or will be calculated as a linear range
(Range1d) based on the supplied values.
The bar builder is and can be further used as a base class for other
builders that might also be performing some aggregation across
derived groups of data.
"""
# ToDo: add label back as a discrete dimension
values = Dimension('values')
dimensions = ['values']
# req_dimensions = [['values']]
default_attributes = {'label': CatAttr(),
'color': ColorAttr(),
'line_color': ColorAttr(default='white'),
'stack': CatAttr(),
'group': CatAttr()}
agg = Enum(Aggregation, default='sum')
max_height = Float(1.0)
min_height = Float(0.0)
bar_width = Float(default=0.8)
fill_alpha = Float(default=0.8)
glyph = BarGlyph
comp_glyph_types = Override(default=[BarGlyph])
label_attributes = ['stack', 'group']
label_only = Bool(False)
values_only = Bool(False)
_perform_stack = False
_perform_group = False
def setup(self):
if self.attributes['color'].columns is None:
if self.attributes['stack'].columns is not None:
self.attributes['color'].setup(columns=self.attributes['stack'].columns)
if self.attributes['group'].columns is not None:
self.attributes['color'].setup(columns=self.attributes['group'].columns)
if self.attributes['stack'].columns is not None:
self._perform_stack = True
if self.attributes['group'].columns is not None:
self._perform_group = True
# ToDo: perform aggregation validation
# Not given values kw, so using only categorical data
if self.values.dtype.name == 'object' and len(self.attribute_columns) == 0:
# agg must be count
self.agg = 'count'
self.attributes['label'].set_columns(self.values.selection)
else:
pass
self._apply_inferred_index()
if self.xlabel is None:
if self.attributes['label'].columns is not None:
self.xlabel = str(
', '.join(self.attributes['label'].columns).title()).title()
else:
self.xlabel = self.values.selection
if self.ylabel is None:
if not self.label_only:
self.ylabel = '%s( %s )' % (
self.agg.title(), str(self.values.selection).title())
else:
self.ylabel = '%s( %s )' % (
self.agg.title(), ', '.join(self.attributes['label'].columns).title())
def _apply_inferred_index(self):
"""Configure chart when labels are provided as index instead of as kwarg."""
# try to infer grouping vs stacking labels
if (self.attributes['label'].columns is None and
self.values.selection is not None):
if self.attributes['stack'].columns is not None:
special_column = 'unity'
else:
special_column = 'index'
self._data['label'] = special_column
self.attributes['label'].setup(data=ColumnDataSource(self._data.df),
columns=special_column)
self.xlabel = ''
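The branch in `_apply_inferred_index` above picks a synthetic label column when none was given: `'unity'` when stacking (all bars share one x position) and `'index'` otherwise. A pure-python sketch of just that decision (the `ColumnDataSource` wiring is omitted, and the function name is hypothetical):

```python
def inferred_label_column(label_cols, values_sel, stack_cols):
    # mirrors the branch in _apply_inferred_index above
    if label_cols is None and values_sel is not None:
        return 'unity' if stack_cols is not None else 'index'
    return None  # an explicit label was provided; nothing to infer

assert inferred_label_column(None, 'timing', ['sample']) == 'unity'
assert inferred_label_column(None, 'timing', None) == 'index'
assert inferred_label_column(['interpreter'], 'timing', None) is None
```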
def set_ranges(self):
"""Push the Bar data into the ColumnDataSource and calculate
the proper ranges.
"""
x_items = self.attributes['label'].items
if x_items is None:
x_items = ''
x_labels = []
# Items are identified by tuples. If the tuple has a single value,
# we unpack it
for item in x_items:
item = self._get_label(item)
x_labels.append(str(item))
self.x_range = FactorRange(factors=x_labels)
y_shift = abs(0.1 * ((self.min_height + self.max_height) / 2))
if self.min_height < 0:
start = self.min_height - y_shift
else:
start = 0.0
if self.max_height > 0:
end = self.max_height + y_shift
else:
end = 0.0
self.y_range = Range1d(start=start, end=end)
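The y-range logic in `set_ranges` pads each nonzero bound outward by 10% of the mid-height and clamps the bound nearest zero at zero. A standalone sketch of that arithmetic (plain tuples instead of `Range1d`, hypothetical heights):

```python
def padded_range(min_height, max_height):
    # mirror of the Range1d padding logic in set_ranges above
    y_shift = abs(0.1 * ((min_height + max_height) / 2.0))
    start = min_height - y_shift if min_height < 0 else 0.0
    end = max_height + y_shift if max_height > 0 else 0.0
    return start, end

assert padded_range(0.0, 10.0) == (0.0, 10.5)
# symmetric data has mid-height zero, so no shift is applied
assert padded_range(-4.0, 4.0) == (-4.0, 4.0)
```

Note the symmetric case: when the heights straddle zero evenly, the mid-height (and hence the shift) is zero, so the bars touch the plot edges.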
def get_extra_args(self):
if self.__class__ is not BarBuilder:
attrs = self.properties(with_bases=False)
return {attr: getattr(self, attr) for attr in attrs}
else:
return {}
def yield_renderers(self):
"""Use the rect glyphs to display the bars.
Takes reference points from data loaded at the ColumnDataSource.
"""
kwargs = self.get_extra_args()
attrs = self.collect_attr_kwargs()
for group in self._data.groupby(**self.attributes):
glyph_kwargs = self.get_group_kwargs(group, attrs)
group_kwargs = kwargs.copy()
group_kwargs.update(glyph_kwargs)
props = self.glyph.properties().difference(set(['label']))
# make sure we always pass the color and line color
for k in ['color', 'line_color']:
group_kwargs[k] = group[k]
# TODO(fpliger): we shouldn't need to do this to ensure we don't
# have extra kwargs... this is needed now because
# of label, group and stack being "special"
for k in set(group_kwargs):
if k not in props:
group_kwargs.pop(k)
bg = self.glyph(label=group.label,
x_label=self._get_label(group['label']),
values=group.data[self.values.selection].values,
agg=stats[self.agg](),
width=self.bar_width,
fill_alpha=self.fill_alpha,
stack_label=self._get_label(group['stack']),
dodge_label=self._get_label(group['group']),
**group_kwargs)
self.add_glyph(group, bg)
if self._perform_stack:
Stack().apply(self.comp_glyphs)
if self._perform_group:
Dodge().apply(self.comp_glyphs)
# a higher-level responsibility of the bar chart is to keep track of the max/min height across all bars
self.max_height = max([renderer.y_max for renderer in self.comp_glyphs])
self.min_height = min([renderer.y_min for renderer in self.comp_glyphs])
for renderer in self.comp_glyphs:
for sub_renderer in renderer.renderers:
yield sub_renderer
@help(BarBuilder)
def Bar(data, label=None, values=None, color=None, stack=None, group=None, agg="sum",
xscale="categorical", yscale="linear", xgrid=False, ygrid=True,
continuous_range=None, **kw):
""" Create a Bar chart using :class:`BarBuilder <bokeh.charts.builders.bar_builder.BarBuilder>`
render the geometry from values, cat and stacked.
Args:
data (:ref:`userguide_charts_data_types`): the data
source for the chart.
label (list(str) or str, optional): list of string representing the categories.
(Defaults to None)
values (str, optional): iterable 2d representing the data series
values matrix.
color (str or list(str) or `~bokeh.charts._attributes.ColorAttr`): string color,
string column name, list of string columns or a custom `ColorAttr`,
which replaces the default `ColorAttr` for the builder.
stack (list(str) or str, optional): columns to use for stacking.
(Defaults to False, so grouping is assumed)
group (list(str) or str, optional): columns to use for grouping.
agg (str): how to aggregate the `values`. (Defaults to 'sum'; if only label is
provided, then performs a `count`)
continuous_range(Range1d, optional): Custom continuous_range to be
used. (Defaults to None)
In addition to the parameters specific to this chart,
:ref:`userguide_charts_defaults` are also accepted as keyword parameters.
Returns:
:class:`Chart`: includes glyph renderers that generate bars
Examples:
.. bokeh-plot::
:source-position: above
from bokeh.charts import Bar, output_file, show, hplot
# best support is with data in a format that is table-like
data = {
'sample': ['1st', '2nd', '1st', '2nd', '1st', '2nd'],
'interpreter': ['python', 'python', 'pypy', 'pypy', 'jython', 'jython'],
'timing': [-2, 5, 12, 40, 22, 30]
}
# x-axis labels pulled from the interpreter column, stacking labels from sample column
bar = Bar(data, values='timing', label='interpreter', stack='sample', agg='mean',
title="Python Interpreter Sampling", legend='top_right', plot_width=400)
# table-like data results in reconfiguration of the chart with no data manipulation
bar2 = Bar(data, values='timing', label=['interpreter', 'sample'],
agg='mean', title="Python Interpreters", plot_width=400)
output_file("stacked_bar.html")
show(hplot(bar, bar2))
"""
if continuous_range and not isinstance(continuous_range, Range1d):
raise ValueError(
"continuous_range must be an instance of bokeh.models.ranges.Range1d"
)
if label is not None and values is None:
kw['label_only'] = True
if (agg == 'sum') or (agg == 'mean'):
agg = 'count'
values = label
# The continuous_range is the y_range (until we implement HBar charts)
y_range = continuous_range
kw['label'] = label
kw['values'] = values
kw['color'] = color
kw['stack'] = stack
kw['group'] = group
kw['agg'] = agg
kw['xscale'] = xscale
kw['yscale'] = yscale
kw['xgrid'] = xgrid
kw['ygrid'] = ygrid
kw['y_range'] = y_range
chart = create_and_build(BarBuilder, data, **kw)
# hide x labels if there is a single value, implying stacking only
if len(chart.x_range.factors) == 1:
chart.below[0].visible = False
return chart
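When only `label` is given, `Bar` above coerces a mean/sum aggregation to a count and reuses the label column as the values. The core of that branch can be sketched in isolation (the `label_only` flag and chart construction are omitted, and the helper name is hypothetical):

```python
def resolve_agg(label, values, agg):
    # mirrors the label-only branch of Bar() above
    if label is not None and values is None:
        if agg in ('sum', 'mean'):
            agg = 'count'
        values = label
    return values, agg

# label only: summing category names is meaningless, so count them instead
assert resolve_agg('interpreter', None, 'sum') == ('interpreter', 'count')
# explicit values: the requested aggregation is kept
assert resolve_agg('interpreter', 'timing', 'sum') == ('timing', 'sum')
```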
| bsd-3-clause |
matthiasmengel/sealevel | sealevel/projection.py | 1 | 5508 | # This file is part of SEALEVEL - a tool to estimates future sea-level rise
# constrained by past obervations and long-term sea-level commitment
# Copyright (C) 2016 Matthias Mengel working at PIK Potsdam
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# LICENSE.txt for more details.
import os
import numpy as np
import pandas as pd
import dimarray as da
import sealevel.contributor_functions as cf
reload(cf)
project_dir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
inputdatadir = os.path.join(project_dir, "data/")
######## parameters that need to be known for calibrations and projection
# add temperature offset to be used with the box & colgan 2013 data to fit past observations
# similar offset is used by e.g. Rahmstorf 2007, Science.
gis_colgan_temperature_offset = 0.5
######## sea level projection using Monte-Carlo sampling ########
def project(gmt, proj_period, calibdata, temp_anomaly_year, sl_contributor,
sample_number, contrib_name):
"""
Monte Carlo sampling for slr contribution
for a single global mean temperature (gmt) timeseries or
an ensemble of gmt timeseries,
for one random choice of observations obs_choice,
and one random tuple of independent and dependent parameter.
the contributor function (i.e. thermal expansion) is chosen through
contrib_name.
Parameters
----------
gmt : single or ensemble of gmt timeseries
proj_period : time period for which slr projection is done
calibdata : calibration data for the several observations per component
temp_anomaly_year: year in which global mean temperature passes zero,
depending on observation.
sl_contributor: function to calculate transient sea level rise.
sample_number : number for seed to be created to make sampling reproducible
Returns
-------
contrib : timeseries of sea level contribution with length proj_period
"""
np.random.seed(sample_number)
try:
gmt_ensemble_size = gmt.shape[1]
gmt_choice = np.random.randint(gmt_ensemble_size)
# print gmt_choice
driving_temperature = gmt[:, gmt_choice]
except IndexError:
# this is the case if single gmt is supplied
gmt_choice = 0
driving_temperature = gmt
# print contrib_name, temp_anomaly_year
# convert index to str to avoid floating point issues
calibdata.index = [str(i) for i in calibdata.index]
# use one of the observational dataset
obs_choice = np.random.choice(calibdata.index.unique())
params_of_obs = calibdata.loc[obs_choice]
# print params_of_obs
# temp_anomaly_year = params.temp_anomaly_year
if obs_choice == "box_colgan13":
driving_temperature += gis_colgan_temperature_offset
# for dp16, the different ensemble members are interpreted
# as different observations, so selection already happened
# above through obs_choice
if contrib_name == "ant_dp16":
params = params_of_obs
else:
# choose a random parameter set
paramset_choice = np.random.randint(len(params_of_obs.index))
# can be variable number of parameters per each observation
params = params_of_obs.iloc[paramset_choice,:]
# print "pp",params
contributor = sl_contributor(params, temp_anomaly_year.loc[obs_choice][0])
contrib = contributor.calc_contribution(
driving_temperature,proj_period)
# print contrib
return [contrib, gmt_choice, obs_choice, params]
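The `np.random.seed(sample_number)` call at the top of `project` makes each Monte Carlo realization reproducible: the same sample number always yields the same ensemble-member and observation choices. A minimal standalone sketch of that property:

```python
import numpy as np

def draw(sample_number, ensemble_size):
    # mirrors np.random.seed(sample_number) + np.random.randint(...) in project()
    np.random.seed(sample_number)
    return np.random.randint(ensemble_size)

# identical sample numbers give identical draws
assert draw(7, 100) == draw(7, 100)
# draws stay within the ensemble bounds
assert 0 <= draw(0, 5) < 5
```

This is why `sample_number` is passed through to `project` for each realization: rerunning a projection with the same settings reproduces the exact same sampled contributions.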
def project_slr(scen, gmt, settings):
projection_data = {}
temp_anomaly_years = pd.read_csv(os.path.join(
settings.calibfolder, "temp_anomaly_years.csv"),index_col=[0,1])
temp_anomaly_years = temp_anomaly_years.where(
pd.notnull(temp_anomaly_years), None)
for i, contrib_name in enumerate(settings.project_these):
print "contribution", contrib_name
realizations = np.arange(settings.nrealizations)
calibdata = pd.read_csv(
os.path.join(settings.calibfolder, contrib_name+".csv"),
index_col=[0])
temp_anomaly_year = temp_anomaly_years.loc[contrib_name]
sl_contributor = cf.contributor_functions[contrib_name]
proj = np.zeros([len(settings.proj_period), settings.nrealizations])
for n in realizations:
slr, gmt_n, obs_choice, params = project(
gmt, settings.proj_period, calibdata, temp_anomaly_year,
sl_contributor, n, contrib_name)
proj[:, n] = slr
pdata = da.DimArray(proj, axes=[settings.proj_period, realizations],
dims=["time", "runnumber"])
projection_data[contrib_name] = pdata
if not os.path.exists(settings.projected_slr_folder):
os.makedirs(settings.projected_slr_folder)
fname = "projected_slr_"+scen+"_n"+str(settings.nrealizations)+".nc"
da.Dataset(projection_data).write_nc(os.path.join(
settings.projected_slr_folder,fname))
print "Sea level projection data written to"
print settings.projected_slr_folder | gpl-3.0 |
lancezlin/ml_template_py | lib/python2.7/site-packages/pandas/tools/merge.py | 7 | 67927 | """
SQL-style merge routines
"""
import copy
import warnings
import string
import numpy as np
from pandas.compat import range, lrange, lzip, zip, map, filter
import pandas.compat as compat
from pandas import (Categorical, DataFrame, Series,
Index, MultiIndex, Timedelta)
from pandas.core.categorical import (_factorize_from_iterable,
_factorize_from_iterables)
from pandas.core.frame import _merge_doc
from pandas.types.generic import ABCSeries
from pandas.types.common import (is_datetime64tz_dtype,
is_datetime64_dtype,
needs_i8_conversion,
is_int64_dtype,
is_integer_dtype,
is_float_dtype,
is_integer,
is_int_or_datetime_dtype,
is_dtype_equal,
is_bool,
is_list_like,
_ensure_int64,
_ensure_float64,
_ensure_object,
_get_dtype)
from pandas.types.missing import na_value_for_dtype
from pandas.core.generic import NDFrame
from pandas.core.index import (_get_combined_index,
_ensure_index, _get_consensus_names,
_all_indexes_same)
from pandas.core.internals import (items_overlap_with_suffix,
concatenate_block_managers)
from pandas.util.decorators import Appender, Substitution
import pandas.core.algorithms as algos
import pandas.core.common as com
import pandas.types.concat as _concat
import pandas._join as _join
import pandas.hashtable as _hash
@Substitution('\nleft : DataFrame')
@Appender(_merge_doc, indents=0)
def merge(left, right, how='inner', on=None, left_on=None, right_on=None,
left_index=False, right_index=False, sort=False,
suffixes=('_x', '_y'), copy=True, indicator=False):
op = _MergeOperation(left, right, how=how, on=on, left_on=left_on,
right_on=right_on, left_index=left_index,
right_index=right_index, sort=sort, suffixes=suffixes,
copy=copy, indicator=indicator)
return op.get_result()
if __debug__:
merge.__doc__ = _merge_doc % '\nleft : DataFrame'
class MergeError(ValueError):
pass
def _groupby_and_merge(by, on, left, right, _merge_pieces,
check_duplicates=True):
"""
groupby & merge; we are always performing a left-by type operation
Parameters
----------
by: field to group
on: duplicates field
left: left frame
right: right frame
_merge_pieces: function for merging
check_duplicates: boolean, default True
should we check & clean duplicates
"""
pieces = []
if not isinstance(by, (list, tuple)):
by = [by]
lby = left.groupby(by, sort=False)
# if we can groupby the rhs
# then we can get vastly better perf
try:
# we will check & remove duplicates if indicated
if check_duplicates:
if on is None:
on = []
elif not isinstance(on, (list, tuple)):
on = [on]
if right.duplicated(by + on).any():
right = right.drop_duplicates(by + on, keep='last')
rby = right.groupby(by, sort=False)
except KeyError:
rby = None
for key, lhs in lby:
if rby is None:
rhs = right
else:
try:
rhs = right.take(rby.indices[key])
except KeyError:
# key doesn't exist in left
lcols = lhs.columns.tolist()
cols = lcols + [r for r in right.columns
if r not in set(lcols)]
merged = lhs.reindex(columns=cols)
merged.index = range(len(merged))
pieces.append(merged)
continue
merged = _merge_pieces(lhs, rhs)
# make sure join keys are in the merged
# TODO, should _merge_pieces do this?
for k in by:
try:
if k in merged:
merged[k] = key
except:
pass
pieces.append(merged)
# preserve the original order
# if we have a missing piece this can be reset
result = concat(pieces, ignore_index=True)
result = result.reindex(columns=pieces[0].columns, copy=False)
return result, lby
def ordered_merge(left, right, on=None,
left_on=None, right_on=None,
left_by=None, right_by=None,
fill_method=None, suffixes=('_x', '_y')):
warnings.warn("ordered_merge is deprecated and replaced by merge_ordered",
FutureWarning, stacklevel=2)
return merge_ordered(left, right, on=on,
left_on=left_on, right_on=right_on,
left_by=left_by, right_by=right_by,
fill_method=fill_method, suffixes=suffixes)
def merge_ordered(left, right, on=None,
left_on=None, right_on=None,
left_by=None, right_by=None,
fill_method=None, suffixes=('_x', '_y'),
how='outer'):
"""Perform merge with optional filling/interpolation designed for ordered
data like time series data. Optionally perform group-wise merge (see
examples)
Parameters
----------
left : DataFrame
right : DataFrame
on : label or list
Field names to join on. Must be found in both DataFrames.
left_on : label or list, or array-like
Field names to join on in left DataFrame. Can be a vector or list of
vectors of the length of the DataFrame to use a particular vector as
the join key instead of columns
right_on : label or list, or array-like
Field names to join on in right DataFrame or vector/list of vectors per
left_on docs
left_by : column name or list of column names
Group left DataFrame by group columns and merge piece by piece with
right DataFrame
right_by : column name or list of column names
Group right DataFrame by group columns and merge piece by piece with
left DataFrame
fill_method : {'ffill', None}, default None
Interpolation method for data
suffixes : 2-length sequence (tuple, list, ...)
Suffix to apply to overlapping column names in the left and right
side, respectively
how : {'left', 'right', 'outer', 'inner'}, default 'outer'
* left: use only keys from left frame (SQL: left outer join)
* right: use only keys from right frame (SQL: right outer join)
* outer: use union of keys from both frames (SQL: full outer join)
* inner: use intersection of keys from both frames (SQL: inner join)
.. versionadded:: 0.19.0
Examples
--------
>>> A >>> B
key lvalue group key rvalue
0 a 1 a 0 b 1
1 c 2 a 1 c 2
2 e 3 a 2 d 3
3 a 1 b
4 c 2 b
5 e 3 b
>>> ordered_merge(A, B, fill_method='ffill', left_by='group')
key lvalue group rvalue
0 a 1 a NaN
1 b 1 a 1
2 c 2 a 2
3 d 2 a 3
4 e 3 a 3
5 f 3 a 4
6 a 1 b NaN
7 b 1 b 1
8 c 2 b 2
9 d 2 b 3
10 e 3 b 3
11 f 3 b 4
Returns
-------
merged : DataFrame
The output type will be the same as 'left', if it is a subclass
of DataFrame.
See also
--------
merge
merge_asof
"""
def _merger(x, y):
# perform the ordered merge operation
op = _OrderedMerge(x, y, on=on, left_on=left_on, right_on=right_on,
suffixes=suffixes, fill_method=fill_method,
how=how)
return op.get_result()
if left_by is not None and right_by is not None:
raise ValueError('Can only group either left or right frames')
elif left_by is not None:
result, _ = _groupby_and_merge(left_by, on, left, right,
lambda x, y: _merger(x, y),
check_duplicates=False)
elif right_by is not None:
result, _ = _groupby_and_merge(right_by, on, right, left,
lambda x, y: _merger(y, x),
check_duplicates=False)
else:
result = _merger(left, right)
return result
ordered_merge.__doc__ = merge_ordered.__doc__
def merge_asof(left, right, on=None,
left_on=None, right_on=None,
left_index=False, right_index=False,
by=None, left_by=None, right_by=None,
suffixes=('_x', '_y'),
tolerance=None,
allow_exact_matches=True):
"""Perform an asof merge. This is similar to a left-join except that we
match on nearest key rather than equal keys.
For each row in the left DataFrame, we select the last row in the right
DataFrame whose 'on' key is less than or equal to the left's key. Both
DataFrames must be sorted by the key.
Optionally match on equivalent keys with 'by' before searching for nearest
match with 'on'.
.. versionadded:: 0.19.0
Parameters
----------
left : DataFrame
right : DataFrame
on : label
Field name to join on. Must be found in both DataFrames.
The data MUST be ordered. Furthermore this must be a numeric column,
such as datetimelike, integer, or float. On or left_on/right_on
must be given.
left_on : label
Field name to join on in left DataFrame.
right_on : label
Field name to join on in right DataFrame.
left_index : boolean
Use the index of the left DataFrame as the join key.
.. versionadded:: 0.19.2
right_index : boolean
Use the index of the right DataFrame as the join key.
.. versionadded:: 0.19.2
by : column name or list of column names
Match on these columns before performing merge operation.
left_by : column name
Field names to match on in the left DataFrame.
.. versionadded:: 0.19.2
right_by : column name
Field names to match on in the right DataFrame.
.. versionadded:: 0.19.2
suffixes : 2-length sequence (tuple, list, ...)
Suffix to apply to overlapping column names in the left and right
side, respectively
tolerance : integer or Timedelta, optional, default None
select asof tolerance within this range; must be compatible
with the merge index.
allow_exact_matches : boolean, default True
- If True, allow matching the same 'on' value
(i.e. less-than-or-equal-to)
- If False, don't match the same 'on' value
(i.e., strictly less-than)
Returns
-------
merged : DataFrame
Examples
--------
>>> left
a left_val
0 1 a
1 5 b
2 10 c
>>> right
a right_val
0 1 1
1 2 2
2 3 3
3 6 6
4 7 7
>>> pd.merge_asof(left, right, on='a')
a left_val right_val
0 1 a 1
1 5 b 3
2 10 c 7
>>> pd.merge_asof(left, right, on='a', allow_exact_matches=False)
a left_val right_val
0 1 a NaN
1 5 b 3.0
2 10 c 7.0
For this example, we can achieve a similar result through
``pd.merge_ordered()``, though it's not nearly as performant.
>>> (pd.merge_ordered(left, right, on='a')
... .ffill()
... .drop_duplicates(['left_val'])
... )
a left_val right_val
0 1 a 1.0
3 5 b 3.0
6 10 c 7.0
We can use indexed DataFrames as well.
>>> left
left_val
1 a
5 b
10 c
>>> right
right_val
1 1
2 2
3 3
6 6
7 7
>>> pd.merge_asof(left, right, left_index=True, right_index=True)
left_val right_val
1 a 1
5 b 3
10 c 7
Here is a real-world times-series example
>>> quotes
time ticker bid ask
0 2016-05-25 13:30:00.023 GOOG 720.50 720.93
1 2016-05-25 13:30:00.023 MSFT 51.95 51.96
2 2016-05-25 13:30:00.030 MSFT 51.97 51.98
3 2016-05-25 13:30:00.041 MSFT 51.99 52.00
4 2016-05-25 13:30:00.048 GOOG 720.50 720.93
5 2016-05-25 13:30:00.049 AAPL 97.99 98.01
6 2016-05-25 13:30:00.072 GOOG 720.50 720.88
7 2016-05-25 13:30:00.075 MSFT 52.01 52.03
>>> trades
time ticker price quantity
0 2016-05-25 13:30:00.023 MSFT 51.95 75
1 2016-05-25 13:30:00.038 MSFT 51.95 155
2 2016-05-25 13:30:00.048 GOOG 720.77 100
3 2016-05-25 13:30:00.048 GOOG 720.92 100
4 2016-05-25 13:30:00.048 AAPL 98.00 100
By default we are taking the asof of the quotes
>>> pd.merge_asof(trades, quotes,
... on='time',
... by='ticker')
time ticker price quantity bid ask
0 2016-05-25 13:30:00.023 MSFT 51.95 75 51.95 51.96
1 2016-05-25 13:30:00.038 MSFT 51.95 155 51.97 51.98
2 2016-05-25 13:30:00.048 GOOG 720.77 100 720.50 720.93
3 2016-05-25 13:30:00.048 GOOG 720.92 100 720.50 720.93
4 2016-05-25 13:30:00.048 AAPL 98.00 100 NaN NaN
We only asof within 2ms between the quote time and the trade time
>>> pd.merge_asof(trades, quotes,
... on='time',
... by='ticker',
... tolerance=pd.Timedelta('2ms'))
time ticker price quantity bid ask
0 2016-05-25 13:30:00.023 MSFT 51.95 75 51.95 51.96
1 2016-05-25 13:30:00.038 MSFT 51.95 155 NaN NaN
2 2016-05-25 13:30:00.048 GOOG 720.77 100 720.50 720.93
3 2016-05-25 13:30:00.048 GOOG 720.92 100 720.50 720.93
4 2016-05-25 13:30:00.048 AAPL 98.00 100 NaN NaN
We only asof within 10ms between the quote time and the trade time
and we exclude exact matches on time. However *prior* data will
propagate forward
>>> pd.merge_asof(trades, quotes,
... on='time',
... by='ticker',
... tolerance=pd.Timedelta('10ms'),
... allow_exact_matches=False)
time ticker price quantity bid ask
0 2016-05-25 13:30:00.023 MSFT 51.95 75 NaN NaN
1 2016-05-25 13:30:00.038 MSFT 51.95 155 51.97 51.98
2 2016-05-25 13:30:00.048 GOOG 720.77 100 720.50 720.93
3 2016-05-25 13:30:00.048 GOOG 720.92 100 720.50 720.93
4 2016-05-25 13:30:00.048 AAPL 98.00 100 NaN NaN
See also
--------
merge
merge_ordered
"""
op = _AsOfMerge(left, right,
on=on, left_on=left_on, right_on=right_on,
left_index=left_index, right_index=right_index,
by=by, left_by=left_by, right_by=right_by,
suffixes=suffixes,
how='asof', tolerance=tolerance,
allow_exact_matches=allow_exact_matches)
return op.get_result()
# TODO: transformations??
# TODO: only copy DataFrames when modification necessary
class _MergeOperation(object):
"""
Perform a database (SQL) merge operation between two DataFrame objects
using either columns as keys or their row indexes
"""
_merge_type = 'merge'
def __init__(self, left, right, how='inner', on=None,
left_on=None, right_on=None, axis=1,
left_index=False, right_index=False, sort=True,
suffixes=('_x', '_y'), copy=True, indicator=False):
self.left = self.orig_left = left
self.right = self.orig_right = right
self.how = how
self.axis = axis
self.on = com._maybe_make_list(on)
self.left_on = com._maybe_make_list(left_on)
self.right_on = com._maybe_make_list(right_on)
self.copy = copy
self.suffixes = suffixes
self.sort = sort
self.left_index = left_index
self.right_index = right_index
self.indicator = indicator
if isinstance(self.indicator, compat.string_types):
self.indicator_name = self.indicator
elif isinstance(self.indicator, bool):
self.indicator_name = '_merge' if self.indicator else None
else:
raise ValueError(
'indicator option can only accept boolean or string arguments')
if not isinstance(left, DataFrame):
raise ValueError(
'can not merge DataFrame with instance of '
'type {0}'.format(type(left)))
if not isinstance(right, DataFrame):
raise ValueError(
'can not merge DataFrame with instance of '
'type {0}'.format(type(right)))
if not is_bool(left_index):
raise ValueError(
'left_index parameter must be of type bool, not '
'{0}'.format(type(left_index)))
if not is_bool(right_index):
raise ValueError(
'right_index parameter must be of type bool, not '
'{0}'.format(type(right_index)))
# warn user when merging between different levels
if left.columns.nlevels != right.columns.nlevels:
msg = ('merging between different levels can give an unintended '
'result ({0} levels on the left, {1} on the right)')
msg = msg.format(left.columns.nlevels, right.columns.nlevels)
warnings.warn(msg, UserWarning)
self._validate_specification()
# note this function has side effects
(self.left_join_keys,
self.right_join_keys,
self.join_names) = self._get_merge_keys()
def get_result(self):
if self.indicator:
self.left, self.right = self._indicator_pre_merge(
self.left, self.right)
join_index, left_indexer, right_indexer = self._get_join_info()
ldata, rdata = self.left._data, self.right._data
lsuf, rsuf = self.suffixes
llabels, rlabels = items_overlap_with_suffix(ldata.items, lsuf,
rdata.items, rsuf)
lindexers = {1: left_indexer} if left_indexer is not None else {}
rindexers = {1: right_indexer} if right_indexer is not None else {}
result_data = concatenate_block_managers(
[(ldata, lindexers), (rdata, rindexers)],
axes=[llabels.append(rlabels), join_index],
concat_axis=0, copy=self.copy)
typ = self.left._constructor
result = typ(result_data).__finalize__(self, method=self._merge_type)
if self.indicator:
result = self._indicator_post_merge(result)
self._maybe_add_join_keys(result, left_indexer, right_indexer)
return result
def _indicator_pre_merge(self, left, right):
columns = left.columns.union(right.columns)
for i in ['_left_indicator', '_right_indicator']:
if i in columns:
raise ValueError("Cannot use `indicator=True` option when "
"data contains a column named {}".format(i))
if self.indicator_name in columns:
raise ValueError(
"Cannot use name of an existing column for indicator column")
left = left.copy()
right = right.copy()
left['_left_indicator'] = 1
left['_left_indicator'] = left['_left_indicator'].astype('int8')
right['_right_indicator'] = 2
right['_right_indicator'] = right['_right_indicator'].astype('int8')
return left, right
def _indicator_post_merge(self, result):
result['_left_indicator'] = result['_left_indicator'].fillna(0)
result['_right_indicator'] = result['_right_indicator'].fillna(0)
result[self.indicator_name] = Categorical((result['_left_indicator'] +
result['_right_indicator']),
categories=[1, 2, 3])
result[self.indicator_name] = (
result[self.indicator_name]
.cat.rename_categories(['left_only', 'right_only', 'both']))
result = result.drop(labels=['_left_indicator', '_right_indicator'],
axis=1)
return result
def _maybe_add_join_keys(self, result, left_indexer, right_indexer):
left_has_missing = None
right_has_missing = None
keys = zip(self.join_names, self.left_on, self.right_on)
for i, (name, lname, rname) in enumerate(keys):
if not _should_fill(lname, rname):
continue
take_left, take_right = None, None
if name in result:
if left_indexer is not None and right_indexer is not None:
if name in self.left:
if left_has_missing is None:
left_has_missing = (left_indexer == -1).any()
if left_has_missing:
take_right = self.right_join_keys[i]
if not is_dtype_equal(result[name].dtype,
self.left[name].dtype):
take_left = self.left[name]._values
elif name in self.right:
if right_has_missing is None:
right_has_missing = (right_indexer == -1).any()
if right_has_missing:
take_left = self.left_join_keys[i]
if not is_dtype_equal(result[name].dtype,
self.right[name].dtype):
take_right = self.right[name]._values
elif left_indexer is not None \
and isinstance(self.left_join_keys[i], np.ndarray):
take_left = self.left_join_keys[i]
take_right = self.right_join_keys[i]
if take_left is not None or take_right is not None:
if take_left is None:
lvals = result[name]._values
else:
lfill = na_value_for_dtype(take_left.dtype)
lvals = algos.take_1d(take_left, left_indexer,
fill_value=lfill)
if take_right is None:
rvals = result[name]._values
else:
rfill = na_value_for_dtype(take_right.dtype)
rvals = algos.take_1d(take_right, right_indexer,
fill_value=rfill)
# if we have an all missing left_indexer
# make sure to just use the right values
mask = left_indexer == -1
if mask.all():
key_col = rvals
else:
key_col = Index(lvals).where(~mask, rvals)
if name in result:
result[name] = key_col
else:
result.insert(i, name or 'key_%d' % i, key_col)
def _get_join_indexers(self):
""" return the join indexers """
return _get_join_indexers(self.left_join_keys,
self.right_join_keys,
sort=self.sort,
how=self.how)
def _get_join_info(self):
left_ax = self.left._data.axes[self.axis]
right_ax = self.right._data.axes[self.axis]
if self.left_index and self.right_index and self.how != 'asof':
join_index, left_indexer, right_indexer = \
left_ax.join(right_ax, how=self.how, return_indexers=True)
elif self.right_index and self.how == 'left':
join_index, left_indexer, right_indexer = \
_left_join_on_index(left_ax, right_ax, self.left_join_keys,
sort=self.sort)
elif self.left_index and self.how == 'right':
join_index, right_indexer, left_indexer = \
_left_join_on_index(right_ax, left_ax, self.right_join_keys,
sort=self.sort)
else:
(left_indexer,
right_indexer) = self._get_join_indexers()
if self.right_index:
if len(self.left) > 0:
join_index = self.left.index.take(left_indexer)
else:
join_index = self.right.index.take(right_indexer)
left_indexer = np.array([-1] * len(join_index))
elif self.left_index:
if len(self.right) > 0:
join_index = self.right.index.take(right_indexer)
else:
join_index = self.left.index.take(left_indexer)
right_indexer = np.array([-1] * len(join_index))
else:
join_index = Index(np.arange(len(left_indexer)))
if len(join_index) == 0:
join_index = join_index.astype(object)
return join_index, left_indexer, right_indexer
def _get_merge_data(self):
"""
Handles overlapping column names etc.
"""
ldata, rdata = self.left._data, self.right._data
lsuf, rsuf = self.suffixes
llabels, rlabels = items_overlap_with_suffix(
ldata.items, lsuf, rdata.items, rsuf)
if not llabels.equals(ldata.items):
ldata = ldata.copy(deep=False)
ldata.set_axis(0, llabels)
if not rlabels.equals(rdata.items):
rdata = rdata.copy(deep=False)
rdata.set_axis(0, rlabels)
return ldata, rdata
def _get_merge_keys(self):
"""
Note: has side effects (copy/delete key columns)
Parameters
----------
left
right
on
Returns
-------
left_keys, right_keys
"""
left_keys = []
right_keys = []
join_names = []
right_drop = []
left_drop = []
left, right = self.left, self.right
is_lkey = lambda x: isinstance(
x, (np.ndarray, ABCSeries)) and len(x) == len(left)
is_rkey = lambda x: isinstance(
x, (np.ndarray, ABCSeries)) and len(x) == len(right)
# Note that pd.merge_asof() has separate 'on' and 'by' parameters. A
# user could, for example, request 'left_index' and 'left_by'. In a
# regular pd.merge(), users cannot specify both 'left_index' and
# 'left_on'. (Instead, users have a MultiIndex). That means the
# self.left_on in this function is always empty in a pd.merge(), but
# a pd.merge_asof(left_index=True, left_by=...) will result in a
# self.left_on array with a None in the middle of it. This requires
# a work-around as designated in the code below.
# See _validate_specification() for where this happens.
# ugh, spaghetti re #733
if _any(self.left_on) and _any(self.right_on):
for lk, rk in zip(self.left_on, self.right_on):
if is_lkey(lk):
left_keys.append(lk)
if is_rkey(rk):
right_keys.append(rk)
join_names.append(None) # what to do?
else:
if rk is not None:
right_keys.append(right[rk]._values)
join_names.append(rk)
else:
# work-around for merge_asof(right_index=True)
right_keys.append(right.index)
join_names.append(right.index.name)
else:
if not is_rkey(rk):
if rk is not None:
right_keys.append(right[rk]._values)
else:
# work-around for merge_asof(right_index=True)
right_keys.append(right.index)
if lk is not None and lk == rk:
# avoid key upcast in corner case (length-0)
if len(left) > 0:
right_drop.append(rk)
else:
left_drop.append(lk)
else:
right_keys.append(rk)
if lk is not None:
left_keys.append(left[lk]._values)
join_names.append(lk)
else:
# work-around for merge_asof(left_index=True)
left_keys.append(left.index)
join_names.append(left.index.name)
elif _any(self.left_on):
for k in self.left_on:
if is_lkey(k):
left_keys.append(k)
join_names.append(None)
else:
left_keys.append(left[k]._values)
join_names.append(k)
if isinstance(self.right.index, MultiIndex):
right_keys = [lev._values.take(lab)
for lev, lab in zip(self.right.index.levels,
self.right.index.labels)]
else:
right_keys = [self.right.index.values]
elif _any(self.right_on):
for k in self.right_on:
if is_rkey(k):
right_keys.append(k)
join_names.append(None)
else:
right_keys.append(right[k]._values)
join_names.append(k)
if isinstance(self.left.index, MultiIndex):
left_keys = [lev._values.take(lab)
for lev, lab in zip(self.left.index.levels,
self.left.index.labels)]
else:
left_keys = [self.left.index.values]
if left_drop:
self.left = self.left.drop(left_drop, axis=1)
if right_drop:
self.right = self.right.drop(right_drop, axis=1)
return left_keys, right_keys, join_names
def _validate_specification(self):
# Hm, any way to make this logic less complicated??
if self.on is None and self.left_on is None and self.right_on is None:
if self.left_index and self.right_index:
self.left_on, self.right_on = (), ()
elif self.left_index:
if self.right_on is None:
raise MergeError('Must pass right_on or right_index=True')
elif self.right_index:
if self.left_on is None:
raise MergeError('Must pass left_on or left_index=True')
else:
# use the common columns
common_cols = self.left.columns.intersection(
self.right.columns)
if len(common_cols) == 0:
raise MergeError('No common columns to perform merge on')
if not common_cols.is_unique:
raise MergeError("Data columns not unique: %s"
% repr(common_cols))
self.left_on = self.right_on = common_cols
elif self.on is not None:
if self.left_on is not None or self.right_on is not None:
raise MergeError('Can only pass on OR left_on and '
'right_on')
self.left_on = self.right_on = self.on
elif self.left_on is not None:
n = len(self.left_on)
if self.right_index:
if len(self.left_on) != self.right.index.nlevels:
raise ValueError('len(left_on) must equal the number '
'of levels in the index of "right"')
self.right_on = [None] * n
elif self.right_on is not None:
n = len(self.right_on)
if self.left_index:
if len(self.right_on) != self.left.index.nlevels:
raise ValueError('len(right_on) must equal the number '
'of levels in the index of "left"')
self.left_on = [None] * n
if len(self.right_on) != len(self.left_on):
raise ValueError("len(right_on) must equal len(left_on)")
def _get_join_indexers(left_keys, right_keys, sort=False, how='inner',
**kwargs):
"""
Parameters
----------
Returns
-------
"""
from functools import partial
assert len(left_keys) == len(right_keys), \
'left_key and right_keys must be the same length'
# bind `sort` arg. of _factorize_keys
fkeys = partial(_factorize_keys, sort=sort)
# get left & right join labels and num. of levels at each location
llab, rlab, shape = map(list, zip(* map(fkeys, left_keys, right_keys)))
# get flat i8 keys from label lists
lkey, rkey = _get_join_keys(llab, rlab, shape, sort)
# factorize keys to a dense i8 space
# `count` is the num. of unique keys
# set(lkey) | set(rkey) == range(count)
lkey, rkey, count = fkeys(lkey, rkey)
# preserve left frame order if how == 'left' and sort == False
kwargs = copy.copy(kwargs)
if how == 'left':
kwargs['sort'] = sort
join_func = _join_functions[how]
return join_func(lkey, rkey, count, **kwargs)
class _OrderedMerge(_MergeOperation):
_merge_type = 'ordered_merge'
def __init__(self, left, right, on=None, left_on=None, right_on=None,
left_index=False, right_index=False, axis=1,
suffixes=('_x', '_y'), copy=True,
fill_method=None, how='outer'):
self.fill_method = fill_method
_MergeOperation.__init__(self, left, right, on=on, left_on=left_on,
left_index=left_index,
right_index=right_index,
right_on=right_on, axis=axis,
how=how, suffixes=suffixes,
sort=True # factorize sorts
)
def get_result(self):
join_index, left_indexer, right_indexer = self._get_join_info()
# this is a bit kludgy
ldata, rdata = self.left._data, self.right._data
lsuf, rsuf = self.suffixes
llabels, rlabels = items_overlap_with_suffix(ldata.items, lsuf,
rdata.items, rsuf)
if self.fill_method == 'ffill':
left_join_indexer = _join.ffill_indexer(left_indexer)
right_join_indexer = _join.ffill_indexer(right_indexer)
else:
left_join_indexer = left_indexer
right_join_indexer = right_indexer
lindexers = {
1: left_join_indexer} if left_join_indexer is not None else {}
rindexers = {
1: right_join_indexer} if right_join_indexer is not None else {}
result_data = concatenate_block_managers(
[(ldata, lindexers), (rdata, rindexers)],
axes=[llabels.append(rlabels), join_index],
concat_axis=0, copy=self.copy)
typ = self.left._constructor
result = typ(result_data).__finalize__(self, method=self._merge_type)
self._maybe_add_join_keys(result, left_indexer, right_indexer)
return result
def _asof_function(on_type):
return getattr(_join, 'asof_join_%s' % on_type, None)
def _asof_by_function(on_type, by_type):
return getattr(_join, 'asof_join_%s_by_%s' % (on_type, by_type), None)
_type_casters = {
'int64_t': _ensure_int64,
'double': _ensure_float64,
'object': _ensure_object,
}
_cython_types = {
'uint8': 'uint8_t',
'uint32': 'uint32_t',
'uint16': 'uint16_t',
'uint64': 'uint64_t',
'int8': 'int8_t',
'int32': 'int32_t',
'int16': 'int16_t',
'int64': 'int64_t',
'float16': 'error',
'float32': 'float',
'float64': 'double',
}
def _get_cython_type(dtype):
""" Given a dtype, return a C name like 'int64_t' or 'double' """
type_name = _get_dtype(dtype).name
ctype = _cython_types.get(type_name, 'object')
if ctype == 'error':
raise MergeError('unsupported type: ' + type_name)
return ctype
def _get_cython_type_upcast(dtype):
""" Upcast a dtype to 'int64_t', 'double', or 'object' """
if is_integer_dtype(dtype):
return 'int64_t'
elif is_float_dtype(dtype):
return 'double'
else:
return 'object'
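The two helpers above drive the Cython dispatch: `_get_cython_type` maps a dtype name to a C type name, while `_get_cython_type_upcast` collapses every key dtype down to the three types the hash table supports. A minimal pure-Python sketch of the upcast rule (it takes dtype-name strings rather than real dtype objects, an assumption for illustration only):

```python
def upcast_for_hashtable(dtype_name):
    # mirror _get_cython_type_upcast: any integer width widens to int64_t,
    # any float width widens to double, everything else is boxed as object
    if dtype_name.startswith(('uint', 'int')):
        return 'int64_t'
    if dtype_name.startswith('float'):
        return 'double'
    return 'object'

print(upcast_for_hashtable('int8'))     # int64_t
print(upcast_for_hashtable('float32'))  # double
print(upcast_for_hashtable('object'))   # object
```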
class _AsOfMerge(_OrderedMerge):
_merge_type = 'asof_merge'
def __init__(self, left, right, on=None, left_on=None, right_on=None,
left_index=False, right_index=False,
by=None, left_by=None, right_by=None,
axis=1, suffixes=('_x', '_y'), copy=True,
fill_method=None,
how='asof', tolerance=None,
allow_exact_matches=True):
self.by = by
self.left_by = left_by
self.right_by = right_by
self.tolerance = tolerance
self.allow_exact_matches = allow_exact_matches
_OrderedMerge.__init__(self, left, right, on=on, left_on=left_on,
right_on=right_on, left_index=left_index,
right_index=right_index, axis=axis,
how=how, suffixes=suffixes,
fill_method=fill_method)
def _validate_specification(self):
super(_AsOfMerge, self)._validate_specification()
# we only allow on to be a single item for on
if len(self.left_on) != 1 and not self.left_index:
raise MergeError("can only asof on a key for left")
if len(self.right_on) != 1 and not self.right_index:
raise MergeError("can only asof on a key for right")
if self.left_index and isinstance(self.left.index, MultiIndex):
raise MergeError("left can only have one index")
if self.right_index and isinstance(self.right.index, MultiIndex):
raise MergeError("right can only have one index")
# set 'by' columns
if self.by is not None:
if self.left_by is not None or self.right_by is not None:
raise MergeError('Can only pass by OR left_by '
'and right_by')
self.left_by = self.right_by = self.by
if self.left_by is None and self.right_by is not None:
raise MergeError('missing left_by')
if self.left_by is not None and self.right_by is None:
raise MergeError('missing right_by')
# add by to our key-list so we can have it in the
# output as a key
if self.left_by is not None:
if not is_list_like(self.left_by):
self.left_by = [self.left_by]
if not is_list_like(self.right_by):
self.right_by = [self.right_by]
self.left_on = self.left_by + list(self.left_on)
self.right_on = self.right_by + list(self.right_on)
@property
def _asof_key(self):
""" This is our asof key, the 'on' """
return self.left_on[-1]
def _get_merge_keys(self):
# note this function has side effects
(left_join_keys,
right_join_keys,
join_names) = super(_AsOfMerge, self)._get_merge_keys()
# validate index types are the same
for lk, rk in zip(left_join_keys, right_join_keys):
if not is_dtype_equal(lk.dtype, rk.dtype):
raise MergeError("incompatible merge keys, "
"must be the same type")
# validate tolerance; must be a Timedelta if we have a DTI
if self.tolerance is not None:
lt = left_join_keys[-1]
msg = "incompatible tolerance, must be compat " \
"with type {0}".format(type(lt))
if is_datetime64_dtype(lt) or is_datetime64tz_dtype(lt):
if not isinstance(self.tolerance, Timedelta):
raise MergeError(msg)
if self.tolerance < Timedelta(0):
raise MergeError("tolerance must be positive")
elif is_int64_dtype(lt):
if not is_integer(self.tolerance):
raise MergeError(msg)
if self.tolerance < 0:
raise MergeError("tolerance must be positive")
else:
raise MergeError("key must be integer or timestamp")
# validate allow_exact_matches
if not is_bool(self.allow_exact_matches):
raise MergeError("allow_exact_matches must be boolean, "
"passed {0}".format(self.allow_exact_matches))
return left_join_keys, right_join_keys, join_names
def _get_join_indexers(self):
""" return the join indexers """
def flip(xs):
""" unlike np.transpose, this returns an array of tuples """
labels = list(string.ascii_lowercase[:len(xs)])
dtypes = [x.dtype for x in xs]
labeled_dtypes = list(zip(labels, dtypes))
return np.array(lzip(*xs), labeled_dtypes)
# values to compare
left_values = (self.left.index.values if self.left_index else
self.left_join_keys[-1])
right_values = (self.right.index.values if self.right_index else
self.right_join_keys[-1])
tolerance = self.tolerance
        # we require sortedness in the join keys
msg = " keys must be sorted"
if not Index(left_values).is_monotonic:
raise ValueError('left' + msg)
if not Index(right_values).is_monotonic:
raise ValueError('right' + msg)
# initial type conversion as needed
if needs_i8_conversion(left_values):
left_values = left_values.view('i8')
right_values = right_values.view('i8')
if tolerance is not None:
tolerance = tolerance.value
# a "by" parameter requires special handling
if self.left_by is not None:
if len(self.left_join_keys) > 2:
# get tuple representation of values if more than one
left_by_values = flip(self.left_join_keys[0:-1])
right_by_values = flip(self.right_join_keys[0:-1])
else:
left_by_values = self.left_join_keys[0]
right_by_values = self.right_join_keys[0]
# upcast 'by' parameter because HashTable is limited
by_type = _get_cython_type_upcast(left_by_values.dtype)
by_type_caster = _type_casters[by_type]
left_by_values = by_type_caster(left_by_values)
right_by_values = by_type_caster(right_by_values)
# choose appropriate function by type
on_type = _get_cython_type(left_values.dtype)
func = _asof_by_function(on_type, by_type)
return func(left_values,
right_values,
left_by_values,
right_by_values,
self.allow_exact_matches,
tolerance)
else:
# choose appropriate function by type
on_type = _get_cython_type(left_values.dtype)
func = _asof_function(on_type)
return func(left_values,
right_values,
self.allow_exact_matches,
tolerance)
def _get_multiindex_indexer(join_keys, index, sort):
from functools import partial
# bind `sort` argument
fkeys = partial(_factorize_keys, sort=sort)
# left & right join labels and num. of levels at each location
rlab, llab, shape = map(list, zip(* map(fkeys, index.levels, join_keys)))
if sort:
rlab = list(map(np.take, rlab, index.labels))
else:
i8copy = lambda a: a.astype('i8', subok=False, copy=True)
rlab = list(map(i8copy, index.labels))
# fix right labels if there were any nulls
for i in range(len(join_keys)):
mask = index.labels[i] == -1
if mask.any():
# check if there already was any nulls at this location
# if there was, it is factorized to `shape[i] - 1`
a = join_keys[i][llab[i] == shape[i] - 1]
if a.size == 0 or not a[0] != a[0]:
shape[i] += 1
rlab[i][mask] = shape[i] - 1
# get flat i8 join keys
lkey, rkey = _get_join_keys(llab, rlab, shape, sort)
# factorize keys to a dense i8 space
lkey, rkey, count = fkeys(lkey, rkey)
return _join.left_outer_join(lkey, rkey, count, sort=sort)
def _get_single_indexer(join_key, index, sort=False):
left_key, right_key, count = _factorize_keys(join_key, index, sort=sort)
left_indexer, right_indexer = _join.left_outer_join(
_ensure_int64(left_key),
_ensure_int64(right_key),
count, sort=sort)
return left_indexer, right_indexer
def _left_join_on_index(left_ax, right_ax, join_keys, sort=False):
if len(join_keys) > 1:
if not ((isinstance(right_ax, MultiIndex) and
len(join_keys) == right_ax.nlevels)):
raise AssertionError("If more than one join key is given then "
"'right_ax' must be a MultiIndex and the "
"number of join keys must be the number of "
"levels in right_ax")
left_indexer, right_indexer = \
_get_multiindex_indexer(join_keys, right_ax, sort=sort)
else:
jkey = join_keys[0]
left_indexer, right_indexer = \
_get_single_indexer(jkey, right_ax, sort=sort)
if sort or len(left_ax) != len(left_indexer):
# if asked to sort or there are 1-to-many matches
join_index = left_ax.take(left_indexer)
return join_index, left_indexer, right_indexer
# left frame preserves order & length of its index
return left_ax, None, right_indexer
def _right_outer_join(x, y, max_groups):
right_indexer, left_indexer = _join.left_outer_join(y, x, max_groups)
return left_indexer, right_indexer
_join_functions = {
'inner': _join.inner_join,
'left': _join.left_outer_join,
'right': _right_outer_join,
'outer': _join.full_outer_join,
}
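`_right_outer_join` shows the trick the dispatch table relies on: a right join is just a left join with the frames swapped and the returned indexers swapped back. A toy pure-Python version (hypothetical helper names, not the compiled `_join` routines):

```python
def left_outer_join(lkeys, rkeys):
    # for every left key, emit one row per matching right position, or -1
    positions = {}
    for j, key in enumerate(rkeys):
        positions.setdefault(key, []).append(j)
    lidx, ridx = [], []
    for i, key in enumerate(lkeys):
        for j in positions.get(key, [-1]):
            lidx.append(i)
            ridx.append(j)
    return lidx, ridx

def right_outer_join(lkeys, rkeys):
    # swap the inputs, then swap the indexers back
    ridx, lidx = left_outer_join(rkeys, lkeys)
    return lidx, ridx

print(right_outer_join(['a'], ['a', 'b']))  # ([0, -1], [0, 1])
```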
def _factorize_keys(lk, rk, sort=True):
if is_datetime64tz_dtype(lk) and is_datetime64tz_dtype(rk):
lk = lk.values
rk = rk.values
if is_int_or_datetime_dtype(lk) and is_int_or_datetime_dtype(rk):
klass = _hash.Int64Factorizer
lk = _ensure_int64(com._values_from_object(lk))
rk = _ensure_int64(com._values_from_object(rk))
else:
klass = _hash.Factorizer
lk = _ensure_object(lk)
rk = _ensure_object(rk)
rizer = klass(max(len(lk), len(rk)))
llab = rizer.factorize(lk)
rlab = rizer.factorize(rk)
count = rizer.get_count()
if sort:
uniques = rizer.uniques.to_array()
llab, rlab = _sort_labels(uniques, llab, rlab)
# NA group
lmask = llab == -1
lany = lmask.any()
rmask = rlab == -1
rany = rmask.any()
if lany or rany:
if lany:
np.putmask(llab, lmask, count)
if rany:
np.putmask(rlab, rmask, count)
count += 1
return llab, rlab, count
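`_factorize_keys` encodes both key arrays into one shared dense integer space so the compiled join routines only ever compare int64 labels. A minimal sketch of that shared encoding (ignoring the sort and NA-group handling above):

```python
def factorize_pair(lk, rk):
    # assign each distinct key the next dense code, shared across both sides
    codes = {}

    def encode(values):
        out = []
        for v in values:
            if v not in codes:
                codes[v] = len(codes)
            out.append(codes[v])
        return out

    llab = encode(lk)
    rlab = encode(rk)
    return llab, rlab, len(codes)

print(factorize_pair(['a', 'b', 'a'], ['b', 'c']))  # ([0, 1, 0], [1, 2], 3)
```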
def _sort_labels(uniques, left, right):
if not isinstance(uniques, np.ndarray):
# tuplesafe
uniques = Index(uniques).values
l = len(left)
labels = np.concatenate([left, right])
_, new_labels = algos.safe_sort(uniques, labels, na_sentinel=-1)
new_labels = _ensure_int64(new_labels)
new_left, new_right = new_labels[:l], new_labels[l:]
return new_left, new_right
def _get_join_keys(llab, rlab, shape, sort):
from pandas.core.groupby import _int64_overflow_possible
# how many levels can be done without overflow
pred = lambda i: not _int64_overflow_possible(shape[:i])
nlev = next(filter(pred, range(len(shape), 0, -1)))
# get keys for the first `nlev` levels
stride = np.prod(shape[1:nlev], dtype='i8')
lkey = stride * llab[0].astype('i8', subok=False, copy=False)
rkey = stride * rlab[0].astype('i8', subok=False, copy=False)
for i in range(1, nlev):
stride //= shape[i]
lkey += llab[i] * stride
rkey += rlab[i] * stride
if nlev == len(shape): # all done!
return lkey, rkey
# densify current keys to avoid overflow
lkey, rkey, count = _factorize_keys(lkey, rkey, sort=sort)
llab = [lkey] + llab[nlev:]
rlab = [rkey] + rlab[nlev:]
shape = [count] + shape[nlev:]
return _get_join_keys(llab, rlab, shape, sort)
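`_get_join_keys` packs the per-level label arrays into a single flat int64 key using mixed-radix arithmetic: the first level is scaled by the product of the remaining level sizes, and each later level by a shrinking stride. A pure-Python sketch of the no-overflow path (plain lists instead of ndarrays):

```python
def flat_join_keys(labels, shape):
    # mixed-radix packing: key = lab0*prod(shape[1:]) + lab1*prod(shape[2:]) + ...
    stride = 1
    for size in shape[1:]:
        stride *= size
    keys = [lab * stride for lab in labels[0]]
    for level in range(1, len(shape)):
        stride //= shape[level]
        keys = [k + labels[level][i] * stride for i, k in enumerate(keys)]
    return keys

# two levels with sizes (2, 3): key = 3*lab0 + lab1
print(flat_join_keys([[0, 1, 1], [2, 0, 1]], [2, 3]))  # [2, 3, 4]
```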
# ---------------------------------------------------------------------
# Concatenate DataFrame objects
def concat(objs, axis=0, join='outer', join_axes=None, ignore_index=False,
keys=None, levels=None, names=None, verify_integrity=False,
copy=True):
"""
Concatenate pandas objects along a particular axis with optional set logic
along the other axes. Can also add a layer of hierarchical indexing on the
concatenation axis, which may be useful if the labels are the same (or
overlapping) on the passed axis number
Parameters
----------
objs : a sequence or mapping of Series, DataFrame, or Panel objects
If a dict is passed, the sorted keys will be used as the `keys`
argument, unless it is passed, in which case the values will be
selected (see below). Any None objects will be dropped silently unless
they are all None in which case a ValueError will be raised
axis : {0/'index', 1/'columns'}, default 0
The axis to concatenate along
join : {'inner', 'outer'}, default 'outer'
How to handle indexes on other axis(es)
join_axes : list of Index objects
Specific indexes to use for the other n - 1 axes instead of performing
inner/outer set logic
ignore_index : boolean, default False
If True, do not use the index values along the concatenation axis. The
resulting axis will be labeled 0, ..., n - 1. This is useful if you are
concatenating objects where the concatenation axis does not have
meaningful indexing information. Note the index values on the other
axes are still respected in the join.
keys : sequence, default None
If multiple levels passed, should contain tuples. Construct
hierarchical index using the passed keys as the outermost level
levels : list of sequences, default None
Specific levels (unique values) to use for constructing a
MultiIndex. Otherwise they will be inferred from the keys
names : list, default None
Names for the levels in the resulting hierarchical index
verify_integrity : boolean, default False
Check whether the new concatenated axis contains duplicates. This can
be very expensive relative to the actual data concatenation
copy : boolean, default True
If False, do not copy data unnecessarily
Notes
-----
The keys, levels, and names arguments are all optional
Returns
-------
concatenated : type of objects
"""
op = _Concatenator(objs, axis=axis, join_axes=join_axes,
ignore_index=ignore_index, join=join,
keys=keys, levels=levels, names=names,
verify_integrity=verify_integrity,
copy=copy)
return op.get_result()
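On the axes other than the concatenation axis, `join='outer'` unions the labels while `join='inner'` intersects them (this is what `_get_comb_axis` delegates to). A minimal first-seen-order sketch of that combination — note that the real Index set operations have their own ordering rules, so this is only an approximation:

```python
def combine_axes(indexes, intersect=False):
    # union (outer) or intersection (inner) of label lists,
    # preserving first-seen order
    if intersect:
        common = set(indexes[0])
        for idx in indexes[1:]:
            common &= set(idx)
        return [label for label in indexes[0] if label in common]
    seen, out = set(), []
    for idx in indexes:
        for label in idx:
            if label not in seen:
                seen.add(label)
                out.append(label)
    return out

print(combine_axes([['a', 'b'], ['b', 'c']]))                  # ['a', 'b', 'c']
print(combine_axes([['a', 'b'], ['b', 'c']], intersect=True))  # ['b']
```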
class _Concatenator(object):
"""
Orchestrates a concatenation operation for BlockManagers
"""
def __init__(self, objs, axis=0, join='outer', join_axes=None,
keys=None, levels=None, names=None,
ignore_index=False, verify_integrity=False, copy=True):
if isinstance(objs, (NDFrame, compat.string_types)):
raise TypeError('first argument must be an iterable of pandas '
'objects, you passed an object of type '
'"{0}"'.format(type(objs).__name__))
if join == 'outer':
self.intersect = False
elif join == 'inner':
self.intersect = True
else: # pragma: no cover
raise ValueError('Only can inner (intersect) or outer (union) '
'join the other axis')
if isinstance(objs, dict):
if keys is None:
keys = sorted(objs)
objs = [objs[k] for k in keys]
else:
objs = list(objs)
if len(objs) == 0:
raise ValueError('No objects to concatenate')
if keys is None:
objs = [obj for obj in objs if obj is not None]
else:
# #1649
clean_keys = []
clean_objs = []
for k, v in zip(keys, objs):
if v is None:
continue
clean_keys.append(k)
clean_objs.append(v)
objs = clean_objs
name = getattr(keys, 'name', None)
keys = Index(clean_keys, name=name)
if len(objs) == 0:
raise ValueError('All objects passed were None')
# consolidate data & figure out what our result ndim is going to be
ndims = set()
for obj in objs:
if not isinstance(obj, NDFrame):
raise TypeError("cannot concatenate a non-NDFrame object")
# consolidate
obj.consolidate(inplace=True)
ndims.add(obj.ndim)
# get the sample
        # want the highest ndim that we have, and must be non-empty
# unless all objs are empty
sample = None
if len(ndims) > 1:
max_ndim = max(ndims)
for obj in objs:
if obj.ndim == max_ndim and np.sum(obj.shape):
sample = obj
break
else:
            # filter out the empties if we have no multi-index possibilities
            # note: keep empty Series, as they affect the result columns / name
non_empties = [obj for obj in objs
if sum(obj.shape) > 0 or isinstance(obj, Series)]
if (len(non_empties) and (keys is None and names is None and
levels is None and join_axes is None)):
objs = non_empties
sample = objs[0]
if sample is None:
sample = objs[0]
self.objs = objs
# Standardize axis parameter to int
if isinstance(sample, Series):
axis = DataFrame()._get_axis_number(axis)
else:
axis = sample._get_axis_number(axis)
# Need to flip BlockManager axis in the DataFrame special case
self._is_frame = isinstance(sample, DataFrame)
if self._is_frame:
axis = 1 if axis == 0 else 0
self._is_series = isinstance(sample, ABCSeries)
if not 0 <= axis <= sample.ndim:
raise AssertionError("axis must be between 0 and {0}, "
"input was {1}".format(sample.ndim, axis))
# if we have mixed ndims, then convert to highest ndim
# creating column numbers as needed
if len(ndims) > 1:
current_column = 0
max_ndim = sample.ndim
self.objs, objs = [], self.objs
for obj in objs:
ndim = obj.ndim
if ndim == max_ndim:
pass
elif ndim != max_ndim - 1:
raise ValueError("cannot concatenate unaligned mixed "
"dimensional NDFrame objects")
else:
name = getattr(obj, 'name', None)
if ignore_index or name is None:
name = current_column
current_column += 1
# doing a row-wise concatenation so need everything
# to line up
if self._is_frame and axis == 1:
name = 0
obj = sample._constructor({name: obj})
self.objs.append(obj)
# note: this is the BlockManager axis (since DataFrame is transposed)
self.axis = axis
self.join_axes = join_axes
self.keys = keys
self.names = names or getattr(keys, 'names', None)
self.levels = levels
self.ignore_index = ignore_index
self.verify_integrity = verify_integrity
self.copy = copy
self.new_axes = self._get_new_axes()
def get_result(self):
# series only
if self._is_series:
# stack blocks
if self.axis == 0:
# concat Series with length to keep dtype as much
non_empties = [x for x in self.objs if len(x) > 0]
if len(non_empties) > 0:
values = [x._values for x in non_empties]
else:
values = [x._values for x in self.objs]
new_data = _concat._concat_compat(values)
name = com._consensus_name_attr(self.objs)
cons = _concat._get_series_result_type(new_data)
return (cons(new_data, index=self.new_axes[0],
name=name, dtype=new_data.dtype)
.__finalize__(self, method='concat'))
# combine as columns in a frame
else:
data = dict(zip(range(len(self.objs)), self.objs))
cons = _concat._get_series_result_type(data)
index, columns = self.new_axes
df = cons(data, index=index)
df.columns = columns
return df.__finalize__(self, method='concat')
# combine block managers
else:
mgrs_indexers = []
for obj in self.objs:
mgr = obj._data
indexers = {}
for ax, new_labels in enumerate(self.new_axes):
if ax == self.axis:
# Suppress reindexing on concat axis
continue
obj_labels = mgr.axes[ax]
if not new_labels.equals(obj_labels):
indexers[ax] = obj_labels.reindex(new_labels)[1]
mgrs_indexers.append((obj._data, indexers))
new_data = concatenate_block_managers(
mgrs_indexers, self.new_axes, concat_axis=self.axis,
copy=self.copy)
if not self.copy:
new_data._consolidate_inplace()
cons = _concat._get_frame_result_type(new_data, self.objs)
return (cons._from_axes(new_data, self.new_axes)
.__finalize__(self, method='concat'))
def _get_result_dim(self):
if self._is_series and self.axis == 1:
return 2
else:
return self.objs[0].ndim
def _get_new_axes(self):
ndim = self._get_result_dim()
new_axes = [None] * ndim
if self.join_axes is None:
for i in range(ndim):
if i == self.axis:
continue
new_axes[i] = self._get_comb_axis(i)
else:
if len(self.join_axes) != ndim - 1:
raise AssertionError("length of join_axes must be "
"equal to {0}".format(ndim - 1))
# ufff...
indices = lrange(ndim)
indices.remove(self.axis)
for i, ax in zip(indices, self.join_axes):
new_axes[i] = ax
new_axes[self.axis] = self._get_concat_axis()
return new_axes
def _get_comb_axis(self, i):
if self._is_series:
all_indexes = [x.index for x in self.objs]
else:
try:
all_indexes = [x._data.axes[i] for x in self.objs]
except IndexError:
types = [type(x).__name__ for x in self.objs]
raise TypeError("Cannot concatenate list of %s" % types)
return _get_combined_index(all_indexes, intersect=self.intersect)
def _get_concat_axis(self):
"""
Return index to be used along concatenation axis.
"""
if self._is_series:
if self.axis == 0:
indexes = [x.index for x in self.objs]
elif self.ignore_index:
idx = com._default_index(len(self.objs))
return idx
elif self.keys is None:
names = [None] * len(self.objs)
num = 0
has_names = False
for i, x in enumerate(self.objs):
if not isinstance(x, Series):
raise TypeError("Cannot concatenate type 'Series' "
"with object of type "
"%r" % type(x).__name__)
if x.name is not None:
names[i] = x.name
has_names = True
else:
names[i] = num
num += 1
if has_names:
return Index(names)
else:
return com._default_index(len(self.objs))
else:
return _ensure_index(self.keys)
else:
indexes = [x._data.axes[self.axis] for x in self.objs]
if self.ignore_index:
idx = com._default_index(sum(len(i) for i in indexes))
return idx
if self.keys is None:
concat_axis = _concat_indexes(indexes)
else:
concat_axis = _make_concat_multiindex(indexes, self.keys,
self.levels, self.names)
self._maybe_check_integrity(concat_axis)
return concat_axis
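For the Series case above, concat-axis labels come from the Series names when any exist, with unnamed entries falling back to a running position. A stdlib sketch of that fallback logic (the helper name is illustrative, not pandas API):

```python
def concat_axis_names(series_names):
    # series_names: the .name attribute of each input Series (None if unnamed)
    names, num, has_names = [], 0, False
    for name in series_names:
        if name is not None:
            names.append(name)
            has_names = True
        else:
            names.append(num)  # unnamed Series gets its running position
            num += 1
    # If nothing was named, fall back to a default positional index.
    return names if has_names else list(range(len(series_names)))

print(concat_axis_names(['x', None, 'y', None]))  # ['x', 0, 'y', 1]
print(concat_axis_names([None, None]))            # [0, 1]
```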
def _maybe_check_integrity(self, concat_index):
if self.verify_integrity:
if not concat_index.is_unique:
overlap = concat_index.get_duplicates()
raise ValueError('Indexes have overlapping values: %s'
% str(overlap))
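`_maybe_check_integrity` reduces to a uniqueness check over the concatenated index. A minimal stdlib sketch of that check, with a plain list standing in for the index (the helper name is illustrative, not pandas API):

```python
def check_unique(index):
    """Raise ValueError listing duplicated labels, mimicking verify_integrity."""
    seen, overlap = set(), []
    for label in index:
        if label in seen and label not in overlap:
            overlap.append(label)
        seen.add(label)
    if overlap:
        raise ValueError('Indexes have overlapping values: %s' % overlap)

check_unique(['a', 'b', 'c'])  # unique index: passes silently
try:
    check_unique(['a', 'b', 'a', 'c', 'b'])
except ValueError as err:
    print(err)  # Indexes have overlapping values: ['a', 'b']
```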
def _concat_indexes(indexes):
return indexes[0].append(indexes[1:])
def _make_concat_multiindex(indexes, keys, levels=None, names=None):
if ((levels is None and isinstance(keys[0], tuple)) or
(levels is not None and len(levels) > 1)):
zipped = lzip(*keys)
if names is None:
names = [None] * len(zipped)
if levels is None:
_, levels = _factorize_from_iterables(zipped)
else:
levels = [_ensure_index(x) for x in levels]
else:
zipped = [keys]
if names is None:
names = [None]
if levels is None:
levels = [_ensure_index(keys)]
else:
levels = [_ensure_index(x) for x in levels]
if not _all_indexes_same(indexes):
label_list = []
# things are potentially different sizes, so compute the exact labels
# for each level and pass those to MultiIndex.from_arrays
for hlevel, level in zip(zipped, levels):
to_concat = []
for key, index in zip(hlevel, indexes):
try:
i = level.get_loc(key)
except KeyError:
raise ValueError('Key %s not in level %s'
% (str(key), str(level)))
to_concat.append(np.repeat(i, len(index)))
label_list.append(np.concatenate(to_concat))
concat_index = _concat_indexes(indexes)
# these go at the end
if isinstance(concat_index, MultiIndex):
levels.extend(concat_index.levels)
label_list.extend(concat_index.labels)
else:
codes, categories = _factorize_from_iterable(concat_index)
levels.append(categories)
label_list.append(codes)
if len(names) == len(levels):
names = list(names)
else:
# make sure that all of the passed indices have the same nlevels
if not len(set([idx.nlevels for idx in indexes])) == 1:
raise AssertionError("Cannot concat indices that do"
" not have the same number of levels")
# also copies
names = names + _get_consensus_names(indexes)
return MultiIndex(levels=levels, labels=label_list, names=names,
verify_integrity=False)
new_index = indexes[0]
n = len(new_index)
kpieces = len(indexes)
# also copies
new_names = list(names)
new_levels = list(levels)
# construct labels
new_labels = []
# do something a bit more speedy
for hlevel, level in zip(zipped, levels):
hlevel = _ensure_index(hlevel)
mapped = level.get_indexer(hlevel)
mask = mapped == -1
if mask.any():
raise ValueError('Values not found in passed level: %s'
% str(hlevel[mask]))
new_labels.append(np.repeat(mapped, n))
if isinstance(new_index, MultiIndex):
new_levels.extend(new_index.levels)
new_labels.extend([np.tile(lab, kpieces) for lab in new_index.labels])
else:
new_levels.append(new_index)
new_labels.append(np.tile(np.arange(n), kpieces))
if len(new_names) < len(new_levels):
new_names.extend(new_index.names)
return MultiIndex(levels=new_levels, labels=new_labels, names=new_names,
verify_integrity=False)
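In the fast path above, the outer-level labels come from `np.repeat(mapped, n)` and the inner-level labels from `np.tile(..., kpieces)`. A stdlib sketch of that repeat/tile pattern with plain lists (helper names are illustrative):

```python
def repeat(values, n):
    # np.repeat(values, n): each element repeated n times, in order
    return [v for v in values for _ in range(n)]

def tile(values, k):
    # np.tile(values, k): the whole sequence repeated k times
    return values * k

keys, n = [0, 1], 3                       # two pieces, each of length 3
outer = repeat(keys, n)                   # [0, 0, 0, 1, 1, 1]
inner = tile(list(range(n)), len(keys))   # [0, 1, 2, 0, 1, 2]
print(list(zip(outer, inner)))
```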
def _should_fill(lname, rname):
if (not isinstance(lname, compat.string_types) or
not isinstance(rname, compat.string_types)):
return True
return lname == rname
def _any(x):
return x is not None and len(x) > 0 and any([y is not None for y in x])
| mit |
raman-sharma/pyAudioAnalysis | data/testComputational.py | 5 | 3609 | import sys
from pyAudioAnalysis import audioBasicIO
from pyAudioAnalysis import audioFeatureExtraction
from pyAudioAnalysis import audioTrainTest as aT
from pyAudioAnalysis import audioSegmentation as aS
import matplotlib.pyplot as plt
import time
nExp = 4
def main(argv):
if argv[1] == "-shortTerm":
for i in range(nExp):
[Fs, x] = audioBasicIO.readAudioFile("diarizationExample.wav");
duration = x.shape[0] / float(Fs)
t1 = time.clock()
F = audioFeatureExtraction.stFeatureExtraction(x, Fs, 0.050*Fs, 0.050*Fs);
t2 = time.clock()
perTime1 = duration / (t2-t1); print "short-term feature extraction: {0:.1f} x realtime".format(perTime1)
elif argv[1] == "-classifyFile":
for i in range(nExp):
[Fs, x] = audioBasicIO.readAudioFile("diarizationExample.wav");
duration = x.shape[0] / float(Fs)
t1 = time.clock()
aT.fileClassification("diarizationExample.wav", "svmSM","svm")
t2 = time.clock()
perTime1 = duration / (t2-t1); print "Mid-term feature extraction + classification \t {0:.1f} x realtime".format(perTime1)
elif argv[1] == "-mtClassify":
for i in range(nExp):
[Fs, x] = audioBasicIO.readAudioFile("diarizationExample.wav");
duration = x.shape[0] / float(Fs)
t1 = time.clock()
[flagsInd, classesAll, acc] = aS.mtFileClassification("diarizationExample.wav", "svmSM", "svm", False, '')
t2 = time.clock()
perTime1 = duration / (t2-t1); print "Fix-sized classification - segmentation \t {0:.1f} x realtime".format(perTime1)
elif argv[1] == "-hmmSegmentation":
for i in range(nExp):
[Fs, x] = audioBasicIO.readAudioFile("diarizationExample.wav");
duration = x.shape[0] / float(Fs)
t1 = time.clock()
aS.hmmSegmentation('diarizationExample.wav', 'hmmRadioSM', False, '')
t2 = time.clock()
perTime1 = duration / (t2-t1); print "HMM-based classification - segmentation \t {0:.1f} x realtime".format(perTime1)
elif argv[1] == "-silenceRemoval":
for i in range(nExp):
[Fs, x] = audioBasicIO.readAudioFile("diarizationExample.wav");
duration = x.shape[0] / float(Fs)
t1 = time.clock()
[Fs, x] = audioBasicIO.readAudioFile("diarizationExample.wav");
segments = aS.silenceRemoval(x, Fs, 0.050, 0.050, smoothWindow = 1.0, Weight = 0.3, plot = False)
t2 = time.clock()
perTime1 = duration / (t2-t1); print "Silence removal \t {0:.1f} x realtime".format(perTime1)
elif argv[1] == "-thumbnailing":
for i in range(nExp):
[Fs1, x1] = audioBasicIO.readAudioFile("scottish.wav")
duration1 = x1.shape[0] / float(Fs1)
t1 = time.clock()
[A1, A2, B1, B2, Smatrix] = aS.musicThumbnailing(x1, Fs1, 1.0, 1.0, 15.0) # find thumbnail endpoints
t2 = time.clock()
perTime1 = duration1 / (t2-t1); print "Thumbnail \t {0:.1f} x realtime".format(perTime1)
elif argv[1] == "-diarization-noLDA":
for i in range(nExp):
[Fs1, x1] = audioBasicIO.readAudioFile("diarizationExample.wav")
duration1 = x1.shape[0] / float(Fs1)
t1 = time.clock()
aS.speakerDiarization("diarizationExample.wav", 4, LDAdim = 0, PLOT = False)
t2 = time.clock()
perTime1 = duration1 / (t2-t1); print "Diarization \t {0:.1f} x realtime".format(perTime1)
elif argv[1] == "-diarization-LDA":
for i in range(nExp):
[Fs1, x1] = audioBasicIO.readAudioFile("diarizationExample.wav")
duration1 = x1.shape[0] / float(Fs1)
t1 = time.clock()
aS.speakerDiarization("diarizationExample.wav", 4, PLOT = False)
t2 = time.clock()
perTime1 = duration1 / (t2-t1); print "Diarization \t {0:.1f} x realtime".format(perTime1)
if __name__ == '__main__':
main(sys.argv)
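Each branch in `main` reports a realtime factor, i.e. seconds of audio processed per second of wall-clock time. A sketch of that measurement using `time.perf_counter` (note that `time.clock`, used above, was removed in Python 3.8; the workload shown here is a stand-in):

```python
import time

def realtime_factor(duration_sec, work):
    # Returns how many seconds of audio are processed per wall-clock second.
    t1 = time.perf_counter()
    work()
    t2 = time.perf_counter()
    return duration_sec / (t2 - t1)

factor = realtime_factor(60.0, lambda: sum(range(100000)))
print("{0:.1f} x realtime".format(factor))
```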
| apache-2.0 |
prasadtalasila/MailingListParser | lib/deprecated/graph_authors_infomap_community.py | 1 | 19491 | """
This module finds the community structure of the author network according to the Infomap method of Martin Rosvall
and Carl T. Bergstrom and returns an appropriate VertexClustering object. It is implemented using both
the iGraph package and the Infomap tool from MapEquation.org. The VertexClustering object represents the clustering of
the vertex set of a graph and provides methods such as extracting the subgraph corresponding to a cluster.
"""
import json
import subprocess
import sys
import igraph
import numpy
import plotly
from matplotlib import pyplot as plt
from plotly.tools import FigureFactory as FF
from scipy.cluster.hierarchy import dendrogram, linkage
from analysis.author import generate_author_ranking
from util.read import *
sys.setrecursionlimit(10000)
def write_authors_data_matrix(json_data, tree_filename="infomap/output/"+"author_graph.tree"):
"""
Write author score, in-degree, out-degree, clustering coefficient and
Infomap module flow for the top authors to top_authors_data.csv.
:param json_data: dict mapping message IDs to parsed message headers
:param tree_filename: path to the Infomap .tree output file
:return: None
"""
top_authors = set()
top_authors_data = dict()
author_scores = generate_author_ranking(active_score=2, passive_score=1, write_to_file=False)
index = 0
for email_addr, author_score in author_scores:
index += 1
top_authors.add(email_addr)
top_authors_data[email_addr] = [author_score]
if index == 100:
break
print("Adding nodes to author's graph...")
author_graph = nx.DiGraph()
for msg_id, message in json_data.items():
if message['From'] in top_authors:
if message['Cc'] is None:
addr_list = message['To']
else:
addr_list = message['To'] | message['Cc']
for to_address in addr_list:
if to_address in top_authors:
if author_graph.has_edge(message['From'], to_address):
author_graph[message['From']][to_address]['weight'] *= \
author_graph[message['From']][to_address]['weight'] / (author_graph[message['From']][to_address]['weight'] + 1)
else:
author_graph.add_edge(message['From'], to_address, weight=1)
author_graph_undirected = author_graph.to_undirected()
clustering_coeff = nx.clustering(author_graph_undirected)
in_degree_dict = author_graph.in_degree(nbunch=author_graph.nodes_iter())
out_degree_dict = author_graph.out_degree(nbunch=author_graph.nodes_iter())
for email_addr in top_authors:
top_authors_data[email_addr].append(in_degree_dict[email_addr])
top_authors_data[email_addr].append(out_degree_dict[email_addr])
top_authors_data[email_addr].append(clustering_coeff[email_addr])
print("Parsing", tree_filename + "...")
with open(tree_filename, 'r') as tree_file:
for line in tree_file:
if not line or line[0] == '#':
continue
line = line.split()
if line[2][1:-1] in top_authors:
top_authors_data[line[2][1:-1]].append(float(line[1]))
tree_file.close()
with open("top_authors_data.csv", 'w') as output_file:
output_file.write("Email Address,Author Score,In-Degree,Out-Degree,Clustering Coeff,Module Flow\n")
for email_addr, data_list in top_authors_data.items():
output_file.write(email_addr+","+",".join([str(x) for x in data_list])+"\n")
output_file.close()
print("Authors data written to file.")
def write_to_pajek(author_graph, filename="author_graph.net"):
# Write Pajek file compatible with the Infomap Community Detection module
nx.write_pajek(author_graph, filename)
lines_in_file = list()
with open(filename, 'r') as pajek_file:
for line in pajek_file:
lines_in_file.append(line)
num_vertices = int(lines_in_file[0].split()[1])
for i in range(1, num_vertices+1):
line = lines_in_file[i].split()
line[1] = "\"" + line[1] + "\""
del line[2:]
line.append("\n")
lines_in_file[i] = " ".join(line)
with open(filename, 'w') as pajek_file:
for line in lines_in_file:
pajek_file.write(line)
print("Written to:", filename)
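The rewrite in `write_to_pajek` exists because Infomap expects a minimal Pajek layout: a `*Vertices N` section with 1-based ids and quoted labels, followed by `*Arcs` lines of `source target weight`. A stdlib sketch that emits this layout for a toy graph (the helper is illustrative):

```python
def to_pajek(nodes, edges):
    # nodes: list of labels; edges: (src_label, dst_label, weight) triples
    ids = {name: i + 1 for i, name in enumerate(nodes)}  # Pajek ids are 1-based
    lines = ['*Vertices %d' % len(nodes)]
    lines += ['%d "%s"' % (ids[name], name) for name in nodes]
    lines.append('*Arcs')
    lines += ['%d %d %d' % (ids[s], ids[t], w) for s, t, w in edges]
    return '\n'.join(lines) + '\n'

net = to_pajek(['alice@x.org', 'bob@y.org'], [('alice@x.org', 'bob@y.org', 3)])
print(net)
```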
def write_pajek_for_submodules(json_data, tree_filename="infomap/output/"+"author_graph.tree"):
"""
Write one Pajek file per top-level Infomap module and re-run Infomap on each
submodule's graph.
:param json_data: dict mapping message IDs to parsed message headers
:param tree_filename: path to the Infomap .tree output file
:return: None
"""
current_module = 1
authors_in_module = set()
with open(tree_filename, 'r') as tree_file:
for line in tree_file:
if line[0] == '#':
continue
if int(line[:line.index(":")]) > current_module:
author_graph = nx.DiGraph()
for msg_id, message in json_data.items():
if message['Cc'] is None:
addr_list = message['To']
else:
addr_list = message['To'] | message['Cc']
# Adding only the required edges to the authors graph:
for to_address in addr_list & authors_in_module:
if author_graph.has_edge(message['From'], to_address):
author_graph[message['From']][to_address]['weight'] += 1
else:
author_graph.add_edge(message['From'], to_address, weight=1)
output_filename = "submodule_"+str(current_module)+".net"
write_to_pajek(author_graph,filename=output_filename)
# Run the infomaps algorithm
output_folder = 'output_submodule' + str(current_module) + "/"
subprocess.run(args=['mkdir', output_folder])
# Each CLI argument must be its own list element; a single concatenated
# string would be passed to Infomap as one argument.
subprocess.run(args=['./infomap/Infomap', output_filename, output_folder,
'--tree', '--bftree', '--btree', '-d', '-c',
'--node-ranks', '--flow-network', '--map'])
current_module += 1
authors_in_module = {line[line.index("\"")+1:line.rindex("\"")]}
else:
authors_in_module.add(line[line.index("\"")+1:line.rindex("\"")])
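Both tree-file readers above depend on the Infomap `.tree` line format: a colon-separated module path, a flow value, then the quoted node name. A stdlib sketch of that parsing (the sample line is illustrative):

```python
def parse_tree_line(line):
    # e.g. '2:1:3 0.00138 "bob@y.org" 17' -> (top_module, flow, name)
    fields = line.split()
    top_module = int(fields[0].split(':')[0])   # leading component of the path
    flow = float(fields[1])
    name = line[line.index('"') + 1:line.rindex('"')]
    return top_module, flow, name

print(parse_tree_line('2:1:3 0.00138 "bob@y.org" 17'))  # (2, 0.00138, 'bob@y.org')
```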
def generate_author_communities(json_data):
"""
Build a weighted directed author graph in iGraph, run Infomap community
detection on it and save the resulting clustering as a PDF.
:param json_data: dict mapping message IDs to parsed message headers
:return: None
"""
author_graph = igraph.Graph()
author_graph.es["weight"] = 1.0
author_map = dict()
# c_surface = cairo.ImageSurface(cairo.FORMAT_ARGB32, 1600, 900)
# ctx = cairo.Context(c_surface)
# ctx.scale(1900, 900)
# ctx.rectangle(0, 0, 1, 1)
# ctx.set_source_rgba(0,0,0,0)
# ctx.fill()
"""
Graphs can also be indexed by strings or pairs of vertex indices or vertex names. When a graph is
indexed by a string, the operation translates to the retrieval, creation, modification or deletion
of a graph attribute.
When a graph is indexed by a pair of vertex indices or names, the graph itself is treated as an
adjacency matrix and the corresponding cell of the matrix is returned. Assigning values different
from zero or one to the adjacency matrix will be translated to one, unless the graph is weighted,
in which case the numbers will be treated as weights.
"""
top_authors = set()
author_scores = generate_author_ranking(active_score=2, passive_score=1, write_to_file=False)
index = 0
for email_addr, author_score in author_scores:
index += 1
top_authors.add(email_addr)
if index == 100:
break
index = 0
for id, node in json_data.items():
if node['From'] in top_authors:
if node['From'] not in author_map:
author_map[node['From']] = index
author_graph.add_vertex(name=node['From'], label=node['From'])
index += 1
for to_addr in node['To']:
if to_addr in top_authors:
if to_addr not in author_map:
author_map[to_addr] = index
author_graph.add_vertex(name=to_addr, label=to_addr)
index += 1
if author_graph[node['From'], to_addr] == 0:
author_graph.add_edge(node['From'], to_addr, weight=1)
else:
author_graph[node['From'], to_addr] += 1
if node['Cc'] is None:
continue
for to_addr in node['Cc']:
if to_addr in top_authors:
if to_addr not in author_map:
author_map[to_addr] = index
author_graph.add_vertex(name=to_addr, label=to_addr)
index += 1
if author_graph[node['From'], to_addr] == 0:
author_graph.add_edge(node['From'], to_addr, weight=1)
else:
author_graph[node['From'], to_addr] += 1
print("Nodes and Edges added to iGraph.")
# vertex_dendogram = author_graph.community_edge_betweenness(clusters=8, directed=True, weights="weight")
# igraph.plot(vertex_dendogram, "vd.pdf", vertex_label_size=3, bbox=(1200, 1200))
# print("Dendrogram saved as PDF.")
vertex_clustering_obj = author_graph.community_infomap(edge_weights=author_graph.es["weight"])
igraph.plot(vertex_clustering_obj, "vc.pdf", vertex_label_size=10, bbox=(1500, 1500), edge_color="gray")
print("Vertex Clustering saved as PDF.")
# with open("community_vertex_clustering.txt", 'w') as output_file:
# output_file.write(str(vertex_clustering_obj))
# output_file.close()
def generate_dendrogram_scipy(json_data, tree_filename="infomap/output/"+"author_graph.tree"):
author_graph = nx.Graph()
dist_queue, linkage_matrix, pair_queue = [], [], []  # separate lists, not aliases of one list
print("Reading author UIDs from JSON file...")
with open('author_uid_map.json', 'r') as map_file:
author_uid_map = json.load(map_file)
map_file.close()
# Use node_limit to limit the number of authors
# node_limit = len(author_uid_map)
node_limit = 50
buffer_queue = numpy.ndarray((node_limit,2), dtype=float)
print("Adding nodes to author's graph...")
for msg_id, message in json_data.items():
if message['Cc'] is None:
addr_list = message['To']
else:
addr_list = message['To'] | message['Cc']
for to_address in addr_list:
if author_graph.has_edge(message['From'], to_address):
author_graph[message['From']][to_address]['weight'] *= \
author_graph[message['From']][to_address]['weight'] / (author_graph[message['From']][to_address]['weight'] + 1)
else:
author_graph.add_edge(message['From'], to_address, weight=1)
shortest_paths = nx.single_source_shortest_path_length(author_graph, 'linux-kernel@vger.kernel.org')
print("Parsing", tree_filename + "...")
with open(tree_filename, 'r') as tree_file:
for line in tree_file:
if not line or line[0] == '#':
continue
line = line.split()
author_uid = author_uid_map[line[2][1:-1]]
if author_uid < node_limit:
if line[2][1:-1] in shortest_paths.keys():
buffer_queue[author_uid][0] = shortest_paths[line[2][1:-1]]
else:
buffer_queue[author_uid][0] = 100.0
buffer_queue[author_uid][1] = float(line[1])
tree_file.close()
# for node1 in buffer_queue:
# for node2 in buffer_queue:
# if node1 != node2 and node1[0] == node2[0]:
# node1 = node1[1]
# node2 = node2[1]
# dist1 = dist2 = float('inf')
# if node1 in shortest_paths.keys():
# if node2 in shortest_paths[node1].keys():
# dist1 = shortest_paths[node1][node2]
# if node2 in shortest_paths.keys():
# if node1 in shortest_paths[node2].keys():
# dist2 = shortest_paths[node2][node1]
# dist_queue.add(min(dist1, dist2), node1, node2)
# dist_queue.sort(reverse=True)
#
# # Using a disjoint set to track the nodes joined:
# disjoint_set = UnionFind(num_nodes)
#
# while dist_queue:
# dist1, node1, node2 = dist_queue.pop()
# if not disjoint_set.is_connected(node1, node2):
# linkage_matrix.add(node1, node2, dist1, 2)
# disjoint_set.union(node1, node2)
#
# for node1 in buffer_queue:
# for node2 in buffer_queue:
# if node1 != node2 and node1[0] == node2[0]:
# if not disjoint_set.is_connected(node1, node2):
# linkage_matrix.add(node1, node2, dist1, 4)
# disjoint_set.union(node1, node2)
print("Drawing the dendrogram...")
linkage_matrix = linkage(buffer_queue, 'single')
# print(linkage_matrix)
print("Saving figure to dendrogram_infomaps.png")
plt.figure(figsize=(80, 40))
plt.title('Hierarchical Clustering Dendrogram')
plt.xlabel('Author UID')
plt.ylabel('Code Length')
dendrogram(linkage_matrix)
plt.savefig("dendrogram_infomaps.png")
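The commented-out linkage construction above references a `UnionFind` structure to avoid re-joining already-connected nodes. A minimal sketch with path compression and union by size, matching the integer-node usage assumed in those comments:

```python
class UnionFind(object):
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n

    def find(self, x):
        # Path halving: point each visited node at its grandparent.
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def is_connected(self, x, y):
        return self.find(x) == self.find(y)

    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return
        if self.size[rx] < self.size[ry]:  # union by size: attach smaller root
            rx, ry = ry, rx
        self.parent[ry] = rx
        self.size[rx] += self.size[ry]

uf = UnionFind(4)
uf.union(0, 1)
uf.union(2, 3)
print(uf.is_connected(0, 1), uf.is_connected(1, 2))  # True False
```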
def generate_dendrogram_plotly(json_data, tree_filename="infomap/output/"+"author_graph.tree"):
author_graph = nx.Graph()
dist_queue, linkage_matrix, pair_queue = [], [], []  # separate lists, not aliases of one list
print("Reading author UIDs from JSON file...")
with open('author_uid_map.json', 'r') as map_file:
author_uid_map = json.load(map_file)
map_file.close()
# Use node_limit to limit the number of authors
# node_limit = len(author_uid_map)
node_limit = 100
# buffer_queue = numpy.ndarray((node_limit,2), dtype=float)
#
# print("Adding nodes to author's graph...")
# for msg_id, message in json_data.items():
# if message['Cc'] is None:
# addr_list = message['To']
# else:
# addr_list = message['To'] | message['Cc']
# for to_address in addr_list:
# if author_graph.has_edge(message['From'], to_address):
# author_graph[message['From']][to_address]['weight'] *= \
# author_graph[message['From']][to_address]['weight'] / (author_graph[message['From']][to_address]['weight'] + 1)
# else:
# author_graph.add_edge(message['From'], to_address, weight=1)
# shortest_paths = nx.single_source_shortest_path_length(author_graph, 'linux-kernel@vger.kernel.org')
#
# print("Parsing", tree_filename + "...")
# with open(tree_filename, 'r') as tree_file:
# for line in tree_file:
# if not line or line[0] == '#':
# continue
# line = line.split()
# author_uid = author_uid_map[line[2][1:-1]]
# if author_uid < node_limit:
# if line[2][1:-1] in shortest_paths.keys():
# buffer_queue[author_uid][0] = shortest_paths[line[2][1:-1]]
# else:
# buffer_queue[author_uid][0] = 1000.0
# buffer_queue[author_uid][1] = float(line[1])
# tree_file.close()
dist_matrix = numpy.ndarray((node_limit, node_limit), dtype=float)
for msg_id, message in json_data.items():
if author_uid_map[message['From']] < node_limit:
if message['Cc'] is None:
addr_list = message['To']
else:
addr_list = message['To'] | message['Cc']
for to_address in addr_list:
if author_uid_map[to_address] < node_limit:
if author_graph.has_edge(message['From'], to_address):
author_graph[message['From']][to_address]['weight'] *= \
author_graph[message['From']][to_address]['weight'] / (
author_graph[message['From']][to_address]['weight'] + 1)
else:
author_graph.add_edge(message['From'], to_address, weight=1)
shortest_paths = nx.all_pairs_shortest_path_length(author_graph)
print("Nodes added to the author's graph.")
for i1 in author_graph.nodes():
for i2 in author_graph.nodes():
if i2 in shortest_paths[i1]:
dist_matrix[author_uid_map[i1]][author_uid_map[i2]] = shortest_paths[i1][i2]
else:
dist_matrix[author_uid_map[i1]][author_uid_map[i2]] = 100
print("Drawing the dendrogram...")
# linkage_matrix = linkage(buffer_queue, 'single')
# dendro = FF.create_dendrogram(linkage_matrix)
dendro = FF.create_dendrogram(dist_matrix)
dendro['layout'].update({'width': 1200, 'height': 800})
plotly.offline.plot(dendro, filename='dendrogram_infomaps.html')
json_data = dict()
email_re = re.compile(r'[\w\.-]+@[\w\.-]+')
# Time limit can be specified here in the form of a timestamp in one of the identifiable formats and all messages
# that have arrived after this timestamp will be ignored.
time_limit = None
# If true, then messages that belong to threads that have only a single author are ignored.
ignore_lat = True
if time_limit is None:
time_limit = time.strftime("%a, %d %b %Y %H:%M:%S %z")
msgs_before_time = set()
time_limit = get_datetime_object(time_limit)
print("All messages before", time_limit, "are being considered.")
if not ignore_lat:
with open('clean_data.json', 'r') as json_file:
for chunk in lines_per_n(json_file, 9):
json_obj = json.loads(chunk)
json_obj['Message-ID'] = int(json_obj['Message-ID'])
json_obj['Time'] = datetime.datetime.strptime(json_obj['Time'], "%a, %d %b %Y %H:%M:%S %z")
if json_obj['Time'] < time_limit:
# print("\nFrom", json_obj['From'], "\nTo", json_obj['To'], "\nCc", json_obj['Cc'])
from_addr = email_re.search(json_obj['From'])
json_obj['From'] = from_addr.group(0) if from_addr is not None else json_obj['From']
json_obj['To'] = set(email_re.findall(json_obj['To']))
json_obj['Cc'] = set(email_re.findall(json_obj['Cc'])) if json_obj['Cc'] is not None else None
# print("\nFrom", json_obj['From'], "\nTo", json_obj['To'], "\nCc", json_obj['Cc'])
json_data[json_obj['Message-ID']] = json_obj
else:
lone_author_threads = get_lone_author_threads(False)
with open('clean_data.json', 'r') as json_file:
for chunk in lines_per_n(json_file, 9):
json_obj = json.loads(chunk)
json_obj['Message-ID'] = int(json_obj['Message-ID'])
if json_obj['Message-ID'] not in lone_author_threads:
json_obj['Time'] = datetime.datetime.strptime(json_obj['Time'], "%a, %d %b %Y %H:%M:%S %z")
if json_obj['Time'] < time_limit:
# print("\nFrom", json_obj['From'], "\nTo", json_obj['To'], "\nCc", json_obj['Cc'])
from_addr = email_re.search(json_obj['From'])
json_obj['From'] = from_addr.group(0) if from_addr is not None else json_obj['From']
json_obj['To'] = set(email_re.findall(json_obj['To']))
json_obj['Cc'] = set(email_re.findall(json_obj['Cc'])) if json_obj['Cc'] is not None else None
# print("\nFrom", json_obj['From'], "\nTo", json_obj['To'], "\nCc", json_obj['Cc'])
json_data[json_obj['Message-ID']] = json_obj
print("JSON data loaded.")
write_authors_data_matrix(json_data) | gpl-3.0 |
louispotok/pandas | pandas/core/reshape/merge.py | 2 | 61842 | """
SQL-style merge routines
"""
import copy
import warnings
import string
import numpy as np
from pandas.compat import range, lzip, zip, map, filter
import pandas.compat as compat
from pandas import (Categorical, DataFrame,
Index, MultiIndex, Timedelta)
from pandas.core.arrays.categorical import _recode_for_categories
from pandas.core.frame import _merge_doc
from pandas.core.dtypes.common import (
is_datetime64tz_dtype,
is_datetime64_dtype,
needs_i8_conversion,
is_int64_dtype,
is_array_like,
is_categorical_dtype,
is_integer_dtype,
is_float_dtype,
is_numeric_dtype,
is_integer,
is_int_or_datetime_dtype,
is_dtype_equal,
is_bool,
is_bool_dtype,
is_list_like,
is_datetimelike,
_ensure_int64,
_ensure_float64,
_ensure_object,
_get_dtype)
from pandas.core.dtypes.missing import na_value_for_dtype
from pandas.core.internals import (items_overlap_with_suffix,
concatenate_block_managers)
from pandas.util._decorators import Appender, Substitution
from pandas.core.sorting import is_int64_overflow_possible
import pandas.core.algorithms as algos
import pandas.core.sorting as sorting
import pandas.core.common as com
from pandas._libs import hashtable as libhashtable, join as libjoin, lib
from pandas.errors import MergeError
@Substitution('\nleft : DataFrame')
@Appender(_merge_doc, indents=0)
def merge(left, right, how='inner', on=None, left_on=None, right_on=None,
left_index=False, right_index=False, sort=False,
suffixes=('_x', '_y'), copy=True, indicator=False,
validate=None):
op = _MergeOperation(left, right, how=how, on=on, left_on=left_on,
right_on=right_on, left_index=left_index,
right_index=right_index, sort=sort, suffixes=suffixes,
copy=copy, indicator=indicator,
validate=validate)
return op.get_result()
if __debug__:
merge.__doc__ = _merge_doc % '\nleft : DataFrame'
def _groupby_and_merge(by, on, left, right, _merge_pieces,
check_duplicates=True):
"""
groupby & merge; we are always performing a left-by type operation
Parameters
----------
by: field to group
on: duplicates field
left: left frame
right: right frame
_merge_pieces: function for merging
check_duplicates: boolean, default True
should we check & clean duplicates
"""
pieces = []
if not isinstance(by, (list, tuple)):
by = [by]
lby = left.groupby(by, sort=False)
# if we can groupby the rhs
# then we can get vastly better perf
try:
# we will check & remove duplicates if indicated
if check_duplicates:
if on is None:
on = []
elif not isinstance(on, (list, tuple)):
on = [on]
if right.duplicated(by + on).any():
right = right.drop_duplicates(by + on, keep='last')
rby = right.groupby(by, sort=False)
except KeyError:
rby = None
for key, lhs in lby:
if rby is None:
rhs = right
else:
try:
rhs = right.take(rby.indices[key])
except KeyError:
# key doesn't exist in left
lcols = lhs.columns.tolist()
cols = lcols + [r for r in right.columns
if r not in set(lcols)]
merged = lhs.reindex(columns=cols)
merged.index = range(len(merged))
pieces.append(merged)
continue
merged = _merge_pieces(lhs, rhs)
# make sure join keys are in the merged
# TODO, should _merge_pieces do this?
for k in by:
try:
if k in merged:
merged[k] = key
except KeyError:
pass
pieces.append(merged)
# preserve the original order
# if we have a missing piece this can be reset
from pandas.core.reshape.concat import concat
result = concat(pieces, ignore_index=True)
result = result.reindex(columns=pieces[0].columns, copy=False)
return result, lby
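The control flow of `_groupby_and_merge` (group the left side, look up the matching right-side group, merge piece by piece, then concatenate in order) can be sketched with plain lists of dicts standing in for frames (all names here are illustrative, not pandas API):

```python
from collections import defaultdict

def groupby_and_merge(by, left, right, merge_pieces):
    # left/right: lists of row dicts; by: grouping key; merge_pieces: merges two groups
    lgroups, rgroups = defaultdict(list), defaultdict(list)
    for row in left:
        lgroups[row[by]].append(row)
    for row in right:
        rgroups[row[by]].append(row)
    pieces = []
    for key, lhs in lgroups.items():       # preserves left-side group order
        rhs = rgroups.get(key, [])         # missing key: merge against nothing
        pieces.extend(merge_pieces(lhs, rhs))
    return pieces

left = [{'g': 'a', 'v': 1}, {'g': 'b', 'v': 2}]
right = [{'g': 'a', 'w': 10}]
merged = groupby_and_merge('g', left, right,
                           lambda lhs, rhs: [dict(l, **r) for l in lhs for r in (rhs or [{}])])
print(merged)  # [{'g': 'a', 'v': 1, 'w': 10}, {'g': 'b', 'v': 2}]
```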
def merge_ordered(left, right, on=None,
left_on=None, right_on=None,
left_by=None, right_by=None,
fill_method=None, suffixes=('_x', '_y'),
how='outer'):
"""Perform merge with optional filling/interpolation designed for ordered
data like time series data. Optionally perform group-wise merge (see
examples)
Parameters
----------
left : DataFrame
right : DataFrame
on : label or list
Field names to join on. Must be found in both DataFrames.
left_on : label or list, or array-like
Field names to join on in left DataFrame. Can be a vector or list of
vectors of the length of the DataFrame to use a particular vector as
the join key instead of columns
right_on : label or list, or array-like
Field names to join on in right DataFrame or vector/list of vectors per
left_on docs
left_by : column name or list of column names
Group left DataFrame by group columns and merge piece by piece with
right DataFrame
right_by : column name or list of column names
Group right DataFrame by group columns and merge piece by piece with
left DataFrame
fill_method : {'ffill', None}, default None
Interpolation method for data
suffixes : 2-length sequence (tuple, list, ...)
Suffix to apply to overlapping column names in the left and right
side, respectively
how : {'left', 'right', 'outer', 'inner'}, default 'outer'
* left: use only keys from left frame (SQL: left outer join)
* right: use only keys from right frame (SQL: right outer join)
* outer: use union of keys from both frames (SQL: full outer join)
* inner: use intersection of keys from both frames (SQL: inner join)
.. versionadded:: 0.19.0
Examples
--------
>>> A >>> B
key lvalue group key rvalue
0 a 1 a 0 b 1
1 c 2 a 1 c 2
2 e 3 a 2 d 3
3 a 1 b
4 c 2 b
5 e 3 b
>>> merge_ordered(A, B, fill_method='ffill', left_by='group')
group key lvalue rvalue
0 a a 1 NaN
1 a b 1 1.0
2 a c 2 2.0
3 a d 2 3.0
4 a e 3 3.0
5 b a 1 NaN
6 b b 1 1.0
7 b c 2 2.0
8 b d 2 3.0
9 b e 3 3.0
Returns
-------
merged : DataFrame
The output type will be the same as 'left', if it is a subclass
of DataFrame.
See also
--------
merge
merge_asof
"""
def _merger(x, y):
# perform the ordered merge operation
op = _OrderedMerge(x, y, on=on, left_on=left_on, right_on=right_on,
suffixes=suffixes, fill_method=fill_method,
how=how)
return op.get_result()
if left_by is not None and right_by is not None:
raise ValueError('Can only group either left or right frames')
elif left_by is not None:
result, _ = _groupby_and_merge(left_by, on, left, right,
lambda x, y: _merger(x, y),
check_duplicates=False)
elif right_by is not None:
result, _ = _groupby_and_merge(right_by, on, right, left,
lambda x, y: _merger(y, x),
check_duplicates=False)
else:
result = _merger(left, right)
return result
def merge_asof(left, right, on=None,
left_on=None, right_on=None,
left_index=False, right_index=False,
by=None, left_by=None, right_by=None,
suffixes=('_x', '_y'),
tolerance=None,
allow_exact_matches=True,
direction='backward'):
"""Perform an asof merge. This is similar to a left-join except that we
match on nearest key rather than equal keys.
Both DataFrames must be sorted by the key.
For each row in the left DataFrame:
- A "backward" search selects the last row in the right DataFrame whose
'on' key is less than or equal to the left's key.
- A "forward" search selects the first row in the right DataFrame whose
'on' key is greater than or equal to the left's key.
- A "nearest" search selects the row in the right DataFrame whose 'on'
key is closest in absolute distance to the left's key.
The default is "backward" and is compatible in versions below 0.20.0.
The direction parameter was added in version 0.20.0 and introduces
"forward" and "nearest".
Optionally match on equivalent keys with 'by' before searching with 'on'.
.. versionadded:: 0.19.0
Parameters
----------
left : DataFrame
right : DataFrame
on : label
Field name to join on. Must be found in both DataFrames.
The data MUST be ordered. Furthermore this must be a numeric column,
such as datetimelike, integer, or float. On or left_on/right_on
must be given.
left_on : label
Field name to join on in left DataFrame.
right_on : label
Field name to join on in right DataFrame.
left_index : boolean
Use the index of the left DataFrame as the join key.
.. versionadded:: 0.19.2
right_index : boolean
Use the index of the right DataFrame as the join key.
.. versionadded:: 0.19.2
by : column name or list of column names
Match on these columns before performing merge operation.
left_by : column name
Field names to match on in the left DataFrame.
.. versionadded:: 0.19.2
right_by : column name
Field names to match on in the right DataFrame.
.. versionadded:: 0.19.2
suffixes : 2-length sequence (tuple, list, ...)
Suffix to apply to overlapping column names in the left and right
side, respectively.
tolerance : integer or Timedelta, optional, default None
Select asof tolerance within this range; must be compatible
with the merge index.
allow_exact_matches : boolean, default True
- If True, allow matching with the same 'on' value
(i.e. less-than-or-equal-to / greater-than-or-equal-to)
- If False, don't match the same 'on' value
(i.e., strictly less-than / strictly greater-than)
direction : 'backward' (default), 'forward', or 'nearest'
Whether to search for prior, subsequent, or closest matches.
.. versionadded:: 0.20.0
Returns
-------
merged : DataFrame
Examples
--------
>>> left = pd.DataFrame({'a': [1, 5, 10], 'left_val': ['a', 'b', 'c']})
>>> left
a left_val
0 1 a
1 5 b
2 10 c
>>> right = pd.DataFrame({'a': [1, 2, 3, 6, 7],
... 'right_val': [1, 2, 3, 6, 7]})
>>> right
a right_val
0 1 1
1 2 2
2 3 3
3 6 6
4 7 7
>>> pd.merge_asof(left, right, on='a')
a left_val right_val
0 1 a 1
1 5 b 3
2 10 c 7
>>> pd.merge_asof(left, right, on='a', allow_exact_matches=False)
a left_val right_val
0 1 a NaN
1 5 b 3.0
2 10 c 7.0
>>> pd.merge_asof(left, right, on='a', direction='forward')
a left_val right_val
0 1 a 1.0
1 5 b 6.0
2 10 c NaN
>>> pd.merge_asof(left, right, on='a', direction='nearest')
a left_val right_val
0 1 a 1
1 5 b 6
2 10 c 7
We can use indexed DataFrames as well.
>>> left = pd.DataFrame({'left_val': ['a', 'b', 'c']}, index=[1, 5, 10])
>>> left
left_val
1 a
5 b
10 c
>>> right = pd.DataFrame({'right_val': [1, 2, 3, 6, 7]},
... index=[1, 2, 3, 6, 7])
>>> right
right_val
1 1
2 2
3 3
6 6
7 7
>>> pd.merge_asof(left, right, left_index=True, right_index=True)
left_val right_val
1 a 1
5 b 3
10 c 7
    Here is a real-world time-series example
>>> quotes
time ticker bid ask
0 2016-05-25 13:30:00.023 GOOG 720.50 720.93
1 2016-05-25 13:30:00.023 MSFT 51.95 51.96
2 2016-05-25 13:30:00.030 MSFT 51.97 51.98
3 2016-05-25 13:30:00.041 MSFT 51.99 52.00
4 2016-05-25 13:30:00.048 GOOG 720.50 720.93
5 2016-05-25 13:30:00.049 AAPL 97.99 98.01
6 2016-05-25 13:30:00.072 GOOG 720.50 720.88
7 2016-05-25 13:30:00.075 MSFT 52.01 52.03
>>> trades
time ticker price quantity
0 2016-05-25 13:30:00.023 MSFT 51.95 75
1 2016-05-25 13:30:00.038 MSFT 51.95 155
2 2016-05-25 13:30:00.048 GOOG 720.77 100
3 2016-05-25 13:30:00.048 GOOG 720.92 100
4 2016-05-25 13:30:00.048 AAPL 98.00 100
By default we are taking the asof of the quotes
>>> pd.merge_asof(trades, quotes,
... on='time',
... by='ticker')
time ticker price quantity bid ask
0 2016-05-25 13:30:00.023 MSFT 51.95 75 51.95 51.96
1 2016-05-25 13:30:00.038 MSFT 51.95 155 51.97 51.98
2 2016-05-25 13:30:00.048 GOOG 720.77 100 720.50 720.93
3 2016-05-25 13:30:00.048 GOOG 720.92 100 720.50 720.93
4 2016-05-25 13:30:00.048 AAPL 98.00 100 NaN NaN
We only asof within 2ms between the quote time and the trade time
>>> pd.merge_asof(trades, quotes,
... on='time',
... by='ticker',
... tolerance=pd.Timedelta('2ms'))
time ticker price quantity bid ask
0 2016-05-25 13:30:00.023 MSFT 51.95 75 51.95 51.96
1 2016-05-25 13:30:00.038 MSFT 51.95 155 NaN NaN
2 2016-05-25 13:30:00.048 GOOG 720.77 100 720.50 720.93
3 2016-05-25 13:30:00.048 GOOG 720.92 100 720.50 720.93
4 2016-05-25 13:30:00.048 AAPL 98.00 100 NaN NaN
We only asof within 10ms between the quote time and the trade time
and we exclude exact matches on time. However *prior* data will
propagate forward
>>> pd.merge_asof(trades, quotes,
... on='time',
... by='ticker',
... tolerance=pd.Timedelta('10ms'),
... allow_exact_matches=False)
time ticker price quantity bid ask
0 2016-05-25 13:30:00.023 MSFT 51.95 75 NaN NaN
1 2016-05-25 13:30:00.038 MSFT 51.95 155 51.97 51.98
2 2016-05-25 13:30:00.048 GOOG 720.77 100 NaN NaN
3 2016-05-25 13:30:00.048 GOOG 720.92 100 NaN NaN
4 2016-05-25 13:30:00.048 AAPL 98.00 100 NaN NaN
See also
--------
merge
merge_ordered
"""
op = _AsOfMerge(left, right,
on=on, left_on=left_on, right_on=right_on,
left_index=left_index, right_index=right_index,
by=by, left_by=left_by, right_by=right_by,
suffixes=suffixes,
how='asof', tolerance=tolerance,
allow_exact_matches=allow_exact_matches,
direction=direction)
return op.get_result()
# TODO: transformations??
# TODO: only copy DataFrames when modification necessary
class _MergeOperation(object):
"""
Perform a database (SQL) merge operation between two DataFrame objects
using either columns as keys or their row indexes
"""
_merge_type = 'merge'
def __init__(self, left, right, how='inner', on=None,
left_on=None, right_on=None, axis=1,
left_index=False, right_index=False, sort=True,
suffixes=('_x', '_y'), copy=True, indicator=False,
validate=None):
self.left = self.orig_left = left
self.right = self.orig_right = right
self.how = how
self.axis = axis
self.on = com._maybe_make_list(on)
self.left_on = com._maybe_make_list(left_on)
self.right_on = com._maybe_make_list(right_on)
self.copy = copy
self.suffixes = suffixes
self.sort = sort
self.left_index = left_index
self.right_index = right_index
self.indicator = indicator
if isinstance(self.indicator, compat.string_types):
self.indicator_name = self.indicator
elif isinstance(self.indicator, bool):
self.indicator_name = '_merge' if self.indicator else None
else:
raise ValueError(
'indicator option can only accept boolean or string arguments')
if not isinstance(left, DataFrame):
raise ValueError('can not merge DataFrame with instance of '
'type {left}'.format(left=type(left)))
if not isinstance(right, DataFrame):
raise ValueError('can not merge DataFrame with instance of '
'type {right}'.format(right=type(right)))
if not is_bool(left_index):
raise ValueError(
'left_index parameter must be of type bool, not '
'{left_index}'.format(left_index=type(left_index)))
if not is_bool(right_index):
raise ValueError(
'right_index parameter must be of type bool, not '
'{right_index}'.format(right_index=type(right_index)))
# warn user when merging between different levels
if left.columns.nlevels != right.columns.nlevels:
msg = ('merging between different levels can give an unintended '
'result ({left} levels on the left, {right} on the right)'
).format(left=left.columns.nlevels,
right=right.columns.nlevels)
warnings.warn(msg, UserWarning)
self._validate_specification()
# note this function has side effects
(self.left_join_keys,
self.right_join_keys,
self.join_names) = self._get_merge_keys()
# validate the merge keys dtypes. We may need to coerce
# to avoid incompat dtypes
self._maybe_coerce_merge_keys()
# If argument passed to validate,
# check if columns specified as unique
# are in fact unique.
if validate is not None:
self._validate(validate)
def get_result(self):
if self.indicator:
self.left, self.right = self._indicator_pre_merge(
self.left, self.right)
join_index, left_indexer, right_indexer = self._get_join_info()
ldata, rdata = self.left._data, self.right._data
lsuf, rsuf = self.suffixes
llabels, rlabels = items_overlap_with_suffix(ldata.items, lsuf,
rdata.items, rsuf)
lindexers = {1: left_indexer} if left_indexer is not None else {}
rindexers = {1: right_indexer} if right_indexer is not None else {}
result_data = concatenate_block_managers(
[(ldata, lindexers), (rdata, rindexers)],
axes=[llabels.append(rlabels), join_index],
concat_axis=0, copy=self.copy)
typ = self.left._constructor
result = typ(result_data).__finalize__(self, method=self._merge_type)
if self.indicator:
result = self._indicator_post_merge(result)
self._maybe_add_join_keys(result, left_indexer, right_indexer)
self._maybe_restore_index_levels(result)
return result
def _indicator_pre_merge(self, left, right):
columns = left.columns.union(right.columns)
for i in ['_left_indicator', '_right_indicator']:
if i in columns:
raise ValueError("Cannot use `indicator=True` option when "
"data contains a column named {name}"
.format(name=i))
if self.indicator_name in columns:
raise ValueError(
"Cannot use name of an existing column for indicator column")
left = left.copy()
right = right.copy()
left['_left_indicator'] = 1
left['_left_indicator'] = left['_left_indicator'].astype('int8')
right['_right_indicator'] = 2
right['_right_indicator'] = right['_right_indicator'].astype('int8')
return left, right
def _indicator_post_merge(self, result):
result['_left_indicator'] = result['_left_indicator'].fillna(0)
result['_right_indicator'] = result['_right_indicator'].fillna(0)
result[self.indicator_name] = Categorical((result['_left_indicator'] +
result['_right_indicator']),
categories=[1, 2, 3])
result[self.indicator_name] = (
result[self.indicator_name]
.cat.rename_categories(['left_only', 'right_only', 'both']))
result = result.drop(labels=['_left_indicator', '_right_indicator'],
axis=1)
return result
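# _indicator_pre_merge/_indicator_post_merge above back the public
# `indicator` option; a small sketch of the observable result (the column
# name '_merge' is the default when indicator=True):
#
# ```python
import pandas as pd

left = pd.DataFrame({'key': [1, 2, 3], 'lval': ['a', 'b', 'c']})
right = pd.DataFrame({'key': [2, 3, 4], 'rval': ['x', 'y', 'z']})

# indicator=True adds a Categorical '_merge' column marking each row's origin
out = pd.merge(left, right, on='key', how='outer', indicator=True)
# ```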
def _maybe_restore_index_levels(self, result):
"""
Restore index levels specified as `on` parameters
Here we check for cases where `self.left_on` and `self.right_on` pairs
each reference an index level in their respective DataFrames. The
joined columns corresponding to these pairs are then restored to the
index of `result`.
**Note:** This method has side effects. It modifies `result` in-place
Parameters
----------
result: DataFrame
merge result
Returns
-------
None
"""
names_to_restore = []
for name, left_key, right_key in zip(self.join_names,
self.left_on,
self.right_on):
if (self.orig_left._is_level_reference(left_key) and
self.orig_right._is_level_reference(right_key) and
name not in result.index.names):
names_to_restore.append(name)
if names_to_restore:
result.set_index(names_to_restore, inplace=True)
def _maybe_add_join_keys(self, result, left_indexer, right_indexer):
left_has_missing = None
right_has_missing = None
keys = zip(self.join_names, self.left_on, self.right_on)
for i, (name, lname, rname) in enumerate(keys):
if not _should_fill(lname, rname):
continue
take_left, take_right = None, None
if name in result:
if left_indexer is not None and right_indexer is not None:
if name in self.left:
if left_has_missing is None:
left_has_missing = (left_indexer == -1).any()
if left_has_missing:
take_right = self.right_join_keys[i]
if not is_dtype_equal(result[name].dtype,
self.left[name].dtype):
take_left = self.left[name]._values
elif name in self.right:
if right_has_missing is None:
right_has_missing = (right_indexer == -1).any()
if right_has_missing:
take_left = self.left_join_keys[i]
if not is_dtype_equal(result[name].dtype,
self.right[name].dtype):
take_right = self.right[name]._values
elif left_indexer is not None \
and is_array_like(self.left_join_keys[i]):
take_left = self.left_join_keys[i]
take_right = self.right_join_keys[i]
if take_left is not None or take_right is not None:
if take_left is None:
lvals = result[name]._values
else:
lfill = na_value_for_dtype(take_left.dtype)
lvals = algos.take_1d(take_left, left_indexer,
fill_value=lfill)
if take_right is None:
rvals = result[name]._values
else:
rfill = na_value_for_dtype(take_right.dtype)
rvals = algos.take_1d(take_right, right_indexer,
fill_value=rfill)
# if we have an all missing left_indexer
# make sure to just use the right values
mask = left_indexer == -1
if mask.all():
key_col = rvals
else:
key_col = Index(lvals).where(~mask, rvals)
if result._is_label_reference(name):
result[name] = key_col
elif result._is_level_reference(name):
if isinstance(result.index, MultiIndex):
idx_list = [result.index.get_level_values(level_name)
if level_name != name else key_col
for level_name in result.index.names]
result.set_index(idx_list, inplace=True)
else:
result.index = Index(key_col, name=name)
else:
result.insert(i, name or 'key_{i}'.format(i=i), key_col)
def _get_join_indexers(self):
""" return the join indexers """
return _get_join_indexers(self.left_join_keys,
self.right_join_keys,
sort=self.sort,
how=self.how)
def _get_join_info(self):
left_ax = self.left._data.axes[self.axis]
right_ax = self.right._data.axes[self.axis]
if self.left_index and self.right_index and self.how != 'asof':
join_index, left_indexer, right_indexer = \
left_ax.join(right_ax, how=self.how, return_indexers=True,
sort=self.sort)
elif self.right_index and self.how == 'left':
join_index, left_indexer, right_indexer = \
_left_join_on_index(left_ax, right_ax, self.left_join_keys,
sort=self.sort)
elif self.left_index and self.how == 'right':
join_index, right_indexer, left_indexer = \
_left_join_on_index(right_ax, left_ax, self.right_join_keys,
sort=self.sort)
else:
(left_indexer,
right_indexer) = self._get_join_indexers()
if self.right_index:
if len(self.left) > 0:
join_index = self.left.index.take(left_indexer)
else:
join_index = self.right.index.take(right_indexer)
left_indexer = np.array([-1] * len(join_index))
elif self.left_index:
if len(self.right) > 0:
join_index = self.right.index.take(right_indexer)
else:
join_index = self.left.index.take(left_indexer)
right_indexer = np.array([-1] * len(join_index))
else:
join_index = Index(np.arange(len(left_indexer)))
if len(join_index) == 0:
join_index = join_index.astype(object)
return join_index, left_indexer, right_indexer
def _get_merge_keys(self):
"""
Note: has side effects (copy/delete key columns)
Parameters
----------
left
right
on
Returns
-------
left_keys, right_keys
"""
left_keys = []
right_keys = []
join_names = []
right_drop = []
left_drop = []
left, right = self.left, self.right
stacklevel = 5 # Number of stack levels from df.merge
is_lkey = lambda x: is_array_like(x) and len(x) == len(left)
is_rkey = lambda x: is_array_like(x) and len(x) == len(right)
# Note that pd.merge_asof() has separate 'on' and 'by' parameters. A
# user could, for example, request 'left_index' and 'left_by'. In a
# regular pd.merge(), users cannot specify both 'left_index' and
# 'left_on'. (Instead, users have a MultiIndex). That means the
# self.left_on in this function is always empty in a pd.merge(), but
# a pd.merge_asof(left_index=True, left_by=...) will result in a
# self.left_on array with a None in the middle of it. This requires
# a work-around as designated in the code below.
# See _validate_specification() for where this happens.
# ugh, spaghetti re #733
if _any(self.left_on) and _any(self.right_on):
for lk, rk in zip(self.left_on, self.right_on):
if is_lkey(lk):
left_keys.append(lk)
if is_rkey(rk):
right_keys.append(rk)
join_names.append(None) # what to do?
else:
if rk is not None:
right_keys.append(
right._get_label_or_level_values(
rk, stacklevel=stacklevel))
join_names.append(rk)
else:
# work-around for merge_asof(right_index=True)
right_keys.append(right.index)
join_names.append(right.index.name)
else:
if not is_rkey(rk):
if rk is not None:
right_keys.append(
right._get_label_or_level_values(
rk, stacklevel=stacklevel))
else:
# work-around for merge_asof(right_index=True)
right_keys.append(right.index)
if lk is not None and lk == rk:
# avoid key upcast in corner case (length-0)
if len(left) > 0:
right_drop.append(rk)
else:
left_drop.append(lk)
else:
right_keys.append(rk)
if lk is not None:
left_keys.append(left._get_label_or_level_values(
lk, stacklevel=stacklevel))
join_names.append(lk)
else:
# work-around for merge_asof(left_index=True)
left_keys.append(left.index)
join_names.append(left.index.name)
elif _any(self.left_on):
for k in self.left_on:
if is_lkey(k):
left_keys.append(k)
join_names.append(None)
else:
left_keys.append(left._get_label_or_level_values(
k, stacklevel=stacklevel))
join_names.append(k)
if isinstance(self.right.index, MultiIndex):
right_keys = [lev._values.take(lab)
for lev, lab in zip(self.right.index.levels,
self.right.index.labels)]
else:
right_keys = [self.right.index.values]
elif _any(self.right_on):
for k in self.right_on:
if is_rkey(k):
right_keys.append(k)
join_names.append(None)
else:
right_keys.append(right._get_label_or_level_values(
k, stacklevel=stacklevel))
join_names.append(k)
if isinstance(self.left.index, MultiIndex):
left_keys = [lev._values.take(lab)
for lev, lab in zip(self.left.index.levels,
self.left.index.labels)]
else:
left_keys = [self.left.index.values]
if left_drop:
self.left = self.left._drop_labels_or_levels(left_drop)
if right_drop:
self.right = self.right._drop_labels_or_levels(right_drop)
return left_keys, right_keys, join_names
def _maybe_coerce_merge_keys(self):
# we have valid mergees but we may have to further
# coerce these if they are originally incompatible types
#
# for example if these are categorical, but are not dtype_equal
# or if we have object and integer dtypes
for lk, rk, name in zip(self.left_join_keys,
self.right_join_keys,
self.join_names):
if (len(lk) and not len(rk)) or (not len(lk) and len(rk)):
continue
lk_is_cat = is_categorical_dtype(lk)
rk_is_cat = is_categorical_dtype(rk)
# if either left or right is a categorical
            # then they must match exactly in categories & ordered
if lk_is_cat and rk_is_cat:
if lk.is_dtype_equal(rk):
continue
elif lk_is_cat or rk_is_cat:
pass
elif is_dtype_equal(lk.dtype, rk.dtype):
continue
msg = ("You are trying to merge on {lk_dtype} and "
"{rk_dtype} columns. If you wish to proceed "
"you should use pd.concat".format(lk_dtype=lk.dtype,
rk_dtype=rk.dtype))
# if we are numeric, then allow differing
# kinds to proceed, eg. int64 and int8, int and float
# further if we are object, but we infer to
# the same, then proceed
if is_numeric_dtype(lk) and is_numeric_dtype(rk):
if lk.dtype.kind == rk.dtype.kind:
pass
# check whether ints and floats
elif is_integer_dtype(rk) and is_float_dtype(lk):
if not (lk == lk.astype(rk.dtype))[~np.isnan(lk)].all():
warnings.warn('You are merging on int and float '
'columns where the float values '
'are not equal to their int '
'representation', UserWarning)
elif is_float_dtype(rk) and is_integer_dtype(lk):
if not (rk == rk.astype(lk.dtype))[~np.isnan(rk)].all():
warnings.warn('You are merging on int and float '
'columns where the float values '
'are not equal to their int '
'representation', UserWarning)
# let's infer and see if we are ok
elif lib.infer_dtype(lk) == lib.infer_dtype(rk):
pass
# Check if we are trying to merge on obviously
# incompatible dtypes GH 9780, GH 15800
# boolean values are considered as numeric, but are still allowed
# to be merged on object boolean values
elif ((is_numeric_dtype(lk) and not is_bool_dtype(lk))
and not is_numeric_dtype(rk)):
raise ValueError(msg)
elif (not is_numeric_dtype(lk)
and (is_numeric_dtype(rk) and not is_bool_dtype(rk))):
raise ValueError(msg)
elif is_datetimelike(lk) and not is_datetimelike(rk):
raise ValueError(msg)
elif not is_datetimelike(lk) and is_datetimelike(rk):
raise ValueError(msg)
elif is_datetime64tz_dtype(lk) and not is_datetime64tz_dtype(rk):
raise ValueError(msg)
elif not is_datetime64tz_dtype(lk) and is_datetime64tz_dtype(rk):
raise ValueError(msg)
# Houston, we have a problem!
# let's coerce to object if the dtypes aren't
# categorical, otherwise coerce to the category
# dtype. If we coerced categories to object,
# then we would lose type information on some
# columns, and end up trying to merge
# incompatible dtypes. See GH 16900.
else:
if name in self.left.columns:
typ = lk.categories.dtype if lk_is_cat else object
self.left = self.left.assign(
**{name: self.left[name].astype(typ)})
if name in self.right.columns:
typ = rk.categories.dtype if rk_is_cat else object
self.right = self.right.assign(
**{name: self.right[name].astype(typ)})
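# The int/float branch above is observable from the public API: merging an
# integer key against floats that have no exact integer representation still
# proceeds, but emits a UserWarning. A sketch (the frames here are
# illustrative, not from this module):
#
# ```python
import warnings
import pandas as pd

left = pd.DataFrame({'key': [1, 2], 'lval': ['a', 'b']})
right = pd.DataFrame({'key': [1.0, 2.5], 'rval': ['x', 'y']})

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter('always')
    # 2.5 has no exact int representation, so a warning is emitted;
    # only key 1 == 1.0 actually matches in the inner merge
    out = pd.merge(left, right, on='key')
# ```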
def _validate_specification(self):
# Hm, any way to make this logic less complicated??
if self.on is None and self.left_on is None and self.right_on is None:
if self.left_index and self.right_index:
self.left_on, self.right_on = (), ()
elif self.left_index:
if self.right_on is None:
raise MergeError('Must pass right_on or right_index=True')
elif self.right_index:
if self.left_on is None:
raise MergeError('Must pass left_on or left_index=True')
else:
# use the common columns
common_cols = self.left.columns.intersection(
self.right.columns)
if len(common_cols) == 0:
raise MergeError(
'No common columns to perform merge on. '
'Merge options: left_on={lon}, right_on={ron}, '
'left_index={lidx}, right_index={ridx}'
.format(lon=self.left_on, ron=self.right_on,
lidx=self.left_index, ridx=self.right_index))
if not common_cols.is_unique:
raise MergeError("Data columns not unique: {common!r}"
.format(common=common_cols))
self.left_on = self.right_on = common_cols
elif self.on is not None:
if self.left_on is not None or self.right_on is not None:
raise MergeError('Can only pass argument "on" OR "left_on" '
'and "right_on", not a combination of both.')
self.left_on = self.right_on = self.on
elif self.left_on is not None:
n = len(self.left_on)
if self.right_index:
if len(self.left_on) != self.right.index.nlevels:
raise ValueError('len(left_on) must equal the number '
'of levels in the index of "right"')
self.right_on = [None] * n
elif self.right_on is not None:
n = len(self.right_on)
if self.left_index:
if len(self.right_on) != self.left.index.nlevels:
raise ValueError('len(right_on) must equal the number '
'of levels in the index of "left"')
self.left_on = [None] * n
if len(self.right_on) != len(self.left_on):
raise ValueError("len(right_on) must equal len(left_on)")
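# The specification checks above surface as MergeError from the public API;
# for example, passing both 'on' and left_on/right_on is rejected:
#
# ```python
import pandas as pd

left = pd.DataFrame({'k': [1, 2], 'v': [10, 20]})
right = pd.DataFrame({'k': [1, 2], 'w': [30, 40]})

try:
    # 'on' conflicts with left_on/right_on
    pd.merge(left, right, on='k', left_on='k', right_on='k')
    conflict_raised = False
except pd.errors.MergeError:
    conflict_raised = True
# ```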
def _validate(self, validate):
# Check uniqueness of each
if self.left_index:
left_unique = self.orig_left.index.is_unique
else:
left_unique = MultiIndex.from_arrays(self.left_join_keys
).is_unique
if self.right_index:
right_unique = self.orig_right.index.is_unique
else:
right_unique = MultiIndex.from_arrays(self.right_join_keys
).is_unique
# Check data integrity
if validate in ["one_to_one", "1:1"]:
if not left_unique and not right_unique:
raise MergeError("Merge keys are not unique in either left"
" or right dataset; not a one-to-one merge")
elif not left_unique:
raise MergeError("Merge keys are not unique in left dataset;"
" not a one-to-one merge")
elif not right_unique:
raise MergeError("Merge keys are not unique in right dataset;"
" not a one-to-one merge")
elif validate in ["one_to_many", "1:m"]:
if not left_unique:
                raise MergeError("Merge keys are not unique in left dataset;"
                                 " not a one-to-many merge")
elif validate in ["many_to_one", "m:1"]:
if not right_unique:
raise MergeError("Merge keys are not unique in right dataset;"
" not a many-to-one merge")
elif validate in ['many_to_many', 'm:m']:
pass
else:
raise ValueError("Not a valid argument for validate")
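# _validate above powers merge's `validate` argument; a sketch of the
# '1:1' vs 'm:1' checks on duplicated left keys:
#
# ```python
import pandas as pd

left = pd.DataFrame({'key': [1, 2, 2], 'lval': ['a', 'b', 'c']})
right = pd.DataFrame({'key': [1, 2, 3], 'rval': ['x', 'y', 'z']})

# many_to_one passes: the right-hand keys are unique
ok = pd.merge(left, right, on='key', validate='m:1')

# one_to_one fails: the left-hand keys repeat
try:
    pd.merge(left, right, on='key', validate='1:1')
    one_to_one_ok = True
except pd.errors.MergeError:
    one_to_one_ok = False
# ```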
def _get_join_indexers(left_keys, right_keys, sort=False, how='inner',
**kwargs):
"""
Parameters
----------
left_keys: ndarray, Index, Series
right_keys: ndarray, Index, Series
sort: boolean, default False
how: string {'inner', 'outer', 'left', 'right'}, default 'inner'
Returns
-------
tuple of (left_indexer, right_indexer)
indexers into the left_keys, right_keys
"""
from functools import partial
assert len(left_keys) == len(right_keys), \
        'left_keys and right_keys must be the same length'
# bind `sort` arg. of _factorize_keys
fkeys = partial(_factorize_keys, sort=sort)
# get left & right join labels and num. of levels at each location
llab, rlab, shape = map(list, zip(* map(fkeys, left_keys, right_keys)))
# get flat i8 keys from label lists
lkey, rkey = _get_join_keys(llab, rlab, shape, sort)
# factorize keys to a dense i8 space
# `count` is the num. of unique keys
# set(lkey) | set(rkey) == range(count)
lkey, rkey, count = fkeys(lkey, rkey)
# preserve left frame order if how == 'left' and sort == False
kwargs = copy.copy(kwargs)
if how == 'left':
kwargs['sort'] = sort
join_func = _join_functions[how]
return join_func(lkey, rkey, count, **kwargs)
class _OrderedMerge(_MergeOperation):
_merge_type = 'ordered_merge'
def __init__(self, left, right, on=None, left_on=None, right_on=None,
left_index=False, right_index=False, axis=1,
suffixes=('_x', '_y'), copy=True,
fill_method=None, how='outer'):
self.fill_method = fill_method
_MergeOperation.__init__(self, left, right, on=on, left_on=left_on,
left_index=left_index,
right_index=right_index,
right_on=right_on, axis=axis,
how=how, suffixes=suffixes,
sort=True # factorize sorts
)
def get_result(self):
join_index, left_indexer, right_indexer = self._get_join_info()
# this is a bit kludgy
ldata, rdata = self.left._data, self.right._data
lsuf, rsuf = self.suffixes
llabels, rlabels = items_overlap_with_suffix(ldata.items, lsuf,
rdata.items, rsuf)
if self.fill_method == 'ffill':
left_join_indexer = libjoin.ffill_indexer(left_indexer)
right_join_indexer = libjoin.ffill_indexer(right_indexer)
else:
left_join_indexer = left_indexer
right_join_indexer = right_indexer
lindexers = {
1: left_join_indexer} if left_join_indexer is not None else {}
rindexers = {
1: right_join_indexer} if right_join_indexer is not None else {}
result_data = concatenate_block_managers(
[(ldata, lindexers), (rdata, rindexers)],
axes=[llabels.append(rlabels), join_index],
concat_axis=0, copy=self.copy)
typ = self.left._constructor
result = typ(result_data).__finalize__(self, method=self._merge_type)
self._maybe_add_join_keys(result, left_indexer, right_indexer)
return result
def _asof_function(direction, on_type):
name = 'asof_join_{dir}_{on}'.format(dir=direction, on=on_type)
return getattr(libjoin, name, None)
def _asof_by_function(direction, on_type, by_type):
name = 'asof_join_{dir}_{on}_by_{by}'.format(
dir=direction, on=on_type, by=by_type)
return getattr(libjoin, name, None)
_type_casters = {
'int64_t': _ensure_int64,
'double': _ensure_float64,
'object': _ensure_object,
}
_cython_types = {
'uint8': 'uint8_t',
'uint32': 'uint32_t',
'uint16': 'uint16_t',
'uint64': 'uint64_t',
'int8': 'int8_t',
'int32': 'int32_t',
'int16': 'int16_t',
'int64': 'int64_t',
'float16': 'error',
'float32': 'float',
'float64': 'double',
}
def _get_cython_type(dtype):
""" Given a dtype, return a C name like 'int64_t' or 'double' """
type_name = _get_dtype(dtype).name
ctype = _cython_types.get(type_name, 'object')
if ctype == 'error':
raise MergeError('unsupported type: {type}'.format(type=type_name))
return ctype
def _get_cython_type_upcast(dtype):
""" Upcast a dtype to 'int64_t', 'double', or 'object' """
if is_integer_dtype(dtype):
return 'int64_t'
elif is_float_dtype(dtype):
return 'double'
else:
return 'object'
class _AsOfMerge(_OrderedMerge):
_merge_type = 'asof_merge'
def __init__(self, left, right, on=None, left_on=None, right_on=None,
left_index=False, right_index=False,
by=None, left_by=None, right_by=None,
axis=1, suffixes=('_x', '_y'), copy=True,
fill_method=None,
how='asof', tolerance=None,
allow_exact_matches=True,
direction='backward'):
self.by = by
self.left_by = left_by
self.right_by = right_by
self.tolerance = tolerance
self.allow_exact_matches = allow_exact_matches
self.direction = direction
_OrderedMerge.__init__(self, left, right, on=on, left_on=left_on,
right_on=right_on, left_index=left_index,
right_index=right_index, axis=axis,
how=how, suffixes=suffixes,
fill_method=fill_method)
def _validate_specification(self):
super(_AsOfMerge, self)._validate_specification()
        # we only allow a single item for 'on'
if len(self.left_on) != 1 and not self.left_index:
raise MergeError("can only asof on a key for left")
if len(self.right_on) != 1 and not self.right_index:
raise MergeError("can only asof on a key for right")
if self.left_index and isinstance(self.left.index, MultiIndex):
raise MergeError("left can only have one index")
if self.right_index and isinstance(self.right.index, MultiIndex):
raise MergeError("right can only have one index")
# set 'by' columns
if self.by is not None:
if self.left_by is not None or self.right_by is not None:
raise MergeError('Can only pass by OR left_by '
'and right_by')
self.left_by = self.right_by = self.by
if self.left_by is None and self.right_by is not None:
raise MergeError('missing left_by')
if self.left_by is not None and self.right_by is None:
raise MergeError('missing right_by')
# add 'by' to our key-list so we can have it in the
# output as a key
if self.left_by is not None:
if not is_list_like(self.left_by):
self.left_by = [self.left_by]
if not is_list_like(self.right_by):
self.right_by = [self.right_by]
if len(self.left_by) != len(self.right_by):
raise MergeError('left_by and right_by must be same length')
self.left_on = self.left_by + list(self.left_on)
self.right_on = self.right_by + list(self.right_on)
# check 'direction' is valid
if self.direction not in ['backward', 'forward', 'nearest']:
raise MergeError('direction invalid: {direction}'
.format(direction=self.direction))
@property
def _asof_key(self):
""" This is our asof key, the 'on' """
return self.left_on[-1]
def _get_merge_keys(self):
# note this function has side effects
(left_join_keys,
right_join_keys,
join_names) = super(_AsOfMerge, self)._get_merge_keys()
# validate index types are the same
for i, (lk, rk) in enumerate(zip(left_join_keys, right_join_keys)):
if not is_dtype_equal(lk.dtype, rk.dtype):
raise MergeError("incompatible merge keys [{i}] {lkdtype} and "
"{rkdtype}, must be the same type"
.format(i=i, lkdtype=lk.dtype,
rkdtype=rk.dtype))
# validate tolerance; must be a Timedelta if we have a DTI
if self.tolerance is not None:
if self.left_index:
lt = self.left.index
else:
lt = left_join_keys[-1]
msg = ("incompatible tolerance {tolerance}, must be compat "
"with type {lkdtype}".format(
tolerance=type(self.tolerance),
lkdtype=lt.dtype))
if is_datetime64_dtype(lt) or is_datetime64tz_dtype(lt):
if not isinstance(self.tolerance, Timedelta):
raise MergeError(msg)
if self.tolerance < Timedelta(0):
raise MergeError("tolerance must be positive")
elif is_int64_dtype(lt):
if not is_integer(self.tolerance):
raise MergeError(msg)
if self.tolerance < 0:
raise MergeError("tolerance must be positive")
else:
raise MergeError("key must be integer or timestamp")
# validate allow_exact_matches
if not is_bool(self.allow_exact_matches):
msg = "allow_exact_matches must be boolean, passed {passed}"
raise MergeError(msg.format(passed=self.allow_exact_matches))
return left_join_keys, right_join_keys, join_names
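# The tolerance validation above requires an integer tolerance for integer
# keys (and a Timedelta for datetime keys); a sketch of its effect on the
# default backward search, reusing the frames from the merge_asof docstring:
#
# ```python
import math
import pandas as pd

left = pd.DataFrame({'a': [1, 5, 10], 'left_val': ['a', 'b', 'c']})
right = pd.DataFrame({'a': [1, 2, 3, 6, 7], 'right_val': [1, 2, 3, 6, 7]})

# backward search, rejecting matches more than 2 away:
# 1 -> 1 (exact), 5 -> 3 (distance 2), 10 -> no match (7 is distance 3)
out = pd.merge_asof(left, right, on='a', tolerance=2)
# ```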
def _get_join_indexers(self):
""" return the join indexers """
def flip(xs):
""" unlike np.transpose, this returns an array of tuples """
labels = list(string.ascii_lowercase[:len(xs)])
dtypes = [x.dtype for x in xs]
labeled_dtypes = list(zip(labels, dtypes))
return np.array(lzip(*xs), labeled_dtypes)
# values to compare
left_values = (self.left.index.values if self.left_index else
self.left_join_keys[-1])
right_values = (self.right.index.values if self.right_index else
self.right_join_keys[-1])
tolerance = self.tolerance
        # we require sortedness in the join keys
msg = "{side} keys must be sorted"
if not Index(left_values).is_monotonic:
raise ValueError(msg.format(side='left'))
if not Index(right_values).is_monotonic:
raise ValueError(msg.format(side='right'))
# initial type conversion as needed
if needs_i8_conversion(left_values):
left_values = left_values.view('i8')
right_values = right_values.view('i8')
if tolerance is not None:
tolerance = tolerance.value
# a "by" parameter requires special handling
if self.left_by is not None:
# remove 'on' parameter from values if one existed
if self.left_index and self.right_index:
left_by_values = self.left_join_keys
right_by_values = self.right_join_keys
else:
left_by_values = self.left_join_keys[0:-1]
right_by_values = self.right_join_keys[0:-1]
# get tuple representation of values if more than one
if len(left_by_values) == 1:
left_by_values = left_by_values[0]
right_by_values = right_by_values[0]
else:
left_by_values = flip(left_by_values)
right_by_values = flip(right_by_values)
# upcast 'by' parameter because HashTable is limited
by_type = _get_cython_type_upcast(left_by_values.dtype)
by_type_caster = _type_casters[by_type]
left_by_values = by_type_caster(left_by_values)
right_by_values = by_type_caster(right_by_values)
# choose appropriate function by type
on_type = _get_cython_type(left_values.dtype)
func = _asof_by_function(self.direction, on_type, by_type)
return func(left_values,
right_values,
left_by_values,
right_by_values,
self.allow_exact_matches,
tolerance)
else:
# choose appropriate function by type
on_type = _get_cython_type(left_values.dtype)
func = _asof_function(self.direction, on_type)
return func(left_values,
right_values,
self.allow_exact_matches,
tolerance)
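The sorted-key requirement, `tolerance`, and `allow_exact_matches` handled in `_get_join_indexers` surface through the public `pandas.merge_asof` API; a minimal sketch with toy integer keys (values are illustrative):

```python
import numpy as np
import pandas as pd

# both 'on' keys must be monotonically sorted, as enforced above
left = pd.DataFrame({'t': [1, 5, 10], 'lv': ['a', 'b', 'c']})
right = pd.DataFrame({'t': [1, 2, 3, 6, 7], 'rv': [10, 20, 30, 60, 70]})
# backward search: last right row with t <= left t, within tolerance 2
out = pd.merge_asof(left, right, on='t', direction='backward',
                    allow_exact_matches=True, tolerance=2)
# t=1 matches exactly (rv=10), t=5 matches t=3 (rv=30, diff 2 <= tol),
# t=10 has nearest t=7 at distance 3 > tol, so rv is NaN
```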
def _get_multiindex_indexer(join_keys, index, sort):
from functools import partial
# bind `sort` argument
fkeys = partial(_factorize_keys, sort=sort)
# left & right join labels and num. of levels at each location
rlab, llab, shape = map(list, zip(* map(fkeys, index.levels, join_keys)))
if sort:
rlab = list(map(np.take, rlab, index.labels))
else:
i8copy = lambda a: a.astype('i8', subok=False, copy=True)
rlab = list(map(i8copy, index.labels))
# fix right labels if there were any nulls
for i in range(len(join_keys)):
mask = index.labels[i] == -1
if mask.any():
# check if there were already any nulls at this location
# if there was, it is factorized to `shape[i] - 1`
a = join_keys[i][llab[i] == shape[i] - 1]
if a.size == 0 or not a[0] != a[0]:
shape[i] += 1
rlab[i][mask] = shape[i] - 1
# get flat i8 join keys
lkey, rkey = _get_join_keys(llab, rlab, shape, sort)
# factorize keys to a dense i8 space
lkey, rkey, count = fkeys(lkey, rkey)
return libjoin.left_outer_join(lkey, rkey, count, sort=sort)
def _get_single_indexer(join_key, index, sort=False):
left_key, right_key, count = _factorize_keys(join_key, index, sort=sort)
left_indexer, right_indexer = libjoin.left_outer_join(
_ensure_int64(left_key),
_ensure_int64(right_key),
count, sort=sort)
return left_indexer, right_indexer
def _left_join_on_index(left_ax, right_ax, join_keys, sort=False):
if len(join_keys) > 1:
if not ((isinstance(right_ax, MultiIndex) and
len(join_keys) == right_ax.nlevels)):
raise AssertionError("If more than one join key is given then "
"'right_ax' must be a MultiIndex and the "
"number of join keys must be the number of "
"levels in right_ax")
left_indexer, right_indexer = \
_get_multiindex_indexer(join_keys, right_ax, sort=sort)
else:
jkey = join_keys[0]
left_indexer, right_indexer = \
_get_single_indexer(jkey, right_ax, sort=sort)
if sort or len(left_ax) != len(left_indexer):
# if asked to sort or there are 1-to-many matches
join_index = left_ax.take(left_indexer)
return join_index, left_indexer, right_indexer
# left frame preserves order & length of its index
return left_ax, None, right_indexer
def _right_outer_join(x, y, max_groups):
right_indexer, left_indexer = libjoin.left_outer_join(y, x, max_groups)
return left_indexer, right_indexer
_join_functions = {
'inner': libjoin.inner_join,
'left': libjoin.left_outer_join,
'right': _right_outer_join,
'outer': libjoin.full_outer_join,
}
def _factorize_keys(lk, rk, sort=True):
if is_datetime64tz_dtype(lk) and is_datetime64tz_dtype(rk):
lk = lk.values
rk = rk.values
# if we exactly match in categories, allow us to factorize on codes
if (is_categorical_dtype(lk) and
is_categorical_dtype(rk) and
lk.is_dtype_equal(rk)):
klass = libhashtable.Int64Factorizer
if lk.categories.equals(rk.categories):
rk = rk.codes
else:
# Same categories in different orders -> recode
rk = _recode_for_categories(rk.codes, rk.categories, lk.categories)
lk = _ensure_int64(lk.codes)
rk = _ensure_int64(rk)
elif is_int_or_datetime_dtype(lk) and is_int_or_datetime_dtype(rk):
klass = libhashtable.Int64Factorizer
lk = _ensure_int64(com._values_from_object(lk))
rk = _ensure_int64(com._values_from_object(rk))
else:
klass = libhashtable.Factorizer
lk = _ensure_object(lk)
rk = _ensure_object(rk)
rizer = klass(max(len(lk), len(rk)))
llab = rizer.factorize(lk)
rlab = rizer.factorize(rk)
count = rizer.get_count()
if sort:
uniques = rizer.uniques.to_array()
llab, rlab = _sort_labels(uniques, llab, rlab)
# NA group
lmask = llab == -1
lany = lmask.any()
rmask = rlab == -1
rany = rmask.any()
if lany or rany:
if lany:
np.putmask(llab, lmask, count)
if rany:
np.putmask(rlab, rmask, count)
count += 1
return llab, rlab, count
def _sort_labels(uniques, left, right):
if not isinstance(uniques, np.ndarray):
# tuplesafe
uniques = Index(uniques).values
llength = len(left)
labels = np.concatenate([left, right])
_, new_labels = sorting.safe_sort(uniques, labels, na_sentinel=-1)
new_labels = _ensure_int64(new_labels)
new_left, new_right = new_labels[:llength], new_labels[llength:]
return new_left, new_right
def _get_join_keys(llab, rlab, shape, sort):
# how many levels can be done without overflow
pred = lambda i: not is_int64_overflow_possible(shape[:i])
nlev = next(filter(pred, range(len(shape), 0, -1)))
# get keys for the first `nlev` levels
stride = np.prod(shape[1:nlev], dtype='i8')
lkey = stride * llab[0].astype('i8', subok=False, copy=False)
rkey = stride * rlab[0].astype('i8', subok=False, copy=False)
for i in range(1, nlev):
with np.errstate(divide='ignore'):
stride //= shape[i]
lkey += llab[i] * stride
rkey += rlab[i] * stride
if nlev == len(shape): # all done!
return lkey, rkey
# densify current keys to avoid overflow
lkey, rkey, count = _factorize_keys(lkey, rkey, sort=sort)
llab = [lkey] + llab[nlev:]
rlab = [rkey] + rlab[nlev:]
shape = [count] + shape[nlev:]
return _get_join_keys(llab, rlab, shape, sort)
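`_get_join_keys` collapses multi-level labels into a single int64 key via a mixed-radix (stride) encoding before recursing; a small illustration of why distinct label tuples map to distinct flat keys (toy values, not pandas internals):

```python
import numpy as np

shape = (3, 4)                 # number of factorized label values per level
l0 = np.array([0, 1, 2, 2])    # labels for the first level
l1 = np.array([3, 0, 1, 3])    # labels for the second level
stride = np.prod(shape[1:], dtype='i8')   # size of the lower level: 4
key = l0.astype('i8') * stride + l1       # flat key, like lkey/rkey above
# each tuple (l0, l1) gets a unique integer in [0, 3*4)
```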
def _should_fill(lname, rname):
if (not isinstance(lname, compat.string_types) or
not isinstance(rname, compat.string_types)):
return True
return lname == rname
def _any(x):
return x is not None and com._any_not_none(*x)
| bsd-3-clause |
ppries/tensorflow | tensorflow/contrib/learn/python/learn/learn_io/pandas_io_test.py | 11 | 2404 | # Copyright 2015 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for pandas_io."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
import tensorflow as tf
from tensorflow.contrib.learn.python.learn.learn_io import pandas_io
from tensorflow.python.framework import errors
# pylint: disable=g-import-not-at-top
try:
import pandas as pd
HAS_PANDAS = True
except ImportError:
HAS_PANDAS = False
class PandasIoTest(tf.test.TestCase):
def testPandasInputFn(self):
if not HAS_PANDAS:
return
index = np.arange(100, 104)
a = np.arange(4)
b = np.arange(32, 36)
x = pd.DataFrame({'a': a, 'b': b}, index=index)
y_noindex = pd.Series(np.arange(-32, -28))
y = pd.Series(np.arange(-32, -28), index=index)
with self.test_session() as session:
with self.assertRaises(ValueError):
failing_input_fn = pandas_io.pandas_input_fn(
x, y_noindex, batch_size=2, shuffle=False, num_epochs=1)
failing_input_fn()
input_fn = pandas_io.pandas_input_fn(
x, y, batch_size=2, shuffle=False, num_epochs=1)
features, target = input_fn()
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(session, coord=coord)
res = session.run([features, target])
self.assertAllEqual(res[0]['index'], [100, 101])
self.assertAllEqual(res[0]['a'], [0, 1])
self.assertAllEqual(res[0]['b'], [32, 33])
self.assertAllEqual(res[1], [-32, -31])
session.run([features, target])
with self.assertRaises(errors.OutOfRangeError):
session.run([features, target])
coord.request_stop()
coord.join(threads)
if __name__ == '__main__':
tf.test.main()
| apache-2.0 |
Garrett-R/scikit-learn | sklearn/datasets/tests/test_20news.py | 42 | 2416 | """Test the 20news downloader, if the data is available."""
import numpy as np
import scipy.sparse as sp
from sklearn.utils.testing import assert_equal
from sklearn.utils.testing import assert_true
from sklearn.utils.testing import SkipTest
from sklearn import datasets
def test_20news():
try:
data = datasets.fetch_20newsgroups(
subset='all', download_if_missing=False, shuffle=False)
except IOError:
raise SkipTest("Download 20 newsgroups to run this test")
# Extract a reduced dataset
data2cats = datasets.fetch_20newsgroups(
subset='all', categories=data.target_names[-1:-3:-1], shuffle=False)
# Check that the ordering of the target_names is the same
# as the ordering in the full dataset
assert_equal(data2cats.target_names,
data.target_names[-2:])
# Assert that we have only 0 and 1 as labels
assert_equal(np.unique(data2cats.target).tolist(), [0, 1])
# Check that the number of filenames is consistent with data/target
assert_equal(len(data2cats.filenames), len(data2cats.target))
assert_equal(len(data2cats.filenames), len(data2cats.data))
# Check that the first entry of the reduced dataset corresponds to
# the first entry of the corresponding category in the full dataset
entry1 = data2cats.data[0]
category = data2cats.target_names[data2cats.target[0]]
label = data.target_names.index(category)
entry2 = data.data[np.where(data.target == label)[0][0]]
assert_equal(entry1, entry2)
def test_20news_vectorized():
# This test is slow.
raise SkipTest("Test too slow.")
bunch = datasets.fetch_20newsgroups_vectorized(subset="train")
assert_true(sp.isspmatrix_csr(bunch.data))
assert_equal(bunch.data.shape, (11314, 107428))
assert_equal(bunch.target.shape[0], 11314)
assert_equal(bunch.data.dtype, np.float64)
bunch = datasets.fetch_20newsgroups_vectorized(subset="test")
assert_true(sp.isspmatrix_csr(bunch.data))
assert_equal(bunch.data.shape, (7532, 107428))
assert_equal(bunch.target.shape[0], 7532)
assert_equal(bunch.data.dtype, np.float64)
bunch = datasets.fetch_20newsgroups_vectorized(subset="all")
assert_true(sp.isspmatrix_csr(bunch.data))
assert_equal(bunch.data.shape, (11314 + 7532, 107428))
assert_equal(bunch.target.shape[0], 11314 + 7532)
assert_equal(bunch.data.dtype, np.float64)
| bsd-3-clause |
pminder/ksvd | Test/demo_ksvd.py | 1 | 2021 | #coding:utf8
"""Run very simple tests for ksvd algorithm"""
import random
import imp
ksvd = imp.load_source('ksvd', '../Source/ksvd.py')
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import orthogonal_mp
from skimage.draw import circle_perimeter, ellipse_perimeter, polygon, line
################# COLLECTION OF SHAPES #######################
cercle = np.zeros((10, 10), np.uint8)
cercle[circle_perimeter(4, 4, 3)] = 1
ellipse = np.zeros((10, 10), np.uint8)
ellipse[ellipse_perimeter(4, 4, 3, 5)] = 1
square = np.zeros((10, 10), np.uint8)
square[polygon(np.array([1, 4, 4, 1]), np.array([1, 1, 4, 4]))] = 1
dline = np.zeros((10, 10), np.uint8)
dline[line(1, 1, 8, 8)] = 1
shapes = [cercle, ellipse, square, dline]
################ GENERATING DICTIONARY ####################
D = np.zeros((100, 4))
for i in range(len(shapes)):
D[:,i] = shapes[i].ravel()
D[:,i] = D[:,i]/np.linalg.norm(D[:,i])
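The loop above rescales every dictionary atom to unit L2 norm, which K-SVD assumes; the same normalization can be written (and checked) in one vectorized step — a sketch using random stand-in atoms:

```python
import numpy as np

atoms = np.random.rand(100, 4) + 0.1   # stand-in for the flattened shape vectors
atoms = atoms / np.linalg.norm(atoms, axis=0, keepdims=True)
norms = np.linalg.norm(atoms, axis=0)  # every column should now have norm 1.0
```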
#################### RANDOM IMAGES #######################
def generate_img(i):
"""Generates combination of at most i shapes from D dictionnary"""
output = np.zeros(100)
for _ in range(i):
output += D[:,random.randint(0, len(shapes)-1)]
return output
X = []
for _ in range(5000):
X.append(generate_img(2))
X = np.array(X).T
####################### TEST KSVD #########################
model = ksvd.KSVD((100, 4), K = 2, precompute = True)
model.fit(X)
gamma = model.sparse_rep(X)
plt.subplot(2, 2, 1)
plt.imshow(model.D[:,0].reshape((10,10)), cmap = 'gray')
plt.subplot(2, 2, 2)
plt.imshow(model.D[:,1].reshape((10,10)), cmap = 'gray')
plt.subplot(2, 2, 3)
plt.imshow(model.D[:,2].reshape((10,10)), cmap = 'gray')
plt.subplot(2, 2, 4)
plt.imshow(model.D[:,3].reshape((10,10)), cmap = 'gray')
plt.show()
for i in range(5):
plt.subplot(5, 2, i*2 + 1)
plt.imshow(X[:,i].reshape((10,10)), cmap = 'gray')
plt.subplot(5, 2, i*2 + 2)
plt.imshow(model.D.dot(gamma[:,i]).reshape((10,10)), cmap = 'gray')
plt.show()
| mit |
aerler/HGS-Tools | Python/enkf_utils/enkf_input.py | 1 | 26739 | '''
Created on Jan 1, 2018
A collection of functions to generate EnKF input files.
@author: Andre R. Erler, GPL v3
'''
# imports
import os, yaml
import numpy as np
import pandas as pd
from glob import glob
from collections import OrderedDict
from scipy import stats as ss
from collections import namedtuple
# internal/local imports
from hgs_output import binary
from geodata.misc import ArgumentError, isNumber
# Graham's package to read HGS binary data: https://github.com/Aquanty/hgs_output
# This package requires Cython-compiled code; on Windows the easiest way to get
# this to work is via a Wheel installation file, which can be obtained here:
# \\AQUANTY-NAS\share\resources\scripts\Python\hgs_output (inside Aquanty LAN)
## helper functions
def variableScale(start, stop, nreal, NP):
''' generate scale factors for each realization and distribute evenly '''
npp = nreal/NP
pre_scale = np.linspace(start=start, stop=stop, num=nreal)
var_scale = np.zeros_like(pre_scale)
j = 0; npp_ = (len(pre_scale)-npp*NP) # need counter, because npp can be irregular
# reorder items so that each processor gets a similar range
for i in range(NP):
npp1 = npp+1 if i < npp_ else npp
for n in range(npp1):
var_scale[j] = pre_scale[n*NP+i]
j += 1 # count up
# check correctness
assert j==nreal, (j,nreal)
assert var_scale.max() == pre_scale.max(), (var_scale.max(),pre_scale.max())
assert var_scale.min() == pre_scale.min(), (var_scale.min(),pre_scale.min())
assert np.allclose(var_scale.mean(), pre_scale.mean()), (var_scale.mean(),pre_scale.mean())
assert np.allclose(var_scale.std(), pre_scale.std()), (var_scale.std(), pre_scale.std())
# return
return var_scale
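The reordering in `variableScale` is a round-robin deal of the `linspace` values, so each of the `NP` contiguous per-processor blocks samples the full `[start, stop]` range; an equivalent standalone sketch with a small, checkable case:

```python
import numpy as np

def interleave_scales(start, stop, nreal, NP):
    """Deal linspace values round-robin across NP processor blocks."""
    pre = np.linspace(start, stop, nreal)
    out = np.empty_like(pre)
    npp = nreal // NP             # base block size per processor
    extra = nreal - npp * NP      # first `extra` processors get one more value
    j = 0
    for i in range(NP):
        for n in range(npp + 1 if i < extra else npp):
            out[j] = pre[n * NP + i]
            j += 1
    return out

scales = interleave_scales(1.0, 2.0, 6, 2)
# processor 0 gets [1.0, 1.4, 1.8], processor 1 gets [1.2, 1.6, 2.0]
```

Sorting the result recovers the original `linspace`, confirming the reordering is a pure permutation.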
def queryKister(url=None, output=None, ts_id=None, period=None, **kwargs):
''' retrieve a Kister csv file for given period from web server '''
import requests
# parse period
if isinstance(period, (list,tuple)):
from_date = period[0]; to_date = period[1]
else:
from_date = period; to_date = None
# set parameter values
params = {'service':'kisters', 'type':'queryServices', 'request':'getTimeseriesValues', 'datasource':0,
'format':'csv', 'metadata':True, 'ts_id':ts_id, 'from':from_date, 'to':to_date}
params.update(kwargs)
# issue request
r = requests.get(url, params=params)
r.raise_for_status() # raise an exception if an error occurred
# write contents to file
with open(output, 'w') as f: f.write(r.text)
# return request object
return r
def readKister(filepath=None, period=None, resample='1D', missing=None, bias=None, comment='#',
header=0, separator=';', name='value', lpad=True, lvalues=True, outliers=None):
''' read a Kister csv file and slice and resample timeseries '''
df = pd.read_csv(filepath, header=header, sep=separator, comment=comment,
index_col=0, parse_dates=True, names=('time',name))
# slice
if period:
begin,end = pd.to_datetime(period[0]),pd.to_datetime(period[1])
df = df[begin:end]
if resample:
df = df.resample(resample).mean()
if period and resample and lpad:
# extend time axis/index, if necessary, and pad with missing values
df = df.reindex(pd.date_range(begin,end, freq=resample))
if outliers is not None:
# remove values that are more than 'outliers' x standard deviation away from the mean
df[( ( df[name] - df[name].mean() ) / df[name].std() ).abs() > outliers] = np.NaN
if bias is not None:
df += bias
if missing:
df[np.isnan(df)] = missing
if lvalues: data = df.values.squeeze()
else: data = df
# return data as pandas dataframe or as numpy array
return data
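The slice/resample/pad pipeline inside `readKister` can be exercised on an in-memory CSV (data and dates are illustrative; the `value` column name and daily frequency mirror the defaults):

```python
import io
import math
import pandas as pd

csv = io.StringIO("time;value\n"
                  "2017-05-01 00:00;1.0\n"
                  "2017-05-01 12:00;3.0\n"
                  "2017-05-03 06:00;5.0\n")
df = pd.read_csv(csv, header=0, sep=';', index_col=0,
                 parse_dates=True, names=('time', 'value'))
df = df['2017-05-01':'2017-05-04']   # slice to the requested period
df = df.resample('1D').mean()        # day 1 -> (1.0 + 3.0) / 2 = 2.0
# pad the index out to the full period with missing values, as lpad does
df = df.reindex(pd.date_range('2017-05-01', '2017-05-04', freq='1D'))
vals = df['value'].tolist()          # [2.0, nan, 5.0, nan]
```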
## functions to write EnKF input files
def writeEnKFini(enkf_folder=None, prefix=None, input_folders=None, glob_pattern='????', lfeedback=True):
''' loop over PM and OLF files using some conventions to write IC files for EnKF '''
if isinstance(input_folders,str): input_folders = [input_folders]
if not os.path.exists(enkf_folder): raise IOError(enkf_folder)
prefixo = prefix + 'o'
# loop over OM and OLF
pm_file = os.path.join(enkf_folder,'inihead.dat'); pm_data = []
olf_file = os.path.join(enkf_folder,'iniheadolf.dat'); olf_data = []
## load data
# loop over folders and timesteps
npm = None; nolf = None; sub_filelists = []
for folder in input_folders:
glob_path = os.path.join(folder,prefix+'o.head_pm.'+glob_pattern)
sub_filelist = glob(glob_path)
if not sub_filelist:
raise IOError(glob_path)
sub_filelists.append(sub_filelist)
# interleave filelists to achieve an even distribution
filelist = []
for files in zip(*sub_filelists):
for single_file in files: filelist.append(single_file)
# loop over file list and load data
for ic_file in filelist:
idx = int(ic_file[-4:]) # get index number
reader = binary.IO(prefixo,os.path.dirname(ic_file),idx)
# extract data and validate PM data
coords_pm = reader.read_coordinates_pm()
tmp = coords_pm.shape[0]
if npm is None: npm = tmp
elif npm != tmp:
raise ValueError("Total number of nodes does not match in input files: {} != {}".format(npm,tmp))
head_pm = reader.read_var("head_pm", npm)
pm_data.append(head_pm.values)
# extract data and validate OLF data
tmp = reader.read_coordinates_olf(coords_pm).shape[0]
if nolf is None: nolf = tmp
elif nolf != tmp:
raise ValueError("Total number of nodes does not match in input files: {} != {}".format(nolf,tmp))
head_olf = reader.read_var("head_olf", nolf)
olf_data.append(head_olf.values)
# read number of elements for printing later
if lfeedback:
nepm = len(reader.read_elements('pm'))
neolf = len(reader.read_elements('olf'))
# print number of elements
if lfeedback:
print(("Number of PM elements: {}".format(nepm)))
print(("Number of OLF elements: {}".format(neolf)))
print('')
# assemble data into arrays and transpose
# N.B.: in the EnKF IC file the rows are nodes and the columns are realisations
pm_data = np.stack(pm_data).squeeze().transpose()
assert pm_data.shape[0] == npm, pm_data.shape
if lfeedback: print(("Number of PM nodes: {}".format(npm)))
olf_data = np.stack(olf_data).squeeze().transpose()
assert olf_data.shape[0] == nolf, olf_data.shape
if lfeedback: print(("Number of OLF nodes: {}".format(nolf)))
assert olf_data.shape[1] == pm_data.shape[1]
nreal = pm_data.shape[1]
if lfeedback: print(("Number of realizations: {}".format(nreal)))
## write output files
fmt = [' %18.0f ']+[' %.18f ']*nreal # node number (first) should be read as integer!
if lfeedback: print('')
pm_table = np.concatenate([np.arange(1,npm+1).reshape((npm,1)),pm_data], axis=1)
# N.B.: we have to add a column with the node numbers
np.savetxt(pm_file, pm_table, fmt=fmt)
if lfeedback: print(("Wrote PM IC data to file:\n '{}'.".format(pm_file)))
olf_table = np.concatenate([np.arange(1,nolf+1).reshape((nolf,1)),olf_data], axis=1)
# N.B.: we have to add a column with the node numbers
# N.B.: log10-transform is only done for hydraulic conductivities
np.savetxt(olf_file, olf_table, fmt=fmt)
if lfeedback: print(("Wrote OLF IC data to file:\n '{}'.".format(olf_file)))
# return file names
return pm_file, olf_file, nreal
def writeEnKFbdy(enkf_folder=None, bdy_files=None, filename='flux_bc.dat', mode='deterministic',
scalefactors=None, noisefactors=None, intermittency=None, nreal=None, lfeedback=True):
''' read flux boundary conditions from HGS/Grok .inc files and write an EnKF boundary condition file '''
if isinstance(bdy_files, dict): bdy_files = OrderedDict(bdy_files)
else: raise TypeError(bdy_files)
if scalefactors is None: scalefactors = dict()
elif not isinstance(scalefactors,dict): raise TypeError(scalefactors)
if noisefactors is None: noisefactors = dict()
elif not isinstance(noisefactors,dict): raise TypeError(noisefactors)
if intermittency is None: intermittency = dict()
elif not isinstance(intermittency,dict): raise TypeError(intermittency)
if not os.path.exists(enkf_folder): raise IOError(enkf_folder)
filepath = os.path.join(enkf_folder,filename) # assemble complete path or trunk
nbdy = len(bdy_files)
# read boundary flux data
bdy_data = None
for i,bdy_file in enumerate(bdy_files.values()):
data = np.loadtxt(bdy_file,)
assert data.shape[1] == 2, data.shape
if bdy_data is None: bdy_data = np.zeros((data.shape[0],nbdy))
bdy_data[:,i] = data[:,1]
ntime = bdy_data.shape[0]
# write boundary file(s)
header = [str(nbdy)] + list(bdy_files.keys()) # assemble header
if lfeedback:
print(("Number of flux boundary conditions: {}".format(nbdy)))
for head in header[1:]: print(head)
print(("Number of time steps: {}".format(ntime)))
header = [line+'\n' for line in header] # add line breaks
fmt = ' '.join(['{:18e}']*nbdy)+'\n' # line format
# there are two modes: deterministic and stochastic
if mode.lower() == 'deterministic':
if lfeedback: print("\nWriting 'deterministic' boundary conditions to single file.")
# all ensemble members get the same input
# open file and write header
with open(filepath, 'w') as f:
f.writelines(header)
# loop over actual values
for i in range(ntime):
f.write(fmt.format(*bdy_data[i,:]))
if lfeedback: print(("\nWrote flux boundary condition data to file:\n '{}'".format(filepath)))
filelist = filepath
elif mode.lower() == 'stochastic':
# every ensemble member gets a different input
if lfeedback: print("\nWriting 'stochastic' boundary conditions, one file per timestep:")
if nreal is None: raise ValueError(nreal)
# variable-dependent randomization
bdy_scale = []; bdy_noise = []; bdy_nooccurence = []
for bdy_file in bdy_files.keys():
if bdy_file in scalefactors: bdy_scale.append(scalefactors[bdy_file])
else: raise ValueError(bdy_file)
if bdy_file in noisefactors: bdy_noise.append(noisefactors[bdy_file])
else: raise ValueError(bdy_file)
if bdy_file in intermittency: bdy_nooccurence.append(intermittency[bdy_file])
else: bdy_nooccurence.append(1.) # always occur
# prepare constant, realization-dependent scale factors
bdy_scales = []
for bs in bdy_scale:
if bs is None: bdy_scales.append(np.linspace(start=1., stop=1., num=nreal))
elif len(bs) == 2: bdy_scales.append(np.linspace(start=bs[0], stop=bs[1], num=nreal))
else: bdy_scales.append(bs)
bdy_scale = np.stack(bdy_scales, axis=1)
assert bdy_scale.shape == (nreal,nbdy), bdy_scale.shape
# prepare random noise
bdy_noise = np.asarray(bdy_noise).reshape((1,nbdy)).repeat(nreal, axis=0)
bf_1 = 1-bdy_noise; bf_2 = 2*bdy_noise # shortcuts used below
# prepare random intermittency
# compute actual occurrence
actual_occurence = ( bdy_data > 0 ).sum(axis=0, dtype=bdy_data.dtype) / ntime
if lfeedback:
print(('Actual Occurence: {}'.format(actual_occurence)))
actual_occurence = actual_occurence.reshape((1,nbdy)).repeat(nreal, axis=0)
# probability of no occurrence, given actual occurrence
bdy_nooccurence = np.asarray(bdy_nooccurence).reshape((1,nbdy)).repeat(nreal, axis=0)
if lfeedback:
print(('No Occurence: {}'.format(bdy_nooccurence.mean(axis=0))))
# probability of occurrence, given no actual occurrence
bdy_occurence = actual_occurence * bdy_nooccurence / ( 1. - actual_occurence )
if lfeedback:
print(('New Occurence: {}'.format(bdy_occurence.mean(axis=0))))
# parameters for random values
mean = bdy_data.mean(axis=0)
std = bdy_data.std(axis=0)
# loop over timesteps
filetrunk = filepath; filelist = []
for i in range(ntime):
filepath = filetrunk + '.{:05d}'.format(i+1)
# prepare data
scalefactor = np.random.ranf((nreal,nbdy))*bf_2 + bf_1 # uniform random distribution
rnd_data = bdy_data[i,:].reshape((1,nbdy)).repeat(nreal, axis=0) * scalefactor
#random_occurence = fake_data[i,:].reshape((1,nbdy)).repeat(nreal, axis=0) * scalefactor
fake_data = [ss.expon.rvs(loc=mean[i], scale=std[i], size=nreal) for i in range(nbdy)]
random_occurence = np.stack(fake_data, axis=1)
assert random_occurence.shape == (nreal,nbdy), random_occurence.shape
# make random occurrences
lsetZero = np.logical_and( rnd_data > 0, np.random.ranf((nreal,nbdy)) < bdy_nooccurence )
lcreateNew = np.logical_and( rnd_data == 0, np.random.ranf((nreal,nbdy)) < bdy_occurence )
rnd_data[lsetZero] = 0
rnd_data = np.where(lcreateNew, random_occurence, rnd_data)
# apply constant scale factors
rnd_data *= bdy_scale
# open file and write header
with open(filepath, 'w') as f:
f.writelines(header)
# loop over actual values
for j in range(nreal):
f.write(fmt.format(*rnd_data[j,:]))
if lfeedback: print((" '{}'".format(filepath)))
else:
raise ValueError(mode)
# return filepath (or list of files)
return filelist
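The intermittency math in the stochastic branch is constructed so that dropping wet timesteps (probability `bdy_nooccurence`) and wetting dry ones (the derived `bdy_occurence`) leaves the expected wet fraction unchanged; the identity can be checked numerically (probabilities below are illustrative):

```python
import numpy as np

p_wet = np.array([0.2, 0.35, 0.5])    # actual fraction of wet timesteps
p_drop = 0.3                          # probability of zeroing a wet timestep
# probability of wetting a dry timestep, as derived in writeEnKFbdy
p_create = p_wet * p_drop / (1.0 - p_wet)
# expected wet fraction after both random operations
expected = p_wet * (1.0 - p_drop) + (1.0 - p_wet) * p_create
# algebraically: p_wet*(1-p_drop) + p_wet*p_drop = p_wet
```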
def writeEnKFobs(enkf_folder=None, obs_wells=None, filename='obs_head.dat', lfeedback=True,
yaml_file='obs_meta.yaml', lYAML=True):
''' write an EnKF observation file with node number, observation error and time-series '''
if not isinstance(obs_wells, (list,tuple)): raise TypeError(obs_wells)
if not os.path.exists(enkf_folder): raise IOError(enkf_folder)
filepath = os.path.join(enkf_folder,filename) # assemble complete path or trunk
# prepare header
header = ''; nobs = len(obs_wells); ntime = 0
print(("Number of boundary conditions: {}".format(nobs)))
for i,obs in enumerate(obs_wells):
if 'error' in obs: error = obs['error']
else: raise ValueError(obs)
header += '{:5d} {:8d} {:18f}\n'.format(i+1, obs['node'], error**2)
ntime = max(ntime,len(obs['data']))
if lfeedback: print(header)
# assemble time series
data = np.stack([obs['data'] for obs in obs_wells], axis=1)
assert data.shape == (ntime,nobs), data.shape
print(("Number of time steps: {}".format(ntime)))
# write to file
with open(filepath, 'w') as f:
f.write(header)
np.savetxt(f, data, delimiter=' ', fmt=' %.18f ')
if lfeedback: print(("\nWrote observation well data to file:\n '{}'".format(filepath)))
# write obs meta data
if lYAML:
# remove actual binary data from dictionary (only save meta data)
obs_meta = []
for obs in obs_wells:
meta = obs.copy()
meta['data'] = filepath
obs_meta.append(meta)
# write YAML file
yaml_path = os.path.join(enkf_folder,yaml_file)
with open(yaml_path, 'w') as yf:
yaml.dump(obs_meta, yf)
if lfeedback: print(("Also wrote well meta data to YAML file:\n '{}'".format(yaml_path)))
# return filepath
return filepath
if __name__ == '__main__':
## settings
# available range
folder = 'D:/Data/HGS/SNW/EnKF/TWC/enkf_may/' # folder where files are written
date_range = ('2017-05-01', '2018-01-31', '1D') # date range for files
# glob_pattern = '00[01]?'; nreal = 40 # first 20 x 2
glob_pattern = '00[012][0-7]'; nreal = 96 # first 30 x 2
# just december
# folder = 'D:/Data/HGS/SNW/EnKF/TWC/enkf_december/' # folder where files a written
# date_range = ('2017-12-01', '2017-12-31', '1D') # date range for files
# # glob_pattern = '021?' # output timesteps to use for initial conditions; 0215 is Dec. 1st
# glob_pattern = '02??' # output timesteps to use for initial conditions; 0215 is Dec. 1st
# just november and december
# folder = 'D:/Data/HGS/SNW/EnKF/TWC/enkf_november/' # folder where files a written
# date_range = ('2017-11-01', '2017-12-31', '1D') # date range for files
# glob_pattern = '01[789]?' # output timesteps to use for initial conditions; 0215 is Dec. 1st
# work folder setup
if not os.path.exists(folder): os.mkdir(folder)
input_folder = 'input_data/'
enkf_folder = os.path.join(folder,input_folder)
if not os.path.exists(enkf_folder): os.mkdir(enkf_folder)
# execution tasks
tasks = []
# tasks += ['test_query_kister']
# tasks += ['test_read_kister']
# tasks += ['write_ic_file']
tasks += ['write_bdy_file']
# tasks += ['retrieve_kister']
# tasks += ['write_obs_file']
if 'test_query_kister' in tasks:
# test file
ts_id = 34829042 # hydrograph at Berwick
url = 'http://waterdata.quinteconservation.ca/KiWIS/KiWIS'
csv_file = 'D:/Data/HGS/SNW/EnKF/Kister/test.csv'
time_sampling = ('2017-05-01', '2018-01-31', '1D')
# query data
content = queryKister(url=url, output=csv_file, ts_id=ts_id, period=time_sampling[:2])
# load data
data = readKister(filepath=csv_file, period=time_sampling[:2], resample=time_sampling[2])
# test
datelist = pd.date_range(pd.to_datetime(time_sampling[0]), pd.to_datetime(time_sampling[1]),
freq=time_sampling[2])
assert len(datelist) == len(data), (data.shape,len(datelist))
print((data.shape,len(datelist)))
print('\n===\n')
if 'test_read_kister' in tasks:
# test file
csv_file = 'D:/Data/HGS/SNW/EnKF/Kister/W268-1.csv'
time_sampling = ('2017-05-01', '2017-12-31', '1D')
# load data
data = readKister(filepath=csv_file, period=time_sampling[:2], resample=time_sampling[2])
# test
datelist = pd.date_range(pd.to_datetime(time_sampling[0]), pd.to_datetime(time_sampling[1]),
freq=time_sampling[2])
assert len(datelist) == len(data), (data.shape,len(datelist))
print((data.shape,len(datelist)))
print('\n===\n')
if 'write_ic_file' in tasks:
# definitions
prefix = 'prw'
input_folders =['D:/Data/HGS/SNW/EnKF/TWC/open_value/',
'D:/Data/HGS/SNW/EnKF/TWC/open_raster/',
'D:/Data/HGS/SNW/EnKF/TWC/open_raster_geog/',
'D:/Data/HGS/SNW/EnKF/TWC/open_value_geog/']
#enkf_folder = 'D:/Data/HGS/SNW/EnKF/TWC/enkf_test/input_deterministic/'
# create input files
pm_file, olf_file, nr = writeEnKFini(enkf_folder=enkf_folder, prefix=prefix,
input_folders=input_folders, glob_pattern=glob_pattern)
if nreal != nr:
raise ValueError("Selected number of realizations {} does not match number on input files {}!".format(nreal,nr))
if not os.path.exists(pm_file): raise IOError(pm_file)
if not os.path.exists(olf_file): raise IOError(olf_file)
# create dummy file for initial K values (not read, but still needed)
open(os.path.join(enkf_folder,'k_dummy.dat'), 'w').close()
print('\n===\n')
if 'write_bdy_file' in tasks:
# definitions
bdy_files = {'precip.inc': os.path.join(folder,'precip_values.inc'),
'pet.inc' : os.path.join(folder,'pet_values.inc'),}
# construct pet scalefactor in a way to distribute most efficiently
NP = 24 # number of processors for MPI
# no randomization
# scalefactors = {'precip.inc':variableScale(start=1.2, stop=1.2, nreal=nreal, NP=NP),
# 'pet.inc':variableScale(start=.7, stop=0.7, nreal=nreal, NP=NP),}
noisefactors = {'precip.inc':0., 'pet.inc':0.,}
intemittency = {'precip.inc':0., 'pet.inc':0.,}
# full randomization
scalefactors = {'precip.inc':variableScale(start=1., stop=1.4, nreal=nreal, NP=NP),
'pet.inc':variableScale(start=.9, stop=0.5, nreal=nreal, NP=NP),}
# noisefactors = {'precip.inc':0.5, 'pet.inc':0.4,}
# intemittency = {'precip.inc':0.3, 'pet.inc':0.,}
# precip intermittency and noise, PET long-term bias
# scalefactors = {'precip.inc':variableScale(start=1.2, stop=1.2, nreal=nreal, NP=NP),
# 'pet.inc':variableScale(start=.9, stop=0.5, nreal=nreal, NP=NP),}
# noisefactors = {'precip.inc':0.5, 'pet.inc':0.,}
# intemittency = {'precip.inc':0.3, 'pet.inc':0.,}
#enkf_folder = 'D:/Data/HGS/SNW/EnKF/TWC/enkf_test/input_deterministic/'
# create boundary files
for mode in ('deterministic','stochastic'):
filelist = writeEnKFbdy(enkf_folder=enkf_folder, bdy_files=bdy_files, mode=mode, nreal=nreal,
scalefactors=scalefactors, noisefactors=noisefactors,
intermittency=intemittency)
if isinstance(filelist,(list,tuple)):
for bdy_file in filelist:
if not os.path.exists(bdy_file): raise IOError(bdy_file)
else:
if not os.path.exists(filelist): raise IOError(filelist)
print('\n===\n')
## some common parameters for SNCA observations
# SNCA Kister service
url = 'http://waterdata.quinteconservation.ca/KiWIS/KiWIS'
# variable-specific parameters
DataFeed = namedtuple('DataFeed', ('name','csv','ts_id'))
datafeeds = [# main gauge at Berwick
DataFeed(name='02LB022',csv='D:/Data/HGS/SNW/EnKF/Kister/02LB022.csv',ts_id=34829042),
# soil moisture
DataFeed(name='W268_sat',csv='D:/Data/HGS/SNW/EnKF/Kister/W268_sat.csv',ts_id=38178042),
# observation wells
DataFeed(name='W268-1',csv='D:/Data/HGS/SNW/EnKF/Kister/W268-1.csv',ts_id=38915042),
DataFeed(name='W350-2',csv='D:/Data/HGS/SNW/EnKF/Kister/W350-2.csv',ts_id=38908042),
DataFeed(name='W350-3',csv='D:/Data/HGS/SNW/EnKF/Kister/W350-3.csv',ts_id=38179042),]
datafeeds = {feed.name:feed for feed in datafeeds} # more useful as dict
if 'retrieve_kister' in tasks:
# query data
for feed in datafeeds.values():
print(('\nRetrieving datafeed {}...'.format(feed.name)))
r = queryKister(url=url, period=date_range[:2], output=feed.csv, ts_id=feed.ts_id)
print((" ('{}')".format(feed.csv)))
print('\n===\n')
if 'write_obs_file' in tasks:
# definitions
#enkf_folder = 'D:/Data/HGS/SNW/EnKF/TWC/enkf_test/input_deterministic/'
#date_range = ('2017-05-01', '2017-12-31', '1D')
datelist = pd.date_range(pd.to_datetime(date_range[0]), pd.to_datetime(date_range[1]),
freq=date_range[2])
ntime = len(datelist)
stderr = 0.01 # observation error
missing = 99999 # larger than 10,000 indicates missing value
# actual observation wells
obs_wells = [
# W268-1, 48.52-61.32m, sheet 2-3, possibly 1 (1-2 according to Omar)
dict(name='W268-1', z=-35.0, sheet=1, node= 2696, bias=+0.14-0.1, error=0.01,),
dict(name='W268-1', z=57.08, sheet=2, node= 5580, bias=+0.14-0.1, error=0.01,),
dict(name='W268-1', z=58.08, sheet=3, node= 8464, bias=+0.14-0.1, error=0.01,),
# dict(name='W268-1', z=-35.0, sheet=1, node= 2617, bias=0.24, error=0.02,),
# dict(name='W268-1', z=57.08, sheet=2, node= 5501, bias=0.24, error=0.02,),
# dict(name='W268-1', z=58.08, sheet=3, node= 8385, bias=0.24, error=0.02,),
# W350-2, 104.13-107.13m, sheet 3, possibly 4 (3-4 according to Omar)
dict(name='W350-2', z=106.81, sheet=3, node= 7685, bias=-0.62+3.35, error=0.01,),
dict(name='W350-2', z=109.93, sheet=4, node=10569, bias=-0.62+3.35, error=0.01,),
# # W350-3, 87.33-96.73m, sheet 2 (2-3 according to Omar)
# dict(name='W350-3', z=91.67, sheet=2, node= 4801, error=0.05, bias=0,) # very unreliable well
]
# produce open and closed loop observation files
mode = {'fake_head.dat':False, 'obs_head.dat':True,}
for filename,lreal in mode.items():
print('')
for obs_well in obs_wells:
# add defaults
if 'error' not in obs_well: obs_well['error'] = stderr
if 'bias' not in obs_well: obs_well['bias'] = None
#print(obs_well) # feedback without data
if lreal:
# load actual observation data
filepath = datafeeds[obs_well['name']].csv
                print("Reading well observations:\n '{}'".format(filepath))
obs_well['data'] = readKister(filepath=filepath, bias=obs_well['bias'],
period=date_range[:2], resample=date_range[2],
missing=missing, lpad=True, lvalues=True,
outliers=3)
else:
# create fake/missing data (for equivalent open loop testing)
obs_well['data'] = np.ones((ntime,))*missing
# create boundary files
obs_file = writeEnKFobs(enkf_folder=enkf_folder, obs_wells=obs_wells, filename=filename,
lYAML=True, )
if not os.path.exists(obs_file): raise IOError(obs_file)
print('\n===\n')
| gpl-3.0 |
jandom/GromacsWrapper | gromacs/formats.py | 1 | 1486 | # GromacsWrapper: formats.py
# Copyright (c) 2009-2010 Oliver Beckstein <orbeckst@gmail.com>
# Released under the GNU Public License 3 (or higher, your choice)
# See the file COPYING for details.
""":mod:`gromacs.formats` -- Accessing various files
=================================================
This module contains classes that represent data files on
disk. Typically one creates an instance and
- reads from a file using a :meth:`read` method, or
- populates the instance (in the simplest case with a :meth:`set`
method) and the uses the :meth:`write` method to write the data to
disk in the appropriate format.
For function data there typically also exists a :meth:`plot` method
which produces a graph (using matplotlib).
The module defines some classes that are used in other modules; they
do *not* make use of :mod:`gromacs.tools` or :mod:`gromacs.cbook` and
can be safely imported at any time.
.. SeeAlso::
This module gives access to a selection of classes from
:mod:`gromacs.fileformats`.
Classes
-------
.. autoclass:: XVG
:members:
.. autoclass:: NDX
:members:
.. autoclass:: uniqueNDX
:members:
.. autoclass:: MDP
:members:
.. autoclass:: ITP
:members:
.. autoclass:: XPM
:members:
.. autoclass:: TOP
:members:
"""
from __future__ import absolute_import
__docformat__ = "restructuredtext en"
__all__ = ["XVG", "MDP", "NDX", "uniqueNDX", "ITP", "XPM", "TOP"]
from .fileformats import XVG, MDP, NDX, uniqueNDX, ITP, XPM, TOP
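The read/`write` pattern described in the module docstring above can be illustrated without GromacsWrapper itself. The sketch below parses an XVG-style file (Grace format: `#` comment lines, `@` plotting directives, whitespace-separated numeric columns) into rows of floats; the format details follow the general XVG convention, not GromacsWrapper internals.

```python
def read_xvg(lines):
    """Return rows of floats from XVG-style lines, skipping '#'/'@' lines."""
    data = []
    for line in lines:
        line = line.strip()
        if not line or line.startswith(('#', '@')):
            continue  # comments and Grace directives carry no data
        data.append([float(x) for x in line.split()])
    return data

sample = [
    '# produced by g_energy',
    '@ title "Potential"',
    '0.0  -1.5',
    '1.0  -2.5',
]
print(read_xvg(sample))  # [[0.0, -1.5], [1.0, -2.5]]
```

A real `XVG` instance would additionally keep the `@` metadata around for its `plot` method; this sketch only covers the data-extraction step.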
| gpl-3.0 |
chrisburr/scikit-learn | sklearn/cluster/dbscan_.py | 19 | 11713 | # -*- coding: utf-8 -*-
"""
DBSCAN: Density-Based Spatial Clustering of Applications with Noise
"""
# Author: Robert Layton <robertlayton@gmail.com>
# Joel Nothman <joel.nothman@gmail.com>
# Lars Buitinck
#
# License: BSD 3 clause
import numpy as np
from scipy import sparse
from ..base import BaseEstimator, ClusterMixin
from ..metrics import pairwise_distances
from ..utils import check_array, check_consistent_length
from ..utils.fixes import astype
from ..neighbors import NearestNeighbors
from ._dbscan_inner import dbscan_inner
def dbscan(X, eps=0.5, min_samples=5, metric='minkowski',
algorithm='auto', leaf_size=30, p=2, sample_weight=None):
"""Perform DBSCAN clustering from vector array or distance matrix.
Read more in the :ref:`User Guide <dbscan>`.
Parameters
----------
X : array or sparse (CSR) matrix of shape (n_samples, n_features), or \
array of shape (n_samples, n_samples)
A feature array, or array of distances between samples if
``metric='precomputed'``.
eps : float, optional
The maximum distance between two samples for them to be considered
as in the same neighborhood.
min_samples : int, optional
The number of samples (or total weight) in a neighborhood for a point
to be considered as a core point. This includes the point itself.
metric : string, or callable
The metric to use when calculating distance between instances in a
feature array. If metric is a string or callable, it must be one of
the options allowed by metrics.pairwise.pairwise_distances for its
metric parameter.
If metric is "precomputed", X is assumed to be a distance matrix and
must be square. X may be a sparse matrix, in which case only "nonzero"
elements may be considered neighbors for DBSCAN.
algorithm : {'auto', 'ball_tree', 'kd_tree', 'brute'}, optional
The algorithm to be used by the NearestNeighbors module
to compute pointwise distances and find nearest neighbors.
See NearestNeighbors module documentation for details.
leaf_size : int, optional (default = 30)
Leaf size passed to BallTree or cKDTree. This can affect the speed
of the construction and query, as well as the memory required
to store the tree. The optimal value depends
on the nature of the problem.
p : float, optional
The power of the Minkowski metric to be used to calculate distance
between points.
sample_weight : array, shape (n_samples,), optional
Weight of each sample, such that a sample with a weight of at least
``min_samples`` is by itself a core sample; a sample with negative
weight may inhibit its eps-neighbor from being core.
Note that weights are absolute, and default to 1.
Returns
-------
core_samples : array [n_core_samples]
Indices of core samples.
labels : array [n_samples]
Cluster labels for each point. Noisy samples are given the label -1.
Notes
-----
See examples/cluster/plot_dbscan.py for an example.
This implementation bulk-computes all neighborhood queries, which increases
the memory complexity to O(n.d) where d is the average number of neighbors,
while original DBSCAN had memory complexity O(n).
Sparse neighborhoods can be precomputed using
:func:`NearestNeighbors.radius_neighbors_graph
<sklearn.neighbors.NearestNeighbors.radius_neighbors_graph>`
with ``mode='distance'``.
References
----------
Ester, M., H. P. Kriegel, J. Sander, and X. Xu, "A Density-Based
Algorithm for Discovering Clusters in Large Spatial Databases with Noise".
In: Proceedings of the 2nd International Conference on Knowledge Discovery
and Data Mining, Portland, OR, AAAI Press, pp. 226-231. 1996
"""
if not eps > 0.0:
raise ValueError("eps must be positive.")
X = check_array(X, accept_sparse='csr')
if sample_weight is not None:
sample_weight = np.asarray(sample_weight)
check_consistent_length(X, sample_weight)
# Calculate neighborhood for all samples. This leaves the original point
# in, which needs to be considered later (i.e. point i is in the
    # neighborhood of point i; while true, this is useless information)
if metric == 'precomputed' and sparse.issparse(X):
neighborhoods = np.empty(X.shape[0], dtype=object)
X.sum_duplicates() # XXX: modifies X's internals in-place
X_mask = X.data <= eps
masked_indices = astype(X.indices, np.intp, copy=False)[X_mask]
masked_indptr = np.cumsum(X_mask)[X.indptr[1:] - 1]
# insert the diagonal: a point is its own neighbor, but 0 distance
# means absence from sparse matrix data
masked_indices = np.insert(masked_indices, masked_indptr,
np.arange(X.shape[0]))
masked_indptr = masked_indptr[:-1] + np.arange(1, X.shape[0])
# split into rows
neighborhoods[:] = np.split(masked_indices, masked_indptr)
else:
neighbors_model = NearestNeighbors(radius=eps, algorithm=algorithm,
leaf_size=leaf_size,
metric=metric, p=p)
neighbors_model.fit(X)
# This has worst case O(n^2) memory complexity
neighborhoods = neighbors_model.radius_neighbors(X, eps,
return_distance=False)
if sample_weight is None:
n_neighbors = np.array([len(neighbors)
for neighbors in neighborhoods])
else:
n_neighbors = np.array([np.sum(sample_weight[neighbors])
for neighbors in neighborhoods])
# Initially, all samples are noise.
labels = -np.ones(X.shape[0], dtype=np.intp)
# A list of all core samples found.
core_samples = np.asarray(n_neighbors >= min_samples, dtype=np.uint8)
dbscan_inner(core_samples, neighborhoods, labels)
return np.where(core_samples)[0], labels
class DBSCAN(BaseEstimator, ClusterMixin):
"""Perform DBSCAN clustering from vector array or distance matrix.
DBSCAN - Density-Based Spatial Clustering of Applications with Noise.
Finds core samples of high density and expands clusters from them.
Good for data which contains clusters of similar density.
Read more in the :ref:`User Guide <dbscan>`.
Parameters
----------
eps : float, optional
The maximum distance between two samples for them to be considered
as in the same neighborhood.
min_samples : int, optional
The number of samples (or total weight) in a neighborhood for a point
to be considered as a core point. This includes the point itself.
metric : string, or callable
The metric to use when calculating distance between instances in a
feature array. If metric is a string or callable, it must be one of
        the options allowed by metrics.pairwise.pairwise_distances for its
metric parameter.
If metric is "precomputed", X is assumed to be a distance matrix and
must be square. X may be a sparse matrix, in which case only "nonzero"
elements may be considered neighbors for DBSCAN.
.. versionadded:: 0.17
metric *precomputed* to accept precomputed sparse matrix.
algorithm : {'auto', 'ball_tree', 'kd_tree', 'brute'}, optional
The algorithm to be used by the NearestNeighbors module
to compute pointwise distances and find nearest neighbors.
See NearestNeighbors module documentation for details.
leaf_size : int, optional (default = 30)
Leaf size passed to BallTree or cKDTree. This can affect the speed
of the construction and query, as well as the memory required
to store the tree. The optimal value depends
on the nature of the problem.
Attributes
----------
core_sample_indices_ : array, shape = [n_core_samples]
Indices of core samples.
components_ : array, shape = [n_core_samples, n_features]
Copy of each core sample found by training.
labels_ : array, shape = [n_samples]
Cluster labels for each point in the dataset given to fit().
Noisy samples are given the label -1.
Notes
-----
See examples/cluster/plot_dbscan.py for an example.
This implementation bulk-computes all neighborhood queries, which increases
the memory complexity to O(n.d) where d is the average number of neighbors,
while original DBSCAN had memory complexity O(n).
Sparse neighborhoods can be precomputed using
:func:`NearestNeighbors.radius_neighbors_graph
<sklearn.neighbors.NearestNeighbors.radius_neighbors_graph>`
with ``mode='distance'``.
References
----------
Ester, M., H. P. Kriegel, J. Sander, and X. Xu, "A Density-Based
Algorithm for Discovering Clusters in Large Spatial Databases with Noise".
In: Proceedings of the 2nd International Conference on Knowledge Discovery
and Data Mining, Portland, OR, AAAI Press, pp. 226-231. 1996
"""
def __init__(self, eps=0.5, min_samples=5, metric='euclidean',
algorithm='auto', leaf_size=30, p=None):
self.eps = eps
self.min_samples = min_samples
self.metric = metric
self.algorithm = algorithm
self.leaf_size = leaf_size
self.p = p
def fit(self, X, y=None, sample_weight=None):
"""Perform DBSCAN clustering from features or distance matrix.
Parameters
----------
X : array or sparse (CSR) matrix of shape (n_samples, n_features), or \
array of shape (n_samples, n_samples)
A feature array, or array of distances between samples if
``metric='precomputed'``.
sample_weight : array, shape (n_samples,), optional
Weight of each sample, such that a sample with a weight of at least
``min_samples`` is by itself a core sample; a sample with negative
weight may inhibit its eps-neighbor from being core.
Note that weights are absolute, and default to 1.
"""
X = check_array(X, accept_sparse='csr')
clust = dbscan(X, sample_weight=sample_weight, **self.get_params())
self.core_sample_indices_, self.labels_ = clust
if len(self.core_sample_indices_):
# fix for scipy sparse indexing issue
self.components_ = X[self.core_sample_indices_].copy()
else:
# no core samples
self.components_ = np.empty((0, X.shape[1]))
return self
def fit_predict(self, X, y=None, sample_weight=None):
"""Performs clustering on X and returns cluster labels.
Parameters
----------
X : array or sparse (CSR) matrix of shape (n_samples, n_features), or \
array of shape (n_samples, n_samples)
A feature array, or array of distances between samples if
``metric='precomputed'``.
sample_weight : array, shape (n_samples,), optional
Weight of each sample, such that a sample with a weight of at least
``min_samples`` is by itself a core sample; a sample with negative
weight may inhibit its eps-neighbor from being core.
Note that weights are absolute, and default to 1.
Returns
-------
y : ndarray, shape (n_samples,)
cluster labels
"""
self.fit(X, sample_weight=sample_weight)
return self.labels_
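The core-sample rule that `dbscan` applies above (`n_neighbors >= min_samples`, with the point itself counted in its own neighborhood) can be sketched in plain Python. This toy brute-force version reproduces only the one-dimensional neighborhood counting, not the cluster expansion done by `dbscan_inner`.

```python
def core_samples_1d(xs, eps=0.5, min_samples=3):
    """Return indices of points with >= min_samples neighbors within eps."""
    cores = []
    for i, xi in enumerate(xs):
        # the point itself satisfies abs(xi - xj) <= eps, so it is counted
        n_neighbors = sum(1 for xj in xs if abs(xi - xj) <= eps)
        if n_neighbors >= min_samples:
            cores.append(i)
    return cores

print(core_samples_1d([0.0, 0.1, 0.2, 5.0]))  # [0, 1, 2]: the isolated point at 5.0 is not core
```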
| bsd-3-clause |
humdings/zipline | zipline/utils/security_list.py | 6 | 5399 | import warnings
from datetime import datetime
from os import listdir
import os.path
import pandas as pd
import pytz
import zipline
from zipline.errors import SymbolNotFound
from zipline.finance.asset_restrictions import SecurityListRestrictions
from zipline.zipline_warnings import ZiplineDeprecationWarning
DATE_FORMAT = "%Y%m%d"
zipline_dir = os.path.dirname(zipline.__file__)
SECURITY_LISTS_DIR = os.path.join(zipline_dir, 'resources', 'security_lists')
class SecurityList(object):
def __init__(self, data, current_date_func, asset_finder):
"""
data: a nested dictionary:
knowledge_date -> lookup_date ->
{add: [symbol list], 'delete': []}, delete: [symbol list]}
current_date_func: function taking no parameters, returning
current datetime
"""
self.data = data
self._cache = {}
self._knowledge_dates = self.make_knowledge_dates(self.data)
self.current_date = current_date_func
self.count = 0
self._current_set = set()
self.asset_finder = asset_finder
def make_knowledge_dates(self, data):
knowledge_dates = sorted(
[pd.Timestamp(k) for k in data.keys()])
return knowledge_dates
def __iter__(self):
warnings.warn(
'Iterating over security_lists is deprecated. Use '
'`for sid in <security_list>.current_securities(dt)` instead.',
category=ZiplineDeprecationWarning,
stacklevel=2
)
return iter(self.current_securities(self.current_date()))
def __contains__(self, item):
warnings.warn(
'Evaluating inclusion in security_lists is deprecated. Use '
'`sid in <security_list>.current_securities(dt)` instead.',
category=ZiplineDeprecationWarning,
stacklevel=2
)
return item in self.current_securities(self.current_date())
def current_securities(self, dt):
for kd in self._knowledge_dates:
if dt < kd:
break
if kd in self._cache:
self._current_set = self._cache[kd]
continue
for effective_date, changes in iter(self.data[kd].items()):
self.update_current(
effective_date,
changes['add'],
self._current_set.add
)
self.update_current(
effective_date,
changes['delete'],
self._current_set.remove
)
self._cache[kd] = self._current_set
return self._current_set
def update_current(self, effective_date, symbols, change_func):
for symbol in symbols:
try:
asset = self.asset_finder.lookup_symbol(
symbol,
as_of_date=effective_date
)
# Pass if no Asset exists for the symbol
except SymbolNotFound:
continue
change_func(asset.sid)
class SecurityListSet(object):
# provide a cut point to substitute other security
# list implementations.
security_list_type = SecurityList
def __init__(self, current_date_func, asset_finder):
self.current_date_func = current_date_func
self.asset_finder = asset_finder
self._leveraged_etf = None
@property
def leveraged_etf_list(self):
if self._leveraged_etf is None:
self._leveraged_etf = self.security_list_type(
load_from_directory('leveraged_etf_list'),
self.current_date_func,
asset_finder=self.asset_finder
)
return self._leveraged_etf
@property
def restrict_leveraged_etfs(self):
return SecurityListRestrictions(self.leveraged_etf_list)
def load_from_directory(list_name):
"""
To resolve the symbol in the LEVERAGED_ETF list,
the date on which the symbol was in effect is needed.
Furthermore, to maintain a point in time record of our own maintenance
of the restricted list, we need a knowledge date. Thus, restricted lists
are dictionaries of datetime->symbol lists.
    New symbols should be entered as a new knowledge date entry.
This method assumes a directory structure of:
SECURITY_LISTS_DIR/listname/knowledge_date/lookup_date/add.txt
SECURITY_LISTS_DIR/listname/knowledge_date/lookup_date/delete.txt
The return value is a dictionary with:
knowledge_date -> lookup_date ->
{add: [symbol list], 'delete': [symbol list]}
"""
data = {}
dir_path = os.path.join(SECURITY_LISTS_DIR, list_name)
for kd_name in listdir(dir_path):
kd = datetime.strptime(kd_name, DATE_FORMAT).replace(
tzinfo=pytz.utc)
data[kd] = {}
kd_path = os.path.join(dir_path, kd_name)
for ld_name in listdir(kd_path):
ld = datetime.strptime(ld_name, DATE_FORMAT).replace(
tzinfo=pytz.utc)
data[kd][ld] = {}
ld_path = os.path.join(kd_path, ld_name)
for fname in listdir(ld_path):
fpath = os.path.join(ld_path, fname)
with open(fpath) as f:
symbols = f.read().splitlines()
data[kd][ld][fname] = symbols
return data
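The nested mapping produced by `load_from_directory` resolves to a current security set much as `current_securities` does above. Below is a simplified, hedged sketch of that resolution: plain integers stand in for timezone-aware knowledge/lookup datetimes, raw symbols stand in for asset sids, and the per-knowledge-date caching is omitted.

```python
def current_symbols(data, dt):
    """Replay add/delete changes for every knowledge date not after dt."""
    current = set()
    for kd in sorted(data):      # knowledge dates, in chronological order
        if dt < kd:
            break                # later knowledge is not yet known at dt
        for ld, changes in sorted(data[kd].items()):
            current.update(changes.get('add', []))
            current.difference_update(changes.get('delete', []))
    return current

data = {
    1: {1: {'add': ['AAA', 'BBB'], 'delete': []}},
    5: {5: {'add': ['CCC'], 'delete': ['AAA']}},
}
print(sorted(current_symbols(data, 3)))  # ['AAA', 'BBB']
print(sorted(current_symbols(data, 9)))  # ['BBB', 'CCC']
```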
| apache-2.0 |
ahoyosid/scikit-learn | sklearn/feature_selection/__init__.py | 244 | 1088 | """
The :mod:`sklearn.feature_selection` module implements feature selection
algorithms. It currently includes univariate filter selection methods and the
recursive feature elimination algorithm.
"""
from .univariate_selection import chi2
from .univariate_selection import f_classif
from .univariate_selection import f_oneway
from .univariate_selection import f_regression
from .univariate_selection import SelectPercentile
from .univariate_selection import SelectKBest
from .univariate_selection import SelectFpr
from .univariate_selection import SelectFdr
from .univariate_selection import SelectFwe
from .univariate_selection import GenericUnivariateSelect
from .variance_threshold import VarianceThreshold
from .rfe import RFE
from .rfe import RFECV
__all__ = ['GenericUnivariateSelect',
'RFE',
'RFECV',
'SelectFdr',
'SelectFpr',
'SelectFwe',
'SelectKBest',
'SelectPercentile',
'VarianceThreshold',
'chi2',
'f_classif',
'f_oneway',
'f_regression']
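The univariate-filter idea behind classes such as `VarianceThreshold` exported above — drop features whose variance does not exceed a cutoff — can be sketched in a few lines. This is an illustrative plain-Python stand-in, not the scikit-learn implementation.

```python
def variance(col):
    """Population variance of one feature column."""
    m = sum(col) / len(col)
    return sum((v - m) ** 2 for v in col) / len(col)

def variance_filter(rows, threshold=0.0):
    """Return indices of columns whose variance exceeds threshold."""
    cols = list(zip(*rows))
    return [i for i, col in enumerate(cols) if variance(col) > threshold]

X = [[0, 2.0], [0, 1.0], [0, 3.0]]
print(variance_filter(X))  # [1] -- column 0 is constant and gets dropped
```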
| bsd-3-clause |
smccaffrey/PIRT_ASU | scripts/due_dates_example(PHY132).py | 2 | 3045 | """Import python functionality"""
import sys
"""Append Local file locations to to PYTHONPATH"""
sys.path.append('/Users/smccaffrey/Desktop/LMSA-core/src/')
import time
import pandas as pd
from selenium import webdriver
"""Append Local file locations to to PYTHONPATH"""
sys.path.append('/Users/smccaffrey/Desktop/LMSA-core/src/')
"""Import LMSA functionality"""
from lmsa.manipulation.ASU.ASU_manipulator import ASU_manipulator
from lmsa.lms.blackboard.BlackBoard import Editor
from lmsa.lms.blackboard.content.tests import Tests
from lmsa.lms.blackboard.content.assignments import Assignments
#from lmsa.lms.blackboard.content.folders import Folders
#from lmsa.lms.blackboard.Library import Logic
from lmsa.lms.blackboard.Library import Window
"""Global Variables"""
filename = '/Users/smccaffrey/Desktop/LMSA-core/scripts/input/due_dates_template.csv'
prefix1 = 'PHY 132: University Physics Lab II (2017 Fall)-'
prefix2 = 'PHY 114: General Physics Laboratory (2017 Fall)-'
data = pd.read_csv(filename, dtype=str, delimiter=',', header=None)
"""Initialize WebDriver object"""
driver = webdriver.Chrome('/Users/smccaffrey/Desktop/LMSA-core/scripts/chromedriver_233')
"""Change to FALSE when ready to save changes"""
DRYRUN = True
"""Declare objects"""
institution = ASU_manipulator(driver)
PRELABS = Tests(driver)
LAB_REPORTS = Assignments(driver)
FORM = Editor(driver)
"""Login"""
institution.login()
"""Navigate home"""
Window(driver).home(wait=3)
time.sleep(4)
"""Bulk Edit"""
i = 1
for i in range(1, len(data[0])):
"""This pause is a quick fix for elementNotFound error"""
#pause = raw_input('Scroll until desired section is in view\n\nOnce in view press: <ENTER>')
"""This is the code that finds each section number on the BlackBoard homepage"""
driver.find_element_by_link_text(prefix1 + str(data[0][i])).click()
print('Starting Section: ' + str(data[0][i]))
"""Prelabs"""
driver.find_element_by_link_text('PRELABS').click()
j = 1
for j in range(1, 11):
FORM.select_form(data[j+4][0], wait=1)
driver.find_element_by_xpath('//a[@title="Edit the Test Options"]').click()
LAB_REPORTS.due_date(state=True, date=data[j+4][i], time=data[3][i])
#form.assignment_due_date(state=True, date=df1[j+4][i], time=df1[3][i])
if not DRYRUN:
FORM.submit(wait=2)
FORM.cancel(wait=2)
"""Lab reports"""
driver.find_element_by_link_text('Submit Lab Reports').click()
k = 1
for k in range(1, 11):
FORM.select_form(data[k+14][0], wait=1)
LAB_REPORTS.edit(wait=5)
LAB_REPORTS.due_date(state=True, date=data[k+14][i], time=data[4][i])
#driver.execute_script("window.scrollTo(0, document.body.scrollHeight);") #quick fix for elementNotFound
time.sleep(2)
#form.folder_end_restrict_date(state=True, date=df1[k+14][i], time=df1[4][i])
if not DRYRUN:
FORM.submit(wait=2)
FORM.cancel(wait=2)
"""Navigate Home"""
Window(driver).home(wait=3)
time.sleep(3)
| apache-2.0 |
zangsir/sms-tools | lectures/09-Sound-description/plots-code/centroid.py | 23 | 1086 | import numpy as np
import matplotlib.pyplot as plt
import essentia.standard as ess
M = 1024
N = 1024
H = 512
fs = 44100
spectrum = ess.Spectrum(size=N)
window = ess.Windowing(size=M, type='hann')
centroid = ess.Centroid(range=fs/2.0)
x = ess.MonoLoader(filename = '../../../sounds/speech-male.wav', sampleRate = fs)()
centroids = []
for frame in ess.FrameGenerator(x, frameSize=M, hopSize=H, startFromZero=True):
mX = spectrum(window(frame))
centroid_val = centroid(mX)
centroids.append(centroid_val)
centroids = np.array(centroids)
plt.figure(1, figsize=(9.5, 5))
plt.subplot(2,1,1)
plt.plot(np.arange(x.size)/float(fs), x)
plt.axis([0, x.size/float(fs), min(x), max(x)])
plt.ylabel('amplitude')
plt.title('x (speech-male.wav)')
plt.subplot(2,1,2)
frmTime = H*np.arange(centroids.size)/float(fs)
plt.plot(frmTime, centroids, 'g', lw=1.5)
plt.axis([0, x.size/float(fs), min(centroids), max(centroids)])
plt.xlabel('time (sec)')
plt.ylabel('frequency (Hz)')
plt.title('spectral centroid')
plt.tight_layout()
plt.savefig('centroid.png')
plt.show()
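The `Centroid` descriptor computed per frame above is the magnitude-weighted mean frequency of the spectrum. A plain-Python version of that formula, assuming linearly spaced bins from 0 to fs/2 (an approximation of essentia's behaviour, sketched here without essentia):

```python
def spectral_centroid(mX, fs):
    """Magnitude-weighted mean frequency of spectrum mX (len >= 2)."""
    # bin k maps to frequency k * (fs/2) / (nbins - 1)
    freqs = [k * (fs / 2.0) / (len(mX) - 1) for k in range(len(mX))]
    total = sum(mX)
    if total == 0:
        return 0.0  # silent frame: no spectral mass to average
    return sum(f * m for f, m in zip(freqs, mX)) / total

print(spectral_centroid([0.0, 1.0, 1.0], 4.0))  # 1.5
```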
| agpl-3.0 |
KEHANG/AutoFragmentModeling | ipython/3. reporting/mw_distri_comparison.py | 1 | 3467 | #!/usr/bin/env python
#-*- coding: utf-8 -*-
import matplotlib.pyplot as plt
import os
import numpy as np
# set global settings
def init_plotting():
plt.rcParams['figure.figsize'] = (4, 3)
plt.rcParams['font.size'] = 8
plt.rcParams['font.family'] = 'Helvetica'
plt.rcParams['axes.labelsize'] = plt.rcParams['font.size']
plt.rcParams['axes.titlesize'] = plt.rcParams['font.size']
plt.rcParams['legend.fontsize'] = plt.rcParams['font.size']
plt.rcParams['xtick.labelsize'] = plt.rcParams['font.size']
plt.rcParams['ytick.labelsize'] = plt.rcParams['font.size']
plt.rcParams['xtick.major.size'] = 3
plt.rcParams['xtick.minor.size'] = 3
plt.rcParams['xtick.major.width'] = 1
plt.rcParams['xtick.minor.width'] = 1
plt.rcParams['ytick.major.size'] = 3
plt.rcParams['ytick.minor.size'] = 3
plt.rcParams['ytick.major.width'] = 1
plt.rcParams['ytick.minor.width'] = 1
plt.rcParams['legend.frameon'] = True
plt.rcParams['legend.loc'] = 'best'
plt.rcParams['axes.linewidth'] = 1
plt.gca().spines['right'].set_color('none')
plt.gca().spines['top'].set_color('none')
plt.gca().xaxis.set_ticks_position('bottom')
plt.gca().yaxis.set_ticks_position('left')
def plot(detailed_mech_mwd_path, frag_mech_mwd_path, mw_cuts, model):
y1 = get_mw_distri_data(detailed_mech_mwd_path, mw_cuts)
y2 = get_mw_distri_data(frag_mech_mwd_path, mw_cuts)
x1 = np.arange(len(y1))*2+0.8
x2 = np.arange(len(y1))*2
init_plotting()
plt.figure()
fig, ax = plt.subplots()
plt.bar(x1, y1, label='Detailed Mechansim', color='#2c7fb8')
plt.bar(x2, y2, label='Fragment Mechanism', color='#66c2a4')
ax.set_xticks(x1)
# xtick_labels = ['<C5', 'C5-C15', 'C15-C25', 'C25-C35', '>C35']
xtick_labels = []
for i, mw_cut in enumerate(mw_cuts):
if i == 0:
xtick_labels.append('<{0}'.format(mw_cut))
else:
xtick_labels.append('{0}-{1}'.format(mw_cuts[i-1], mw_cut))
if i == len(mw_cuts) - 1:
xtick_labels.append('>{0}'.format(mw_cut))
ax.set_xticklabels(xtick_labels)
plt.xlabel('Molecular Weight (g/mol)')
plt.ylabel('Mole Fraction')
plt.gca().legend()
plt.tight_layout()
plt.savefig('mwd_comparison_{0}.pdf'.format(model))
def get_mw_distri_data(data_filepath, mw_cuts):
full_path = os.path.join(data_filepath)
from numpy import genfromtxt
model_data = genfromtxt(full_path, delimiter=' ')
mws, molfracs = model_data[0], model_data[1]
# initialize
agg_mw_distri = [0.0] * (len(mw_cuts)+1)
for i, mw in enumerate(mws):
mw_cut_idx = find_which_mw_cut(mw, mw_cuts)
agg_mw_distri[mw_cut_idx] += molfracs[i]
return agg_mw_distri
def find_which_mw_cut(mw, mw_cuts):
for i, mw_cut in enumerate(mw_cuts):
if mw <= mw_cut:
return i
return i + 1
model = 'two-sided_newcut1'
# mw_cuts = [70, 210, 350, 490]
mw_cuts = [30, 150, 350, 550]
detailed_mech_mwd_path = os.path.join('../', 'data', 'pdd_chemistry',
'detailed', 'pdd_2014_pruning4_s4_a3ene_c11',
'results', 'mwd.csv')
frag_mech_mwd_path = os.path.join('../', 'data', 'pdd_chemistry',
model,
'results', 'mwd.csv')
plot(detailed_mech_mwd_path, frag_mech_mwd_path, mw_cuts, model)
| mit |
colour-science/colour | colour/utilities/__init__.py | 1 | 5150 | # -*- coding: utf-8 -*-
import sys
from .data_structures import (Lookup, Structure, CaseInsensitiveMapping,
LazyCaseInsensitiveMapping)
from .common import (
handle_numpy_errors, ignore_numpy_errors, raise_numpy_errors,
print_numpy_errors, warn_numpy_errors, ignore_python_warnings, batch,
disable_multiprocessing, multiprocessing_pool, is_matplotlib_installed,
is_networkx_installed, is_openimageio_installed, is_pandas_installed,
is_tqdm_installed, required, is_iterable, is_string, is_numeric,
is_integer, is_sibling, filter_kwargs, filter_mapping, first_item,
get_domain_range_scale, set_domain_range_scale, domain_range_scale,
to_domain_1, to_domain_10, to_domain_100, to_domain_degrees, to_domain_int,
from_range_1, from_range_10, from_range_100, from_range_degrees,
from_range_int, copy_definition, validate_method)
from .verbose import (
ColourWarning, ColourUsageWarning, ColourRuntimeWarning, message_box,
show_warning, warning, runtime_warning, usage_warning, filter_warnings,
suppress_warnings, numpy_print_options, ANCILLARY_COLOUR_SCIENCE_PACKAGES,
ANCILLARY_RUNTIME_PACKAGES, ANCILLARY_DEVELOPMENT_PACKAGES,
ANCILLARY_EXTRAS_PACKAGES, describe_environment)
from .array import (as_array, as_int_array, as_float_array, as_numeric, as_int,
as_float, set_float_precision, set_int_precision,
as_namedtuple, closest_indexes, closest, interval,
is_uniform, in_array, tstack, tsplit, row_as_diagonal,
orient, centroid, fill_nan, ndarray_write, zeros, ones,
full, index_along_last_axis)
from ..algebra.common import (normalise_maximum, vector_dot, matrix_dot,
linear_conversion, linstep_function)
from .metrics import metric_mse, metric_psnr
from colour.utilities.deprecation import ModuleAPI, build_API_changes
from colour.utilities.documentation import is_documentation_building
__all__ = [
'Lookup', 'Structure', 'CaseInsensitiveMapping',
'LazyCaseInsensitiveMapping'
]
__all__ += [
'handle_numpy_errors', 'ignore_numpy_errors', 'raise_numpy_errors',
'print_numpy_errors', 'warn_numpy_errors', 'ignore_python_warnings',
'batch', 'disable_multiprocessing', 'multiprocessing_pool',
'is_matplotlib_installed', 'is_networkx_installed',
'is_openimageio_installed', 'is_pandas_installed', 'is_tqdm_installed',
'required', 'is_iterable', 'is_string', 'is_numeric', 'is_integer',
'is_sibling', 'filter_kwargs', 'filter_mapping', 'first_item',
'get_domain_range_scale', 'set_domain_range_scale', 'domain_range_scale',
'to_domain_1', 'to_domain_10', 'to_domain_100', 'to_domain_degrees',
'to_domain_int', 'from_range_1', 'from_range_10', 'from_range_100',
'from_range_degrees', 'from_range_int', 'copy_definition',
'validate_method'
]
__all__ += [
'ColourWarning', 'ColourUsageWarning', 'ColourRuntimeWarning',
'message_box', 'show_warning', 'warning', 'runtime_warning',
'usage_warning', 'filter_warnings', 'suppress_warnings',
'numpy_print_options', 'ANCILLARY_COLOUR_SCIENCE_PACKAGES',
'ANCILLARY_RUNTIME_PACKAGES', 'ANCILLARY_DEVELOPMENT_PACKAGES',
'ANCILLARY_EXTRAS_PACKAGES', 'describe_environment'
]
__all__ += [
'as_array', 'as_int_array', 'as_float_array', 'as_numeric', 'as_int',
'as_float', 'set_float_precision', 'set_int_precision', 'as_namedtuple',
'closest_indexes', 'closest', 'normalise_maximum', 'interval',
'is_uniform', 'in_array', 'tstack', 'tsplit', 'row_as_diagonal',
'vector_dot', 'matrix_dot', 'orient', 'centroid', 'linear_conversion',
'fill_nan', 'linstep_function', 'ndarray_write', 'zeros', 'ones', 'full',
'index_along_last_axis'
]
__all__ += ['metric_mse', 'metric_psnr']
# ----------------------------------------------------------------------------#
# --- API Changes and Deprecation Management ---#
# ----------------------------------------------------------------------------#
class utilities(ModuleAPI):
def __getattr__(self, attribute):
return super(utilities, self).__getattr__(attribute)
# v0.4.0
API_CHANGES = {
'ObjectFutureAccessChange': [
[
'colour.utilities.linstep_function',
'colour.algebra.linstep_function',
],
[
'colour.utilities.linear_conversion',
'colour.algebra.linear_conversion',
],
[
'colour.utilities.matrix_dot',
'colour.algebra.matrix_dot',
],
[
'colour.utilities.normalise_maximum',
'colour.algebra.normalise_maximum',
],
[
'colour.utilities.vector_dot',
'colour.algebra.vector_dot',
],
]
}
"""
Defines the *colour.utilities* sub-package API changes.
API_CHANGES : dict
"""
if not is_documentation_building():
sys.modules['colour.utilities'] = utilities(
sys.modules['colour.utilities'], build_API_changes(API_CHANGES))
del ModuleAPI, is_documentation_building, build_API_changes, sys
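The `ModuleAPI` subclass installed into `sys.modules` above intercepts attribute access on the package so that relocated names keep resolving. A simplified sketch of that redirection idea, with a plain dict standing in for the structure built by `build_API_changes`:

```python
import math

class RedirectingModule:
    """Forward attribute lookups, rerouting renamed/relocated names."""
    def __init__(self, module, changes):
        self._module = module
        self._changes = changes  # old name -> replacement object

    def __getattr__(self, name):
        # only called when normal lookup fails on the instance itself
        if name in self._changes:
            return self._changes[name]
        return getattr(self._module, name)

wrapped = RedirectingModule(math, {'old_pi': math.pi})
print(wrapped.old_pi == math.pi, wrapped.sqrt(4.0))  # True 2.0
```

The real implementation also emits deprecation warnings on redirected access; that side effect is omitted here for brevity.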
| bsd-3-clause |
girving/tensorflow | tensorflow/contrib/labeled_tensor/python/ops/ops.py | 27 | 46439 | # Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Non-core ops for LabeledTensor."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import collections
import types
import numpy as np
from six import string_types
from tensorflow.contrib.labeled_tensor.python.ops import _typecheck as tc
from tensorflow.contrib.labeled_tensor.python.ops import core
from tensorflow.python.framework import dtypes
from tensorflow.python.framework import ops
from tensorflow.python.ops import array_ops
from tensorflow.python.ops import functional_ops
from tensorflow.python.ops import math_ops
from tensorflow.python.ops import numerics
from tensorflow.python.ops import random_ops
from tensorflow.python.training import input # pylint: disable=redefined-builtin
@tc.returns(core.LabeledTensor)
@tc.accepts(core.LabeledTensor, ops.Tensor, core.Axis,
tc.Optional(string_types))
def _gather_1d_on_axis(labeled_tensor, indexer, axis, name=None):
with ops.name_scope(name, 'lt_take', [labeled_tensor]) as scope:
temp_axes = core.Axes([axis] + list(
labeled_tensor.axes.remove(axis.name).values()))
transposed = core.transpose(labeled_tensor, temp_axes.keys())
indexed = core.LabeledTensor(
array_ops.gather(transposed.tensor, indexer), temp_axes)
return core.transpose(indexed, labeled_tensor.axes.keys(), name=scope)
@tc.returns(core.LabeledTensor)
@tc.accepts(core.LabeledTensorLike,
tc.Mapping(string_types,
tc.Union(slice, collections.Hashable, list)),
tc.Optional(string_types))
def select(labeled_tensor, selection, name=None):
"""Slice out a subset of the tensor.
Args:
labeled_tensor: The input tensor.
selection: A dictionary mapping an axis name to a scalar, slice or list of
values to select. Currently supports two types of selections:
(a) Any number of scalar and/or slice selections.
(b) Exactly one list selection, without any scalars or slices.
name: Optional op name.
Returns:
The selection as a `LabeledTensor`.
Raises:
ValueError: If the tensor doesn't have an axis in the selection or if
that axis lacks labels.
KeyError: If any labels in a selection are not found in the original axis.
NotImplementedError: If you attempt to combine a list selection with
scalar selection or another list selection.
"""
with ops.name_scope(name, 'lt_select', [labeled_tensor]) as scope:
labeled_tensor = core.convert_to_labeled_tensor(labeled_tensor)
slices = {}
indexers = {}
for axis_name, value in selection.items():
if axis_name not in labeled_tensor.axes:
raise ValueError(
'The tensor does not have an axis named %s. Its axes are: %r' %
(axis_name, labeled_tensor.axes.keys()))
axis = labeled_tensor.axes[axis_name]
if axis.labels is None:
raise ValueError(
'The axis named %s does not have labels. The axis is: %r' %
(axis_name, axis))
if isinstance(value, slice):
# TODO(shoyer): consider deprecating using slices in favor of lists
if value.start is None:
start = None
else:
start = axis.index(value.start)
if value.stop is None:
stop = None
else:
# For now, follow the pandas convention of making labeled slices
# inclusive of both bounds.
stop = axis.index(value.stop) + 1
if value.step is not None:
raise NotImplementedError('slicing with a step is not yet supported')
slices[axis_name] = slice(start, stop)
# Needs to be after checking for slices, since slice objects claim to be
# instances of collections.Hashable but hash() on them fails.
elif isinstance(value, collections.Hashable):
slices[axis_name] = axis.index(value)
elif isinstance(value, list):
if indexers:
raise NotImplementedError(
'select does not yet support more than one list selection at '
'the same time')
indexer = [axis.index(v) for v in value]
indexers[axis_name] = ops.convert_to_tensor(indexer, dtype=dtypes.int64)
else:
# If type checking is working properly, this shouldn't be possible.
raise TypeError('cannot handle arbitrary types')
if indexers and slices:
raise NotImplementedError(
'select does not yet support combined scalar and list selection')
# For now, handle array selection separately, because tf.gather_nd does
# not support gradients yet. Later, using gather_nd will let us combine
# these paths.
if indexers:
(axis_name, indexer), = indexers.items()
axis = core.Axis(axis_name, selection[axis_name])
return _gather_1d_on_axis(labeled_tensor, indexer, axis, name=scope)
else:
return core.slice_function(labeled_tensor, slices, name=scope)
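The inclusive labeled-slice convention described in the `select` docstring can be sketched in plain Python (the `labeled_slice` helper here is illustrative, not part of the library):

```python
def labeled_slice(labels, value):
    """Translate slice(start_label, stop_label) into a positional slice."""
    start = None if value.start is None else labels.index(value.start)
    # The +1 makes the stop label inclusive, unlike ordinary Python slicing.
    stop = None if value.stop is None else labels.index(value.stop) + 1
    return slice(start, stop)

labels = ['a', 'b', 'c', 'd']
data = [10, 20, 30, 40]
print(data[labeled_slice(labels, slice('b', 'c'))])  # [20, 30]
```

Note that both the `'b'` and `'c'` bounds are kept, matching the pandas convention the code above follows.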
@tc.returns(core.LabeledTensor)
@tc.accepts(
tc.Collection(core.LabeledTensorLike), string_types,
tc.Optional(string_types))
def concat(labeled_tensors, axis_name, name=None):
"""Concatenate tensors along a dimension.
See tf.concat.
Args:
labeled_tensors: A list of input LabeledTensors.
axis_name: The name of the axis along which to concatenate.
name: Optional op name.
Returns:
The concatenated tensor.
The coordinate labels for the concatenation dimension are also concatenated,
if they are available for every tensor.
Raises:
ValueError: If fewer than one tensor input is provided, if the tensors
have incompatible axes, or if `axis_name` isn't the name of an axis.
"""
with ops.name_scope(name, 'lt_concat', labeled_tensors) as scope:
labeled_tensors = [
core.convert_to_labeled_tensor(lt) for lt in labeled_tensors
]
if len(labeled_tensors) < 1:
raise ValueError('concat expects at least 1 tensor, but received %s' %
labeled_tensors)
# All tensors must have these axes.
axes_0 = labeled_tensors[0].axes
axis_names = list(axes_0.keys())
if axis_name not in axis_names:
raise ValueError('%s not in %s' % (axis_name, axis_names))
shared_axes = axes_0.remove(axis_name)
tensors = [labeled_tensors[0].tensor]
concat_axis_list = [axes_0[axis_name]]
for labeled_tensor in labeled_tensors[1:]:
current_shared_axes = labeled_tensor.axes.remove(axis_name)
if current_shared_axes != shared_axes:
# TODO(shoyer): add more specific checks about what went wrong,
# including raising AxisOrderError when appropriate
raise ValueError('Mismatched shared axes: the first tensor '
'had axes %r but this tensor has axes %r.' %
(shared_axes, current_shared_axes))
# Accumulate the axis labels, if they're available.
concat_axis_list.append(labeled_tensor.axes[axis_name])
tensors.append(labeled_tensor.tensor)
concat_axis = core.concat_axes(concat_axis_list)
concat_dimension = axis_names.index(axis_name)
concat_tensor = array_ops.concat(tensors, concat_dimension, name=scope)
values = list(axes_0.values())
concat_axes = (values[:concat_dimension] + [concat_axis] +
values[concat_dimension + 1:])
return core.LabeledTensor(concat_tensor, concat_axes)
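The label bookkeeping in `concat` can be sketched without TensorFlow: labels on the concatenation axis are merged, while all other ("shared") axes must match exactly. The `concat_labels` helper below is a hypothetical illustration, not library API:

```python
def concat_labels(axes_list, axis_name):
    """Merge labels along axis_name; all other axes must be identical."""
    shared = [{k: v for k, v in a.items() if k != axis_name}
              for a in axes_list]
    if any(s != shared[0] for s in shared[1:]):
        raise ValueError('Mismatched shared axes')
    merged = []
    for a in axes_list:
        merged.extend(a[axis_name])
    return merged

axes_a = {'x': ['x0', 'x1'], 'y': ['y0']}
axes_b = {'x': ['x2'], 'y': ['y0']}
print(concat_labels([axes_a, axes_b], 'x'))  # ['x0', 'x1', 'x2']
```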
# TODO(shoyer): rename pack/unpack to stack/unstack
@tc.returns(core.LabeledTensor)
@tc.accepts(
tc.Collection(core.LabeledTensorLike),
tc.Union(string_types, core.AxisLike), int, tc.Optional(string_types))
def pack(labeled_tensors, new_axis, axis_position=0, name=None):
"""Pack tensors along a new axis.
See tf.pack.
Args:
labeled_tensors: The input tensors, which must have identical axes.
new_axis: The name of the new axis, or a tuple containing the name
and coordinate labels.
axis_position: Optional integer position at which to insert the new axis.
name: Optional op name.
Returns:
The packed tensors as a single LabeledTensor, with `new_axis` in the given
`axis_position`.
Raises:
ValueError: If fewer than one input tensor is provided, or if the tensors
don't have identical axes.
"""
with ops.name_scope(name, 'lt_pack', labeled_tensors) as scope:
labeled_tensors = [
core.convert_to_labeled_tensor(lt) for lt in labeled_tensors
]
if len(labeled_tensors) < 1:
raise ValueError('pack expects at least 1 tensor, but received %s' %
labeled_tensors)
axes_0 = labeled_tensors[0].axes
for t in labeled_tensors:
if t.axes != axes_0:
raise ValueError('Non-identical axes. Expected %s but got %s' %
(axes_0, t.axes))
pack_op = array_ops.stack(
[t.tensor for t in labeled_tensors], axis=axis_position, name=scope)
axes = list(axes_0.values())
axes.insert(axis_position, new_axis)
return core.LabeledTensor(pack_op, axes)
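The axis bookkeeping in `pack` reduces to two steps: verify all inputs share identical axes, then insert the new axis at the requested position. A minimal sketch (names are illustrative):

```python
def packed_axes(input_axes, new_axis, axis_position=0):
    """Axes of the packed result: inputs must agree, new axis is inserted."""
    if any(a != input_axes[0] for a in input_axes[1:]):
        raise ValueError('Non-identical axes')
    axes = list(input_axes[0])
    axes.insert(axis_position, new_axis)
    return axes

print(packed_axes([['x', 'y'], ['x', 'y']], 'batch'))  # ['batch', 'x', 'y']
```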
@tc.returns(tc.List(core.LabeledTensor))
@tc.accepts(core.LabeledTensorLike,
tc.Optional(string_types), tc.Optional(string_types))
def unpack(labeled_tensor, axis_name=None, name=None):
"""Unpack the tensor.
See tf.unpack.
Args:
labeled_tensor: The input tensor.
axis_name: Optional name of axis to unpack. By default, the first axis is
used.
name: Optional op name.
Returns:
The list of unpacked LabeledTensors.
Raises:
ValueError: If `axis_name` is not an axis on the input.
"""
with ops.name_scope(name, 'lt_unpack', [labeled_tensor]) as scope:
labeled_tensor = core.convert_to_labeled_tensor(labeled_tensor)
axis_names = list(labeled_tensor.axes.keys())
if axis_name is None:
axis_name = axis_names[0]
if axis_name not in axis_names:
raise ValueError('%s not in %s' % (axis_name, axis_names))
axis = axis_names.index(axis_name)
unpack_ops = array_ops.unstack(labeled_tensor.tensor, axis=axis, name=scope)
axes = [a for i, a in enumerate(labeled_tensor.axes.values()) if i != axis]
return [core.LabeledTensor(t, axes) for t in unpack_ops]
@tc.returns(core.LabeledTensor)
@tc.accepts(core.LabeledTensorLike,
tc.Collection(string_types),
tc.Collection(tc.Union(string_types, core.AxisLike)),
tc.Optional(string_types))
def reshape(labeled_tensor, existing_axes, new_axes, name=None):
"""Reshape specific axes of a LabeledTensor.
Non-indicated axes remain in their original locations.
Args:
labeled_tensor: The input tensor.
existing_axes: List of axis names found on the input tensor. These must
appear sequentially in the list of axis names on the input. In other
words, they must be a valid slice of `list(labeled_tensor.axes.keys())`.
new_axes: List of strings, tuples of (axis_name, axis_value) or Axis objects
providing new axes with which to replace `existing_axes` in the reshaped
result. At most one element of `new_axes` may be a string, indicating an
axis with unknown size.
name: Optional op name.
Returns:
The reshaped LabeledTensor.
Raises:
ValueError: If `existing_axes` are not all axes on the input, or if more
than one of `new_axes` has unknown size.
AxisOrderError: If `existing_axes` are not a slice of axis names on the
input.
"""
with ops.name_scope(name, 'lt_reshape', [labeled_tensor]) as scope:
labeled_tensor = core.convert_to_labeled_tensor(labeled_tensor)
original_axis_names = list(labeled_tensor.axes.keys())
existing_axes = list(existing_axes)
if not set(existing_axes) <= set(original_axis_names):
raise ValueError('existing_axes %r are not contained in the set of axis '
'names %r on the input labeled tensor' %
(existing_axes, original_axis_names))
start = original_axis_names.index(existing_axes[0])
stop = original_axis_names.index(existing_axes[-1]) + 1
if existing_axes != original_axis_names[start:stop]:
# We could support existing_axes that aren't a slice by using transpose,
# but that could lead to unpredictable performance consequences because
# transposes are not free in TensorFlow. If we did transpose
# automatically, the user might never realize that their data is being
# produced with the wrong order. (The latter will occur with some frequency
# because of how broadcasting automatically chooses axis order.)
# So for now we've taken the strict approach.
raise core.AxisOrderError(
'existing_axes %r are not a slice of axis names %r on the input '
'labeled tensor. Use `transpose` or `impose_axis_order` to reorder '
'axes on the input explicitly.' %
(existing_axes, original_axis_names))
if sum(isinstance(axis, string_types) for axis in new_axes) > 1:
raise ValueError(
'at most one axis in new_axes can have unknown size. All other '
'axes must have an indicated integer size or labels: %r' % new_axes)
original_values = list(labeled_tensor.axes.values())
axis_size = lambda axis: -1 if axis.size is None else axis.size
shape = [axis_size(axis) for axis in original_values[:start]]
for axis_ref in new_axes:
if isinstance(axis_ref, string_types):
shape.append(-1)
else:
axis = core.as_axis(axis_ref)
shape.append(axis_size(axis))
shape.extend(axis_size(axis) for axis in original_values[stop:])
reshaped_tensor = array_ops.reshape(
labeled_tensor.tensor, shape, name=scope)
axes = original_values[:start] + list(new_axes) + original_values[stop:]
return core.LabeledTensor(reshaped_tensor, axes)
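The shape computation at the heart of `reshape` can be sketched in isolation: each new axis contributes its integer size, and at most one bare string contributes `-1` (unknown size, inferred by the underlying reshape). This is a simplified illustration, assuming sizes are already known integers:

```python
def new_shape(prefix_sizes, new_axes, suffix_sizes):
    """Build the target shape; a bare string means an inferred (-1) size."""
    if sum(isinstance(a, str) for a in new_axes) > 1:
        raise ValueError('at most one axis in new_axes can have unknown size')
    middle = [-1 if isinstance(a, str) else a[1] for a in new_axes]
    return prefix_sizes + middle + suffix_sizes

# Replace the middle axes with one axis 'flat' of unknown size:
print(new_shape([2], ['flat'], [3]))  # [2, -1, 3]
```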
@tc.returns(core.LabeledTensor)
@tc.accepts(core.LabeledTensorLike, string_types, string_types,
tc.Optional(string_types))
def rename_axis(labeled_tensor, existing_name, new_name, name=None):
"""Rename an axis of LabeledTensor.
Args:
labeled_tensor: The input tensor.
existing_name: Name for an existing axis on the input.
new_name: Desired replacement name.
name: Optional op name.
Returns:
LabeledTensor with renamed axis.
Raises:
ValueError: If `existing_name` is not an axis on the input.
"""
with ops.name_scope(name, 'lt_rename_axis', [labeled_tensor]) as scope:
if existing_name not in labeled_tensor.axes:
raise ValueError('existing_name %r is not contained in the set of axis '
'names %r on the input labeled tensor' %
(existing_name, labeled_tensor.axes.keys()))
new_axis = core.Axis(new_name, labeled_tensor.axes[existing_name].value)
return reshape(labeled_tensor, [existing_name], [new_axis], name=scope)
@tc.returns(tc.List(core.LabeledTensor))
@tc.accepts(string_types, collections.Callable, int, bool,
tc.Collection(core.LabeledTensorLike), bool,
tc.Optional(string_types))
def _batch_helper(default_name,
batch_fn,
batch_size,
enqueue_many,
labeled_tensors,
allow_smaller_final_batch,
name=None):
with ops.name_scope(name, default_name, labeled_tensors) as scope:
labeled_tensors = [
core.convert_to_labeled_tensor(lt) for lt in labeled_tensors
]
batch_ops = batch_fn([t.tensor for t in labeled_tensors], scope)
# TODO(shoyer): Remove this when they sanitize the TF API.
if not isinstance(batch_ops, list):
assert isinstance(batch_ops, ops.Tensor)
batch_ops = [batch_ops]
if allow_smaller_final_batch:
batch_size = None
@tc.returns(core.Axes)
@tc.accepts(core.Axes)
def output_axes(axes):
if enqueue_many:
if 'batch' not in axes or list(axes.keys()).index('batch') != 0:
raise ValueError(
'When enqueue_many is True, input tensors must have an axis '
'called "batch" as their first dimension, '
'but axes were %s' % axes)
culled_axes = axes.remove('batch')
return core.Axes([('batch', batch_size)] + list(culled_axes.values()))
else:
return core.Axes([('batch', batch_size)] + list(axes.values()))
output_labeled_tensors = []
for i, tensor in enumerate(batch_ops):
axes = output_axes(labeled_tensors[i].axes)
output_labeled_tensors.append(core.LabeledTensor(tensor, axes))
return output_labeled_tensors
@tc.returns(tc.List(core.LabeledTensor))
@tc.accepts(
tc.Collection(core.LabeledTensorLike), int, int, int, bool, bool,
tc.Optional(string_types))
def batch(labeled_tensors,
batch_size,
num_threads=1,
capacity=32,
enqueue_many=False,
allow_smaller_final_batch=False,
name=None):
"""Rebatch a tensor.
See tf.batch.
Args:
labeled_tensors: The input tensors.
batch_size: The output batch size.
num_threads: See tf.batch.
capacity: See tf.batch.
enqueue_many: If true, the input tensors must contain a 'batch' axis as
their first axis.
If false, the input tensors must not contain a 'batch' axis.
See tf.batch.
allow_smaller_final_batch: See tf.batch.
name: Optional op name.
Returns:
The rebatched tensors.
If enqueue_many is false, the output tensors will have a new 'batch' axis
as their first axis.
Raises:
ValueError: If enqueue_many is True and the first axis of the tensors
isn't "batch".
"""
def fn(tensors, scope):
return input.batch(
tensors,
batch_size=batch_size,
num_threads=num_threads,
capacity=capacity,
enqueue_many=enqueue_many,
allow_smaller_final_batch=allow_smaller_final_batch,
name=scope)
return _batch_helper('lt_batch', fn, batch_size, enqueue_many,
labeled_tensors, allow_smaller_final_batch, name)
@tc.returns(tc.List(core.LabeledTensor))
@tc.accepts(
tc.Collection(core.LabeledTensorLike), int, int, int, bool, int,
tc.Optional(int), bool, tc.Optional(string_types))
def shuffle_batch(labeled_tensors,
batch_size,
num_threads=1,
capacity=32,
enqueue_many=False,
min_after_dequeue=0,
seed=None,
allow_smaller_final_batch=False,
name=None):
"""Rebatch a tensor, with shuffling.
See tf.batch.
Args:
labeled_tensors: The input tensors.
batch_size: The output batch size.
num_threads: See tf.batch.
capacity: See tf.batch.
enqueue_many: If true, the input tensors must contain a 'batch' axis as
their first axis.
If false, the input tensors must not contain a 'batch' axis.
See tf.batch.
min_after_dequeue: Minimum number of elements in the queue after a dequeue,
used to ensure mixing.
seed: Optional random seed.
allow_smaller_final_batch: See tf.batch.
name: Optional op name.
Returns:
The rebatched tensors.
If enqueue_many is false, the output tensors will have a new 'batch' axis
as their first axis.
Raises:
ValueError: If enqueue_many is True and the first axis of the tensors
isn't "batch".
"""
def fn(tensors, scope):
return input.shuffle_batch(
tensors,
batch_size=batch_size,
num_threads=num_threads,
capacity=capacity,
enqueue_many=enqueue_many,
min_after_dequeue=min_after_dequeue,
seed=seed,
allow_smaller_final_batch=allow_smaller_final_batch,
name=scope)
return _batch_helper('lt_shuffle_batch', fn, batch_size, enqueue_many,
labeled_tensors, allow_smaller_final_batch, name)
@tc.returns(core.LabeledTensor)
@tc.accepts(core.LabeledTensorLike,
tc.Mapping(string_types, int),
tc.Optional(int), tc.Optional(string_types))
def random_crop(labeled_tensor, shape_map, seed=None, name=None):
"""Randomly crops a tensor to a given size.
See tf.random_crop.
Args:
labeled_tensor: The input tensor.
shape_map: A dictionary mapping axis names to the size of the random crop
for that dimension.
seed: An optional random seed.
name: An optional op name.
Returns:
A tensor of the same rank as `labeled_tensor`, cropped randomly in the
selected dimensions.
Raises:
ValueError: If the shape map contains an axis name not in the input tensor.
"""
with ops.name_scope(name, 'lt_random_crop', [labeled_tensor]) as scope:
labeled_tensor = core.convert_to_labeled_tensor(labeled_tensor)
for axis_name in shape_map:
if axis_name not in labeled_tensor.axes:
raise ValueError('Selection axis %s not in axes %s' %
(axis_name, labeled_tensor.axes))
shape = []
axes = []
for axis in labeled_tensor.axes.values():
if axis.name in shape_map:
size = shape_map[axis.name]
shape.append(size)
# We lose labels for the axes we crop, leaving just the size.
axes.append((axis.name, size))
else:
shape.append(len(axis))
axes.append(axis)
crop_op = random_ops.random_crop(
labeled_tensor.tensor, shape, seed=seed, name=scope)
return core.LabeledTensor(crop_op, axes)
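The crop-shape bookkeeping in `random_crop` can be illustrated with the standard library alone: axes named in `shape_map` are cropped to the requested size (losing their labels), while all other axes keep their full length. The `crop_plan` helper is a hypothetical sketch:

```python
import random

def crop_plan(axis_sizes, shape_map, seed=None):
    """Pick a random offset and target size for each axis."""
    rng = random.Random(seed)
    offsets, shape = [], []
    for name, size in axis_sizes:
        target = shape_map.get(name, size)  # uncropped axes keep full length
        offsets.append(rng.randrange(size - target + 1))
        shape.append(target)
    return offsets, shape

offsets, shape = crop_plan([('x', 5), ('y', 3)], {'x': 2}, seed=0)
print(shape)  # [2, 3]
```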
# TODO(shoyer): Allow the user to select the axis over which to map.
@tc.returns(core.LabeledTensor)
@tc.accepts(collections.Callable, core.LabeledTensorLike,
tc.Optional(string_types))
def map_fn(fn, labeled_tensor, name=None):
"""Map on the list of tensors unpacked from labeled_tensor.
See tf.map_fn.
Args:
fn: The function to apply to each unpacked LabeledTensor.
It should have type LabeledTensor -> LabeledTensor.
labeled_tensor: The input tensor.
name: Optional op name.
Returns:
A tensor that packs the results of applying fn to the list of tensors
unpacked from labeled_tensor.
"""
with ops.name_scope(name, 'lt_map_fn', [labeled_tensor]) as scope:
labeled_tensor = core.convert_to_labeled_tensor(labeled_tensor)
unpack_lts = unpack(labeled_tensor)
# TODO(ericmc): Fix this upstream.
if labeled_tensor.dtype == dtypes.string:
# We must construct the full graph here, because functional_ops.map_fn
# doesn't work for string-valued tensors.
# Constructing the full graph may be slow.
map_lts = [fn(t) for t in unpack_lts]
return pack(map_lts, list(labeled_tensor.axes.values())[0], name=scope)
else:
# Figure out what the axis labels should be, but use tf.map_fn to
# construct the graph because it's efficient.
# It may be slow to construct the full graph, so we infer the labels from
# the first element.
# TODO(ericmc): This builds a subgraph which then gets thrown away.
# Find a more elegant solution.
first_map_lt = fn(unpack_lts[0])
final_axes = list(labeled_tensor.axes.values())[:1] + list(
first_map_lt.axes.values())
@tc.returns(ops.Tensor)
@tc.accepts(ops.Tensor)
def tf_fn(tensor):
original_axes = list(labeled_tensor.axes.values())[1:]
tensor_lt = core.LabeledTensor(tensor, original_axes)
return fn(tensor_lt).tensor
map_op = functional_ops.map_fn(
tf_fn, labeled_tensor.tensor, dtype=first_map_lt.dtype)
map_lt = core.LabeledTensor(map_op, final_axes)
return core.identity(map_lt, name=scope)
@tc.returns(core.LabeledTensor)
@tc.accepts(collections.Callable, core.LabeledTensorLike,
core.LabeledTensorLike, tc.Optional(string_types))
def foldl(fn, labeled_tensor, initial_value, name=None):
"""Left fold on the list of tensors unpacked from labeled_tensor.
See tf.foldl.
Args:
fn: The function to apply to each unpacked LabeledTensor.
It should have type (LabeledTensor, LabeledTensor) -> LabeledTensor.
Its arguments are (accumulated_value, next_value).
labeled_tensor: The input tensor.
initial_value: The initial value of the accumulator.
name: Optional op name.
Returns:
The accumulated value.
"""
with ops.name_scope(name, 'lt_foldl',
[labeled_tensor, initial_value]) as scope:
labeled_tensor = core.convert_to_labeled_tensor(labeled_tensor)
initial_value = core.convert_to_labeled_tensor(initial_value)
@tc.returns(ops.Tensor)
@tc.accepts(ops.Tensor, ops.Tensor)
def tf_fn(accumulator, next_element):
accumulator_lt = core.LabeledTensor(accumulator, initial_value.axes)
next_element_lt = core.LabeledTensor(
next_element, list(labeled_tensor.axes.values())[1:])
return fn(accumulator_lt, next_element_lt).tensor
foldl_op = functional_ops.foldl(
tf_fn, labeled_tensor.tensor, initializer=initial_value.tensor)
foldl_lt = core.LabeledTensor(foldl_op, initial_value.axes)
return core.identity(foldl_lt, name=scope)
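Ignoring the graph construction and axis bookkeeping, `foldl` behaves like a left fold over the first axis; a plain Python analogue is `functools.reduce` with an explicit initial value:

```python
from functools import reduce

rows = [[1, 2], [3, 4], [5, 6]]   # stand-in for a rank-2 tensor
initial = [0, 0]                  # stand-in for the accumulator tensor
# fn(accumulated_value, next_value), applied left to right over rows:
total = reduce(lambda acc, row: [a + r for a, r in zip(acc, row)],
               rows, initial)
print(total)  # [9, 12]
```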
@tc.returns(core.LabeledTensor)
@tc.accepts(core.LabeledTensorLike,
tc.Optional(tc.Collection(string_types)), tc.Optional(string_types))
def squeeze(labeled_tensor, axis_names=None, name=None):
"""Remove size-1 dimensions.
See tf.squeeze.
Args:
labeled_tensor: The input tensor.
axis_names: The names of the dimensions to remove, or None to remove
all size-1 dimensions.
name: Optional op name.
Returns:
A tensor with the specified dimensions removed.
Raises:
ValueError: If the named axes are not in the tensor, or if they are
not size-1.
"""
with ops.name_scope(name, 'lt_squeeze', [labeled_tensor]) as scope:
labeled_tensor = core.convert_to_labeled_tensor(labeled_tensor)
if axis_names is None:
axis_names = [a.name for a in labeled_tensor.axes.values() if len(a) == 1]
for axis_name in axis_names:
if axis_name not in labeled_tensor.axes:
raise ValueError('axis %s is not in tensor axes %s' %
(axis_name, labeled_tensor.axes))
elif len(labeled_tensor.axes[axis_name]) != 1:
raise ValueError(
'cannot squeeze axis with size greater than 1: (%s, %s)' %
(axis_name, labeled_tensor.axes[axis_name]))
squeeze_dimensions = []
axes = []
for i, axis in enumerate(labeled_tensor.axes.values()):
if axis.name in axis_names:
squeeze_dimensions.append(i)
else:
axes.append(axis)
if squeeze_dimensions:
squeeze_op = array_ops.squeeze(
labeled_tensor.tensor, squeeze_dimensions, name=scope)
else:
squeeze_op = array_ops.identity(labeled_tensor.tensor, name=scope)
return core.LabeledTensor(squeeze_op, axes)
# pylint: disable=invalid-name
ReduceAxis = tc.Union(string_types,
tc.Tuple(string_types, collections.Hashable))
ReduceAxes = tc.Optional(tc.Union(ReduceAxis, tc.Collection(ReduceAxis)))
# pylint: enable=invalid-name
@tc.returns(core.LabeledTensor)
@tc.accepts(core.LabeledTensorLike, core.LabeledTensorLike,
tc.Optional(string_types))
def matmul(a, b, name=None):
"""Matrix multiply two tensors with rank 1 or 2.
If both tensors have rank 2, a matrix-matrix product is performed.
If one tensor has rank 1 and the other has rank 2, then a matrix-vector
product is performed.
If both tensors have rank 1, then a vector dot-product is performed.
(This behavior matches that of `numpy.dot`.)
Both tensors must share exactly one dimension in common, which is the
dimension the operation is summed along. The inputs will be automatically
transposed if necessary as part of the matmul op.
We intend to eventually support `matmul` on higher rank input, and also
eventually support summing over any number of shared dimensions (via an `axis`
argument), but neither of these features has been implemented yet.
Args:
a: First LabeledTensor.
b: Second LabeledTensor.
name: Optional op name.
Returns:
LabeledTensor with the result of matrix multiplication. Axes are ordered by
the current axis_order_scope, if set, or in order of appearance on the
inputs.
Raises:
NotImplementedError: If inputs have rank >2 or share multiple axes.
ValueError: If the inputs have rank 0 or do not share any axes.
"""
with ops.name_scope(name, 'lt_matmul', [a, b]) as scope:
a = core.convert_to_labeled_tensor(a)
b = core.convert_to_labeled_tensor(b)
if len(a.axes) > 2 or len(b.axes) > 2:
# We could pass batched inputs to tf.matmul to make this work, but we
# would also need to use tf.tile and/or tf.transpose. These are more
# expensive than doing reshapes, so it's not clear if it's a good idea to
# do this automatically.
raise NotImplementedError(
'matmul currently requires inputs with rank 2 or less, but '
'inputs have ranks %r and %r' % (len(a.axes), len(b.axes)))
if not a.axes or not b.axes:
raise ValueError(
'matmul currently requires inputs with at least rank 1, but '
'inputs have ranks %r and %r' % (len(a.axes), len(b.axes)))
shared_axes = set(a.axes) & set(b.axes)
if len(shared_axes) > 1:
raise NotImplementedError(
'matmul does not yet support summing over multiple shared axes: %r. '
'Use transpose and reshape to create a single shared axis to sum '
'over.' % shared_axes)
if not shared_axes:
raise ValueError('there must be exactly one axis in common between '
'the inputs to matmul: %r, %r' %
(a.axes.keys(), b.axes.keys()))
shared_axis, = shared_axes
if a.axes[shared_axis] != b.axes[shared_axis]:
raise ValueError('axis %r does not match on input arguments: %r vs %r' %
(shared_axis, a.axes[shared_axis].value,
b.axes[shared_axis].value))
result_axes = []
for axes in [a.axes, b.axes]:
for axis in axes.values():
if axis.name != shared_axis:
result_axes.append(axis)
axis_scope_order = core.get_axis_order()
if axis_scope_order is not None:
result_axis_names = [axis.name for axis in result_axes]
new_axis_names = [
name for name in axis_scope_order if name in result_axis_names
]
if new_axis_names != result_axis_names:
# switch a and b
b, a = a, b
# result_axes is a list of length 1 or 2
result_axes = result_axes[::-1]
squeeze_dims = []
if len(a.axes) == 1:
a_tensor = array_ops.reshape(a.tensor, (1, -1))
squeeze_dims.append(0)
transpose_a = False
else:
a_tensor = a.tensor
transpose_a = list(a.axes.keys()).index(shared_axis) == 0
if len(b.axes) == 1:
b_tensor = array_ops.reshape(b.tensor, (-1, 1))
squeeze_dims.append(1)
transpose_b = False
else:
b_tensor = b.tensor
transpose_b = list(b.axes.keys()).index(shared_axis) == 1
result_op = math_ops.matmul(
a_tensor, b_tensor, transpose_a=transpose_a, transpose_b=transpose_b)
if squeeze_dims:
result_op = array_ops.squeeze(result_op, squeeze_dims)
result_op = array_ops.identity(result_op, name=scope)
return core.LabeledTensor(result_op, result_axes)
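The transpose-flag logic in `matmul` above can be isolated: a rank-2 input is transposed exactly when its shared (contraction) axis is not already where `tf.matmul` expects it, i.e. last on `a` and first on `b`. A self-contained sketch of that decision:

```python
def transpose_flags(a_axes, b_axes, shared):
    """Mirror matmul's rule: rank-1 inputs are never transposed."""
    transpose_a = len(a_axes) == 2 and a_axes.index(shared) == 0
    transpose_b = len(b_axes) == 2 and b_axes.index(shared) == 1
    return transpose_a, transpose_b

print(transpose_flags(['k', 'm'], ['k', 'n'], 'k'))  # (True, False)
print(transpose_flags(['m', 'k'], ['n', 'k'], 'k'))  # (False, True)
```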
@tc.returns(types.FunctionType)
@tc.accepts(string_types, collections.Callable)
def define_reduce_op(op_name, reduce_fn):
"""Define a reduction op for labeled tensors.
Args:
op_name: string name of the TensorFlow op.
reduce_fn: function to call to evaluate the op on a tf.Tensor.
Returns:
Function defining the given reduction op that acts on a LabeledTensor.
"""
default_name = 'lt_%s' % op_name
@tc.returns(core.LabeledTensor)
@tc.accepts(core.LabeledTensorLike, ReduceAxes, tc.Optional(string_types))
def op(labeled_tensor, axes=None, name=None):
"""Computes the given reduction across the given axes of a LabeledTensor.
See `tf.{op_name}` for full details.
Args:
labeled_tensor: The input tensor.
axes: A set of axes or None.
If None, all axes will be reduced.
Axes must all be strings, in which case those dimensions will be
removed, or pairs of (name, None) or (name, label), in which case those
dimensions will be kept.
name: Optional op name.
Returns:
The reduced LabeledTensor.
Raises:
ValueError: if any of the axes to reduce over are not found on
`labeled_tensor`.
"""
with ops.name_scope(name, default_name, [labeled_tensor]) as scope:
labeled_tensor = core.convert_to_labeled_tensor(labeled_tensor)
if axes is None:
axes = labeled_tensor.axes.keys()
if isinstance(axes, (string_types, tuple)):
axes = [axes]
reduction_axes = {}
axes_to_squeeze = []
for a in axes:
if isinstance(a, string_types):
# We squeeze out this axis.
reduction_axes[a] = a
axes_to_squeeze.append(a)
else:
# We keep this axis, with the user-provided labels.
(axis_name, label) = a
if label is not None:
# The input was a single label, so make it a list so it can be
# turned into an Axis.
label = [label]
reduction_axes[axis_name] = (axis_name, label)
for axis_name in reduction_axes:
if axis_name not in labeled_tensor.axes:
raise ValueError('Axis %s not in axes %s' %
(axis_name, labeled_tensor.axes))
intermediate_axes = []
reduction_dimensions = []
for i, axis in enumerate(labeled_tensor.axes.values()):
if axis.name in reduction_axes:
intermediate_axes.append(reduction_axes[axis.name])
reduction_dimensions.append(i)
else:
intermediate_axes.append(axis)
reduce_op = reduce_fn(
labeled_tensor.tensor, reduction_dimensions, keepdims=True)
reduce_lt = core.LabeledTensor(reduce_op, intermediate_axes)
return squeeze(reduce_lt, axes_to_squeeze, name=scope)
op.__doc__ = op.__doc__.format(op_name=op_name)
op.__name__ = op_name
return op
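The factory pattern used by `define_reduce_op`, where one template produces `reduce_sum`, `reduce_max`, and the rest, has a compact plain-Python analogue (operating on flat lists rather than tensors):

```python
def define_reduce(op_name, reduce_fn):
    """Return a named reduction op built from a plain reduction function."""
    def op(values):
        return reduce_fn(values)
    op.__name__ = op_name
    return op

reduce_sum = define_reduce('reduce_sum', sum)
reduce_max = define_reduce('reduce_max', max)
print(reduce_sum([1, 2, 3]), reduce_max([1, 2, 3]))  # 6 3
```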
reduce_all = define_reduce_op('reduce_all', math_ops.reduce_all)
reduce_any = define_reduce_op('reduce_any', math_ops.reduce_any)
reduce_logsumexp = define_reduce_op('reduce_logsumexp',
math_ops.reduce_logsumexp)
reduce_max = define_reduce_op('reduce_max', math_ops.reduce_max)
reduce_mean = define_reduce_op('reduce_mean', math_ops.reduce_mean)
reduce_min = define_reduce_op('reduce_min', math_ops.reduce_min)
reduce_prod = define_reduce_op('reduce_prod', math_ops.reduce_prod)
reduce_sum = define_reduce_op('reduce_sum', math_ops.reduce_sum)
@tc.returns(core.LabeledTensor)
@tc.accepts(core.LabeledTensorLike,
tc.Mapping(str, tc.Union(int, ops.Tensor)),
tc.Optional(string_types))
def tile(labeled_tensor, multiples, name=None):
"""Constructs a tensor by tiling a given tensor.
Only axes without tick-labels can be tiled. (Otherwise, axis labels on tiled
tensors would no longer be unique.)
See tf.tile.
Args:
labeled_tensor: The input tensor.
multiples: A mapping where the keys are axis names and the values are the
integer number of times to tile along that axis. Only axes with a multiple
different from 1 need be included.
name: Optional op name.
Returns:
A tensor with the indicated axes tiled.
Raises:
ValueError: If the tiled axes are not axes in the input tensor, or if any
axes in multiples have tick labels.
"""
with ops.name_scope(name, 'lt_tile', [labeled_tensor]) as scope:
labeled_tensor = core.convert_to_labeled_tensor(labeled_tensor)
if not set(multiples.keys()) <= set(labeled_tensor.axes.keys()):
raise ValueError('tile axes %r are not contained in the set of axis '
'names %r on the input labeled tensor' %
(multiples.keys(), labeled_tensor.axes))
labeled_axes = [
name for name in multiples
if labeled_tensor.axes[name].labels is not None
]
if labeled_axes:
raise ValueError('cannot tile axes with tick labels: %r' % labeled_axes)
multiples_list = [multiples.get(name, 1) for name in labeled_tensor.axes]
tile_op = array_ops.tile(labeled_tensor.tensor, multiples_list, name=scope)
new_axes = [
axis.name if axis.labels is None else axis
for axis in labeled_tensor.axes.values()
]
return core.LabeledTensor(tile_op, new_axes)
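The constraint enforced by `tile`, that only axes without tick labels may be tiled, plus the defaulting of unmentioned axes to a multiple of 1, can be sketched over a mapping from axis name to its labels (or `None`):

```python
def tile_multiples(axes, multiples):
    """Build the per-axis multiples list; labeled axes cannot be tiled."""
    labeled = [n for n in multiples if axes[n] is not None]
    if labeled:
        raise ValueError('cannot tile axes with tick labels: %r' % labeled)
    return [multiples.get(name, 1) for name in axes]

axes = {'batch': None, 'channel': ['r', 'g', 'b']}  # labels or None
print(tile_multiples(axes, {'batch': 4}))  # [4, 1]
```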
@tc.returns(core.LabeledTensor)
@tc.accepts(core.LabeledTensorLike,
tc.Mapping(str, tc.Tuple(core.AxisValue, core.AxisValue)),
string_types, tc.Optional(string_types))
def pad(labeled_tensor, paddings, mode='CONSTANT', name=None):
"""Pads a tensor.
See tf.pad.
Args:
labeled_tensor: The input tensor.
paddings: A mapping where the keys are axis names and the values are
tuples where the first element is the padding to insert at the beginning
of the axis and the second is the padding to insert at the end of the
axis.
mode: One of "CONSTANT", "REFLECT", or "SYMMETRIC".
name: Optional op name.
Returns:
A tensor with the indicated axes padded, optionally with those axes extended
with the provided labels.
Raises:
ValueError: If the padded axes are not axes in the input tensor.
"""
with ops.name_scope(name, 'lt_pad', [labeled_tensor]) as scope:
labeled_tensor = core.convert_to_labeled_tensor(labeled_tensor)
if not set(paddings.keys()) <= set(labeled_tensor.axes.keys()):
raise ValueError('pad axes %r are not contained in the set of axis '
'names %r on the input labeled tensor' %
(paddings.keys(), labeled_tensor.axes))
new_axes = []
padding_pairs = []
for name, axis in labeled_tensor.axes.items():
if name in paddings:
padding_before, padding_after = paddings[name]
axis_before = core.Axis(name, padding_before)
axis_after = core.Axis(name, padding_after)
new_axes.append(core.concat_axes([axis_before, axis, axis_after]))
padding_pairs.append((len(axis_before), len(axis_after)))
else:
new_axes.append(axis)
padding_pairs.append((0, 0))
pad_op = array_ops.pad(labeled_tensor.tensor,
padding_pairs,
mode,
name=scope)
return core.LabeledTensor(pad_op, new_axes)
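The `padding_pairs` construction above follows the same pattern; a small pure-Python sketch of how the name-keyed `paddings` mapping becomes the positional pair list `array_ops.pad` expects (axis names are hypothetical):

```python
def positional_paddings(axis_order, paddings):
    # Build the (before, after) pair list for array_ops.pad, defaulting
    # unpadded axes to (0, 0). The real pad() also builds new Axis
    # objects; this sketch covers only the pair list.
    return [paddings.get(name, (0, 0)) for name in axis_order]

print(positional_paddings(["x", "y"], {"y": (2, 1)}))  # [(0, 0), (2, 1)]
```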
@tc.returns(core.LabeledTensor)
@tc.accepts(
tc.Union(np.ndarray, list, tuple, core.Scalar),
tc.Optional(dtypes.DType),
tc.Optional(
tc.Union(core.Axes, tc.Collection(
tc.Union(string_types, core.AxisLike)))), tc.Optional(string_types))
def constant(value, dtype=None, axes=None, name=None):
"""Creates a constant tensor.
If `axes` includes any strings, shape is inferred from `value`. Otherwise,
the sizes of the given `axes` are used to set `shape` for `tf.constant`.
See tf.constant for more details.
Args:
value: The input tensor.
dtype: The type of the returned tensor.
axes: Optional Axes, list of strings or list of objects coercible to Axis
objects. By default, axes are assumed to be an empty list (i.e., `value`
is treated as a scalar).
name: Optional op name.
Returns:
The labeled tensor with the given constant value.
"""
with ops.name_scope(name, 'lt_constant', [value]) as scope:
if axes is None:
axes = []
if isinstance(axes, core.Axes):
axes = axes.values()
if any(isinstance(ax, string_types) for ax in axes):
# need to infer shape
shape = None
else:
# axes already indicate shape
axes = [core.as_axis(a) for a in axes]
shape = [a.size for a in axes]
op = array_ops.constant(value, dtype=dtype, shape=shape, name=scope)
return core.LabeledTensor(op, axes)
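The shape-inference branch in `constant` can be sketched in plain Python: if any axis is given only by name (a string), the shape must be inferred from `value`; otherwise the declared axis sizes fix it. The tuple representation here is a hypothetical stand-in for Axis objects:

```python
def infer_shape(axes):
    # Mirror of constant()'s branch: string entries force shape inference
    # (shape=None); otherwise the declared sizes determine the shape.
    if any(isinstance(ax, str) for ax in axes):
        return None
    return [size for _, size in axes]

print(infer_shape(["x", ("y", 3)]))       # None
print(infer_shape([("x", 2), ("y", 3)]))  # [2, 3]
```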
@tc.returns(core.LabeledTensor)
@tc.accepts(core.LabeledTensorLike,
tc.Optional(dtypes.DType), tc.Optional(string_types))
def zeros_like(labeled_tensor, dtype=None, name=None):
"""Creates an identical tensor with all elements set to zero.
Args:
labeled_tensor: The input tensor.
dtype: The type of the returned tensor.
name: Optional op name.
Returns:
The tensor with elements set to zero.
"""
with ops.name_scope(name, 'lt_zeros_like', [labeled_tensor]) as scope:
labeled_tensor = core.convert_to_labeled_tensor(labeled_tensor)
op = array_ops.zeros_like(labeled_tensor.tensor, dtype=dtype, name=scope)
return core.LabeledTensor(op, labeled_tensor.axes)
@tc.returns(core.LabeledTensor)
@tc.accepts(core.LabeledTensorLike,
tc.Optional(dtypes.DType), tc.Optional(string_types))
def ones_like(labeled_tensor, dtype=None, name=None):
"""Creates an identical tensor with all elements set to one.
Args:
labeled_tensor: The input tensor.
dtype: The type of the returned tensor.
name: Optional op name.
Returns:
The tensor with elements set to one.
"""
with ops.name_scope(name, 'lt_ones_like', [labeled_tensor]) as scope:
labeled_tensor = core.convert_to_labeled_tensor(labeled_tensor)
op = array_ops.ones_like(labeled_tensor.tensor, dtype=dtype, name=scope)
return core.LabeledTensor(op, labeled_tensor.axes)
@tc.returns(core.LabeledTensor)
@tc.accepts(core.LabeledTensorLike,
tc.Optional(dtypes.DType), tc.Optional(string_types))
def cast(labeled_tensor, dtype=None, name=None):
"""Casts a labeled tensor to a new type.
Args:
labeled_tensor: The input tensor.
dtype: The type of the returned tensor.
name: Optional op name.
Returns:
A labeled tensor with the new dtype.
"""
with ops.name_scope(name, 'lt_cast', [labeled_tensor]) as scope:
labeled_tensor = core.convert_to_labeled_tensor(labeled_tensor)
op = math_ops.cast(labeled_tensor.tensor, dtype=dtype, name=scope)
return core.LabeledTensor(op, labeled_tensor.axes)
@tc.returns(core.LabeledTensor)
@tc.accepts(core.LabeledTensorLike, string_types, tc.Optional(string_types))
def verify_tensor_all_finite(labeled_tensor, message, name=None):
"""Asserts a tensor doesn't contain NaNs or Infs.
See tf.verify_tensor_all_finite.
Args:
labeled_tensor: The input tensor.
message: Message to log on failure.
name: Optional op name.
Returns:
The input tensor.
"""
with ops.name_scope(name, 'lt_verify_tensor_all_finite',
[labeled_tensor]) as scope:
labeled_tensor = core.convert_to_labeled_tensor(labeled_tensor)
op = numerics.verify_tensor_all_finite(
labeled_tensor.tensor, msg=message, name=scope)
return core.LabeledTensor(op, labeled_tensor.axes)
@tc.returns(core.LabeledTensor)
@tc.accepts(core.LabeledTensorLike, core.LabeledTensorLike,
tc.Optional(string_types))
def boolean_mask(labeled_tensor, mask, name=None):
"""Apply a boolean mask to a labeled tensor.
Unlike `tf.boolean_mask`, this currently only works on 1-dimensional masks.
The mask is applied to the first axis of `labeled_tensor`. Labels on the first
axis are removed, because the True indices in `mask` cannot be known statically.
Args:
labeled_tensor: The input tensor.
mask: A 1-D boolean LabeledTensor whose single axis matches the first axis
of `labeled_tensor`.
name: Optional op name.
Returns:
The masked labeled tensor.
Raises:
ValueError: if the first axis of the mask does not match the first axis of
the labeled tensor.
"""
with ops.name_scope(name, 'lt_boolean_mask', [labeled_tensor, mask]) as scope:
labeled_tensor = core.convert_to_labeled_tensor(labeled_tensor)
mask = core.convert_to_labeled_tensor(mask)
if len(mask.axes) > 1:
raise NotImplementedError(
"LabeledTensor's boolean_mask currently only supports 1D masks")
mask_axis = list(mask.axes.values())[0]
lt_axis = list(labeled_tensor.axes.values())[0]
if mask_axis != lt_axis:
raise ValueError('the first axis of the labeled tensor and the mask '
'are not equal:\n%r\n%r' % (lt_axis, mask_axis))
op = array_ops.boolean_mask(labeled_tensor.tensor, mask.tensor, name=scope)
# TODO(shoyer): attempt to infer labels for the masked values, by calling
# tf.contrib.util.constant_value on the mask?
axes = [lt_axis.name] + list(labeled_tensor.axes.values())[1:]
return core.LabeledTensor(op, axes)
@tc.returns(core.LabeledTensor)
@tc.accepts(core.LabeledTensorLike, core.LabeledTensorLike,
core.LabeledTensorLike, tc.Optional(string_types))
def where(condition, x, y, name=None):
"""Return elements from x or y depending on condition.
See `tf.where` for more details. This function currently only implements the
three argument version of where.
Args:
condition: LabeledTensor of type `bool`.
x: LabeledTensor for values where condition is true.
y: LabeledTensor for values where condition is false.
name: Optional op name.
Returns:
The labeled tensor with values according to condition.
Raises:
ValueError: if `x` and `y` have different axes, or if the axes of `x` do not
start with the axes of `condition`.
"""
with ops.name_scope(name, 'lt_where', [condition, x, y]) as scope:
condition = core.convert_to_labeled_tensor(condition)
x = core.convert_to_labeled_tensor(x)
y = core.convert_to_labeled_tensor(y)
if not condition.axes == x.axes == y.axes:
raise ValueError('all inputs to `where` must have equal axes')
op = array_ops.where(condition.tensor, x.tensor, y.tensor, name=scope)
return core.LabeledTensor(op, x.axes)
| apache-2.0 |
the13fools/Bokeh_Examples | plotting/file/glucose.py | 3 | 1456 |
import pandas as pd
from bokeh.sampledata.glucose import data
from bokeh.plotting import *
output_file("glucose.html", title="glucose.py example")
hold()
dates = data.index.to_series()
figure(x_axis_type="datetime", tools="pan,wheel_zoom,box_zoom,reset,previewsave")
line(dates, data['glucose'], color='red', legend='glucose')
line(dates, data['isig'], color='blue', legend='isig')
curplot().title = "Glucose Measurements"
day = data.ix['2010-10-06']
highs = day[day['glucose'] > 180]
lows = day[day['glucose'] < 80]
figure(x_axis_type="datetime", tools="pan,wheel_zoom,box_zoom,reset,previewsave")
line(day.index.to_series(), day['glucose'],
line_color="gray", line_dash="4 4", line_width=1, legend="glucose")
scatter(highs.index.to_series(), highs['glucose'], size=6, color='tomato', legend="high")
scatter(lows.index.to_series(), lows['glucose'], size=6, color='navy', legend="low")
curplot().title = "Glucose Range"
xgrid()[0].grid_line_color=None
ygrid()[0].grid_line_alpha=0.5
data['inrange'] = (data['glucose'] < 180) & (data['glucose'] > 80)
window = 30.5*288  # 288 is the average number of samples in a day
inrange = pd.rolling_sum(data.inrange, window)
inrange = inrange.dropna()
inrange = inrange/float(window)
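The rolling in-range fraction computed above can be sketched in pure Python (pandas-free, hypothetical data) to show what `rolling_sum / window` yields:

```python
def rolling_fraction(flags, window):
    # Trailing-window fraction of True flags, like
    # pd.rolling_sum(...).dropna() / window in the script above.
    return [sum(flags[i - window + 1:i + 1]) / float(window)
            for i in range(window - 1, len(flags))]

print(rolling_fraction([True, True, False, True], 2))  # [1.0, 0.5, 0.5]
```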
figure(x_axis_type="datetime", tools="pan,wheel_zoom,box_zoom,reset,previewsave")
line(inrange.index.to_series(), inrange, line_color="navy")
curplot().title = "Glucose In-Range Rolling Sum"
# open a browser
show()
| bsd-3-clause |
massgov/incubator-superset | superset/views/core.py | 1 | 84670 | from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
from collections import defaultdict
from datetime import datetime, timedelta
import json
import logging
import pandas as pd
import pickle
import re
import time
import traceback
import sqlalchemy as sqla
from flask import (
g, request, redirect, flash, Response, render_template, Markup,
abort, url_for)
from flask_appbuilder import expose
from flask_appbuilder.actions import action
from flask_appbuilder.models.sqla.interface import SQLAInterface
from flask_appbuilder.security.decorators import has_access_api
from flask_appbuilder.security.sqla import models as ab_models
from flask_babel import gettext as __
from flask_babel import lazy_gettext as _
from sqlalchemy import create_engine
from werkzeug.routing import BaseConverter
from superset import (
appbuilder, cache, db, viz, utils, app,
sm, sql_lab, results_backend, security,
)
from superset.legacy import cast_form_data
from superset.utils import has_access, QueryStatus
from superset.connectors.connector_registry import ConnectorRegistry
import superset.models.core as models
from superset.models.sql_lab import Query
from superset.sql_parse import SupersetQuery
from .base import (
api, SupersetModelView, BaseSupersetView, DeleteMixin,
SupersetFilter, get_user_roles, json_error_response, get_error_msg
)
config = app.config
stats_logger = config.get('STATS_LOGGER')
log_this = models.Log.log_this
can_access = utils.can_access
DAR = models.DatasourceAccessRequest
ALL_DATASOURCE_ACCESS_ERR = __(
"This endpoint requires the `all_datasource_access` permission")
DATASOURCE_MISSING_ERR = __("The datasource seems to have been deleted")
ACCESS_REQUEST_MISSING_ERR = __(
"The access requests seem to have been deleted")
USER_MISSING_ERR = __("The user seems to have been deleted")
DATASOURCE_ACCESS_ERR = __("You don't have access to this datasource")
def get_database_access_error_msg(database_name):
return __("This view requires the database %(name)s or "
"`all_datasource_access` permission", name=database_name)
def get_datasource_access_error_msg(datasource_name):
return __("This endpoint requires the datasource %(name)s, database or "
"`all_datasource_access` permission", name=datasource_name)
def json_success(json_msg, status=200):
return Response(json_msg, status=status, mimetype="application/json")
def is_owner(obj, user):
""" Check if user is owner of the slice """
return obj and user in obj.owners
def check_ownership(obj, raise_if_false=True):
"""Meant to be used in `pre_update` hooks on models to enforce ownership
Admin have all access, and other users need to be referenced on either
the created_by field that comes with the ``AuditMixin``, or in a field
named ``owners`` which is expected to be a one-to-many with the User
model. It is meant to be used in the ModelView's pre_update hook in
which raising will abort the update.
"""
if not obj:
return False
security_exception = utils.SupersetSecurityException(
"You don't have the rights to alter [{}]".format(obj))
if g.user.is_anonymous():
if raise_if_false:
raise security_exception
return False
roles = (r.name for r in get_user_roles())
if 'Admin' in roles:
return True
session = db.create_scoped_session()
orig_obj = session.query(obj.__class__).filter_by(id=obj.id).first()
owner_names = (user.username for user in orig_obj.owners)
if (
hasattr(orig_obj, 'created_by') and
orig_obj.created_by and
orig_obj.created_by.username == g.user.username):
return True
if (
hasattr(orig_obj, 'owners') and
g.user and
hasattr(g.user, 'username') and
g.user.username in owner_names):
return True
if raise_if_false:
raise security_exception
else:
return False
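The ownership rule that `check_ownership` enforces can be condensed into a small standalone sketch (names and roles here are hypothetical; the real function also queries the database and raises instead of returning False):

```python
def is_allowed(username, roles, created_by, owner_names):
    # Admins always pass; otherwise the user must be the object's
    # creator or appear in its owners list.
    if "Admin" in roles:
        return True
    return username == created_by or username in owner_names

print(is_allowed("bob", ["Gamma"], "alice", {"bob", "carol"}))  # True
print(is_allowed("eve", ["Gamma"], "alice", {"bob"}))           # False
```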
class SliceFilter(SupersetFilter):
def apply(self, query, func): # noqa
if self.has_all_datasource_access():
return query
perms = self.get_view_menus('datasource_access')
# TODO(bogdan): add `schema_access` support here
return query.filter(self.model.perm.in_(perms))
class DashboardFilter(SupersetFilter):
"""List dashboards for which users have access to at least one slice"""
def apply(self, query, func): # noqa
if self.has_all_datasource_access():
return query
Slice = models.Slice # noqa
Dash = models.Dashboard # noqa
# TODO(bogdan): add `schema_access` support here
datasource_perms = self.get_view_menus('datasource_access')
slice_ids_qry = (
db.session
.query(Slice.id)
.filter(Slice.perm.in_(datasource_perms))
)
query = query.filter(
Dash.id.in_(
db.session.query(Dash.id)
.distinct()
.join(Dash.slices)
.filter(Slice.id.in_(slice_ids_qry))
)
)
return query
def generate_download_headers(extension):
filename = datetime.now().strftime("%Y%m%d_%H%M%S")
content_disp = "attachment; filename={}.{}".format(filename, extension)
headers = {
"Content-Disposition": content_disp,
}
return headers
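A deterministic sketch of `generate_download_headers`, with the timestamp injected as a parameter so the result is reproducible:

```python
from datetime import datetime

def download_headers(extension, now):
    # Same shape as generate_download_headers above, but the clock is
    # passed in rather than read from datetime.now().
    filename = now.strftime("%Y%m%d_%H%M%S")
    return {"Content-Disposition":
            "attachment; filename={}.{}".format(filename, extension)}

print(download_headers("csv", datetime(2017, 1, 2, 3, 4, 5)))
# {'Content-Disposition': 'attachment; filename=20170102_030405.csv'}
```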
class DatabaseView(SupersetModelView, DeleteMixin): # noqa
datamodel = SQLAInterface(models.Database)
list_title = _('List Databases')
show_title = _('Show Database')
add_title = _('Add Database')
edit_title = _('Edit Database')
list_columns = [
'database_name', 'backend', 'allow_run_sync', 'allow_run_async',
'allow_dml', 'creator', 'modified']
add_columns = [
'database_name', 'sqlalchemy_uri', 'cache_timeout', 'extra',
'expose_in_sqllab', 'allow_run_sync', 'allow_run_async',
'allow_ctas', 'allow_dml', 'force_ctas_schema']
search_exclude_columns = (
'password', 'tables', 'created_by', 'changed_by', 'queries',
'saved_queries', )
edit_columns = add_columns
show_columns = [
'tables',
'cache_timeout',
'extra',
'database_name',
'sqlalchemy_uri',
'perm',
'created_by',
'created_on',
'changed_by',
'changed_on',
]
add_template = "superset/models/database/add.html"
edit_template = "superset/models/database/edit.html"
base_order = ('changed_on', 'desc')
description_columns = {
'sqlalchemy_uri': utils.markdown(
"Refer to the "
"[SqlAlchemy docs]"
"(http://docs.sqlalchemy.org/en/rel_1_0/core/engines.html#"
"database-urls) "
"for more information on how to structure your URI.", True),
'expose_in_sqllab': _("Expose this DB in SQL Lab"),
'allow_run_sync': _(
"Allow users to run synchronous queries, this is the default "
"and should work well for queries that can be executed "
"within a web request scope (<~1 minute)"),
'allow_run_async': _(
"Allow users to run queries, against an async backend. "
"This assumes that you have a Celery worker setup as well "
"as a results backend."),
'allow_ctas': _("Allow CREATE TABLE AS option in SQL Lab"),
'allow_dml': _(
"Allow users to run non-SELECT statements "
"(UPDATE, DELETE, CREATE, ...) "
"in SQL Lab"),
'force_ctas_schema': _(
"When allowing CREATE TABLE AS option in SQL Lab, "
"this option forces the table to be created in this schema"),
'extra': utils.markdown(
"JSON string containing extra configuration elements. "
"The ``engine_params`` object gets unpacked into the "
"[sqlalchemy.create_engine]"
"(http://docs.sqlalchemy.org/en/latest/core/engines.html#"
"sqlalchemy.create_engine) call, while the ``metadata_params`` "
"gets unpacked into the [sqlalchemy.MetaData]"
"(http://docs.sqlalchemy.org/en/rel_1_0/core/metadata.html"
"#sqlalchemy.schema.MetaData) call. ", True),
}
label_columns = {
'expose_in_sqllab': _("Expose in SQL Lab"),
'allow_ctas': _("Allow CREATE TABLE AS"),
'allow_dml': _("Allow DML"),
'force_ctas_schema': _("CTAS Schema"),
'database_name': _("Database"),
'creator': _("Creator"),
'changed_on_': _("Last Changed"),
'sqlalchemy_uri': _("SQLAlchemy URI"),
'cache_timeout': _("Cache Timeout"),
'extra': _("Extra"),
'allow_run_sync': _("Allow Run Sync"),
'allow_run_async': _("Allow Run Async"),
}
def pre_add(self, db):
db.set_sqlalchemy_uri(db.sqlalchemy_uri)
security.merge_perm(sm, 'database_access', db.perm)
for schema in db.all_schema_names():
security.merge_perm(
sm, 'schema_access', utils.get_schema_perm(db, schema))
def pre_update(self, db):
self.pre_add(db)
def _delete(self, pk):
DeleteMixin._delete(self, pk)
appbuilder.add_link(
'Import Dashboards',
label=__("Import Dashboards"),
href='/superset/import_dashboards',
icon="fa-cloud-upload",
category='Manage',
category_label=__("Manage"),
category_icon='fa-wrench',)
appbuilder.add_view(
DatabaseView,
"Databases",
label=__("Databases"),
icon="fa-database",
category="Sources",
category_label=__("Sources"),
category_icon='fa-database',)
class DatabaseAsync(DatabaseView):
list_columns = [
'id', 'database_name',
'expose_in_sqllab', 'allow_ctas', 'force_ctas_schema',
'allow_run_async', 'allow_run_sync', 'allow_dml',
]
appbuilder.add_view_no_menu(DatabaseAsync)
class DatabaseTablesAsync(DatabaseView):
list_columns = ['id', 'all_table_names', 'all_schema_names']
appbuilder.add_view_no_menu(DatabaseTablesAsync)
class AccessRequestsModelView(SupersetModelView, DeleteMixin):
datamodel = SQLAInterface(DAR)
list_columns = [
'username', 'user_roles', 'datasource_link',
'roles_with_datasource', 'created_on']
order_columns = ['username', 'datasource_link']
base_order = ('changed_on', 'desc')
label_columns = {
'username': _("User"),
'user_roles': _("User Roles"),
'database': _("Database URL"),
'datasource_link': _("Datasource"),
'roles_with_datasource': _("Roles to grant"),
'created_on': _("Created On"),
}
appbuilder.add_view(
AccessRequestsModelView,
"Access requests",
label=__("Access requests"),
category="Security",
category_label=__("Security"),
icon='fa-table',)
class SliceModelView(SupersetModelView, DeleteMixin): # noqa
datamodel = SQLAInterface(models.Slice)
list_title = _('List Slices')
show_title = _('Show Slice')
add_title = _('Add Slice')
edit_title = _('Edit Slice')
can_add = False
label_columns = {
'datasource_link': _('Datasource'),
}
search_columns = (
'slice_name', 'description', 'viz_type', 'owners',
)
list_columns = [
'slice_link', 'viz_type', 'datasource_link', 'creator', 'modified']
edit_columns = [
'slice_name', 'description', 'viz_type', 'owners', 'dashboards',
'params', 'cache_timeout']
base_order = ('changed_on', 'desc')
description_columns = {
'description': Markup(
"The content here can be displayed as widget headers in the "
"dashboard view. Supports "
"<a href='https://daringfireball.net/projects/markdown/'>"
"markdown</a>"),
'params': _(
"These parameters are generated dynamically when clicking "
"the save or overwrite button in the explore view. This JSON "
"object is exposed here for reference and for power users who may "
"want to alter specific parameters."),
'cache_timeout': _(
"Duration (in seconds) of the caching timeout for this slice."
),
}
base_filters = [['id', SliceFilter, lambda: []]]
label_columns = {
'cache_timeout': _("Cache Timeout"),
'creator': _("Creator"),
'dashboards': _("Dashboards"),
'datasource_link': _("Datasource"),
'description': _("Description"),
'modified': _("Last Modified"),
'owners': _("Owners"),
'params': _("Parameters"),
'slice_link': _("Slice"),
'slice_name': _("Name"),
'table': _("Table"),
'viz_type': _("Visualization Type"),
}
def pre_update(self, obj):
check_ownership(obj)
def pre_delete(self, obj):
check_ownership(obj)
@expose('/add', methods=['GET', 'POST'])
@has_access
def add(self):
datasources = ConnectorRegistry.get_all_datasources(db.session)
datasources = [
{'value': str(d.id) + '__' + d.type, 'label': repr(d)}
for d in datasources
]
return self.render_template(
"superset/add_slice.html",
bootstrap_data=json.dumps({
'datasources': sorted(datasources, key=lambda d: d["label"]),
}),
)
appbuilder.add_view(
SliceModelView,
"Slices",
label=__("Slices"),
icon="fa-bar-chart",
category="",
category_icon='',)
class SliceAsync(SliceModelView): # noqa
list_columns = [
'slice_link', 'viz_type',
'creator', 'modified', 'icons']
label_columns = {
'icons': ' ',
'slice_link': _('Slice'),
}
appbuilder.add_view_no_menu(SliceAsync)
class SliceAddView(SliceModelView): # noqa
list_columns = [
'id', 'slice_name', 'slice_link', 'viz_type',
'owners', 'modified', 'changed_on']
appbuilder.add_view_no_menu(SliceAddView)
class DashboardModelView(SupersetModelView, DeleteMixin): # noqa
datamodel = SQLAInterface(models.Dashboard)
list_title = _('List Dashboards')
show_title = _('Show Dashboard')
add_title = _('Add Dashboard')
edit_title = _('Edit Dashboard')
list_columns = ['dashboard_link', 'creator', 'modified']
edit_columns = [
'dashboard_title', 'slug', 'slices', 'owners', 'position_json', 'css',
'json_metadata']
show_columns = edit_columns + ['table_names']
search_columns = ('dashboard_title', 'slug', 'owners')
add_columns = edit_columns
base_order = ('changed_on', 'desc')
description_columns = {
'position_json': _(
"This json object describes the positioning of the widgets in "
"the dashboard. It is dynamically generated when adjusting "
"the widgets size and positions by using drag & drop in "
"the dashboard view"),
'css': _(
"The css for individual dashboards can be altered here, or "
"in the dashboard view where changes are immediately "
"visible"),
'slug': _("To get a readable URL for your dashboard"),
'json_metadata': _(
"This JSON object is generated dynamically when clicking "
"the save or overwrite button in the dashboard view. It "
"is exposed here for reference and for power users who may "
"want to alter specific parameters."),
'owners': _("Owners is a list of users who can alter the dashboard."),
}
base_filters = [['slice', DashboardFilter, lambda: []]]
add_form_query_rel_fields = {
'slices': [['slices', SliceFilter, None]],
}
edit_form_query_rel_fields = add_form_query_rel_fields
label_columns = {
'dashboard_link': _("Dashboard"),
'dashboard_title': _("Title"),
'slug': _("Slug"),
'slices': _("Slices"),
'owners': _("Owners"),
'creator': _("Creator"),
'modified': _("Modified"),
'position_json': _("Position JSON"),
'css': _("CSS"),
'json_metadata': _("JSON Metadata"),
'table_names': _("Underlying Tables"),
}
def pre_add(self, obj):
obj.slug = obj.slug.strip() or None
if obj.slug:
obj.slug = obj.slug.replace(" ", "-")
obj.slug = re.sub(r'\W+', '', obj.slug)
if g.user not in obj.owners:
obj.owners.append(g.user)
utils.validate_json(obj.json_metadata)
utils.validate_json(obj.position_json)
owners = [o for o in obj.owners]
for slc in obj.slices:
slc.owners = list(set(owners) | set(slc.owners))
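The slug handling in `pre_add` has a subtle consequence worth noting: the `\W+` substitution also strips the hyphens that `replace(" ", "-")` just inserted, so spaces end up removed rather than hyphenated. A standalone sketch of that logic:

```python
import re

def slugify(raw):
    # Mirror of the slug handling in pre_add. Note that re.sub(r'\W+', '')
    # also removes the hyphens inserted by replace(), so "My Dashboard"
    # becomes "MyDashboard", not "My-Dashboard".
    slug = raw.strip() or None
    if slug:
        slug = slug.replace(" ", "-")
        slug = re.sub(r'\W+', '', slug)
    return slug

print(slugify(" My Dashboard! "))  # 'MyDashboard'
print(slugify("   "))              # None
```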
def pre_update(self, obj):
check_ownership(obj)
self.pre_add(obj)
def pre_delete(self, obj):
check_ownership(obj)
@action("mulexport", __("Export"), __("Export dashboards?"), "fa-database")
def mulexport(self, items):
if not isinstance(items, list):
items = [items]
ids = ''.join('&id={}'.format(d.id) for d in items)
return redirect(
'/dashboardmodelview/export_dashboards_form?{}'.format(ids[1:]))
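The `ids` string built in `mulexport` starts with a stray `&`, which the `ids[1:]` slice drops to form a valid query string; a tiny sketch with hypothetical dashboard ids:

```python
# Build '&id=...' fragments as mulexport does, then slice off the
# leading '&' to get a usable query string.
ids = ''.join('&id={}'.format(i) for i in [3, 7, 11])
query = ids[1:]
print(query)  # 'id=3&id=7&id=11'
```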
@expose("/export_dashboards_form")
def download_dashboards(self):
if request.args.get('action') == 'go':
ids = request.args.getlist('id')
return Response(
models.Dashboard.export_dashboards(ids),
headers=generate_download_headers("pickle"),
mimetype="application/text")
return self.render_template(
'superset/export_dashboards.html',
dashboards_url='/dashboardmodelview/list'
)
appbuilder.add_view(
DashboardModelView,
"Dashboards",
label=__("Dashboards"),
icon="fa-dashboard",
category='',
category_icon='',)
class DashboardModelViewAsync(DashboardModelView): # noqa
list_columns = ['dashboard_link', 'creator', 'modified', 'dashboard_title']
label_columns = {
'dashboard_link': _('Dashboard'),
'dashboard_title': _('Title'),
'creator': _('Creator'),
'modified': _('Modified'),
}
appbuilder.add_view_no_menu(DashboardModelViewAsync)
class LogModelView(SupersetModelView):
datamodel = SQLAInterface(models.Log)
list_columns = ('user', 'action', 'dttm')
edit_columns = ('user', 'action', 'dttm', 'json')
base_order = ('dttm', 'desc')
label_columns = {
'user': _("User"),
'action': _("Action"),
'dttm': _("dttm"),
'json': _("JSON"),
}
appbuilder.add_view(
LogModelView,
"Action Log",
label=__("Action Log"),
category="Security",
category_label=__("Security"),
icon="fa-list-ol")
@app.route('/health')
def health():
return "OK"
@app.route('/ping')
def ping():
return "OK"
class KV(BaseSupersetView):
"""Used for storing and retrieving key value pairs"""
@log_this
@expose("/store/", methods=['POST'])
def store(self):
try:
value = request.form.get('data')
obj = models.KeyValue(value=value)
db.session.add(obj)
db.session.commit()
except Exception as e:
return json_error_response(e)
return Response(
json.dumps({'id': obj.id}),
status=200)
@log_this
@expose("/<key_id>/", methods=['GET'])
def get_value(self, key_id):
kv = None
try:
kv = db.session.query(models.KeyValue).filter_by(id=key_id).one()
except Exception as e:
return json_error_response(e)
return Response(kv.value, status=200)
appbuilder.add_view_no_menu(KV)
class R(BaseSupersetView):
"""used for short urls"""
@log_this
@expose("/<url_id>")
def index(self, url_id):
url = db.session.query(models.Url).filter_by(id=url_id).first()
if url:
return redirect('/' + url.url)
else:
flash("URL to nowhere...", "danger")
return redirect('/')
@log_this
@expose("/shortner/", methods=['POST', 'GET'])
def shortner(self):
url = request.form.get('data')
obj = models.Url(url=url)
db.session.add(obj)
db.session.commit()
return("http://{request.headers[Host]}/r/{obj.id}".format(
request=request, obj=obj))
@expose("/msg/")
def msg(self):
"""Redirects to the specified url while flashing a message"""
flash(Markup(request.args.get("msg")), "info")
return redirect(request.args.get("url"))
appbuilder.add_view_no_menu(R)
class Superset(BaseSupersetView):
"""The base views for Superset!"""
@api
@has_access_api
@expose("/update_role/", methods=['POST'])
def update_role(self):
"""Assigns a list of found users to the given role."""
data = request.get_json(force=True)
gamma_role = sm.find_role('Gamma')
username_set = set()
user_data_dict = {}
for user_data in data['users']:
username = user_data['username']
if not username:
continue
user_data_dict[username] = user_data
username_set.add(username)
existing_users = db.session.query(sm.user_model).filter(
sm.user_model.username.in_(username_set)).all()
missing_users = username_set.difference(
set([u.username for u in existing_users]))
logging.info('Missing users: {}'.format(missing_users))
created_users = []
for username in missing_users:
user_data = user_data_dict[username]
user = sm.find_user(email=user_data['email'])
if not user:
logging.info("Adding user: {}.".format(user_data))
sm.add_user(
username=user_data['username'],
first_name=user_data['first_name'],
last_name=user_data['last_name'],
email=user_data['email'],
role=gamma_role,
)
sm.get_session.commit()
user = sm.find_user(username=user_data['username'])
existing_users.append(user)
created_users.append(user.username)
role_name = data['role_name']
role = sm.find_role(role_name)
role.user = existing_users
sm.get_session.commit()
return self.json_response({
'role': role_name,
'# missing users': len(missing_users),
'# granted': len(existing_users),
'created_users': created_users,
}, status=201)
def json_response(self, obj, status=200):
return Response(
json.dumps(obj, default=utils.json_int_dttm_ser),
status=status,
mimetype="application/json")
@has_access_api
@expose("/datasources/")
def datasources(self):
datasources = ConnectorRegistry.get_all_datasources(db.session)
datasources = [o.short_data for o in datasources]
datasources = sorted(datasources, key=lambda o: o['name'])
return self.json_response(datasources)
@has_access_api
@expose("/override_role_permissions/", methods=['POST'])
def override_role_permissions(self):
"""Updates the role with the given datasource permissions.
Permissions not in the request will be revoked. This endpoint should
be available to admins only. Expects JSON in the format:
{
'role_name': '{role_name}',
'database': [{
'datasource_type': '{table|druid}',
'name': '{database_name}',
'schema': [{
'name': '{schema_name}',
'datasources': ['{datasource name}, {datasource name}']
}]
}]
}
"""
data = request.get_json(force=True)
role_name = data['role_name']
databases = data['database']
db_ds_names = set()
for dbs in databases:
for schema in dbs['schema']:
for ds_name in schema['datasources']:
fullname = utils.get_datasource_full_name(
dbs['name'], ds_name, schema=schema['name'])
db_ds_names.add(fullname)
existing_datasources = ConnectorRegistry.get_all_datasources(db.session)
datasources = [
d for d in existing_datasources if d.full_name in db_ds_names]
role = sm.find_role(role_name)
# remove all permissions
role.permissions = []
# grant permissions to the list of datasources
granted_perms = []
for datasource in datasources:
view_menu_perm = sm.find_permission_view_menu(
view_menu_name=datasource.perm,
permission_name='datasource_access')
# prevent creating empty permissions
if view_menu_perm and view_menu_perm.view_menu:
role.permissions.append(view_menu_perm)
granted_perms.append(view_menu_perm.view_menu.name)
db.session.commit()
return self.json_response({
'granted': granted_perms,
'requested': list(db_ds_names)
}, status=201)
@log_this
@has_access
@expose("/request_access/")
def request_access(self):
datasources = set()
dashboard_id = request.args.get('dashboard_id')
if dashboard_id:
dash = (
db.session.query(models.Dashboard)
.filter_by(id=int(dashboard_id))
.one()
)
datasources |= dash.datasources
datasource_id = request.args.get('datasource_id')
datasource_type = request.args.get('datasource_type')
if datasource_id:
ds_class = ConnectorRegistry.sources.get(datasource_type)
datasource = (
db.session.query(ds_class)
.filter_by(id=int(datasource_id))
.one()
)
datasources.add(datasource)
if request.args.get('action') == 'go':
for datasource in datasources:
access_request = DAR(
datasource_id=datasource.id,
datasource_type=datasource.type)
db.session.add(access_request)
db.session.commit()
flash(__("Access was requested"), "info")
return redirect('/')
return self.render_template(
'superset/request_access.html',
datasources=datasources,
datasource_names=", ".join([o.name for o in datasources]),
)
@log_this
@has_access
@expose("/approve")
def approve(self):
def clean_fulfilled_requests(session):
for r in session.query(DAR).all():
datasource = ConnectorRegistry.get_datasource(
r.datasource_type, r.datasource_id, session)
user = sm.get_user_by_id(r.created_by_fk)
if not datasource or \
self.datasource_access(datasource, user):
# datasource no longer exists or the user already has access
session.delete(r)
session.commit()
datasource_type = request.args.get('datasource_type')
datasource_id = request.args.get('datasource_id')
created_by_username = request.args.get('created_by')
role_to_grant = request.args.get('role_to_grant')
role_to_extend = request.args.get('role_to_extend')
session = db.session
datasource = ConnectorRegistry.get_datasource(
datasource_type, datasource_id, session)
if not datasource:
flash(DATASOURCE_MISSING_ERR, "alert")
return json_error_response(DATASOURCE_MISSING_ERR)
requested_by = sm.find_user(username=created_by_username)
if not requested_by:
flash(USER_MISSING_ERR, "alert")
return json_error_response(USER_MISSING_ERR)
requests = (
session.query(DAR)
.filter(
DAR.datasource_id == datasource_id,
DAR.datasource_type == datasource_type,
DAR.created_by_fk == requested_by.id)
.all()
)
if not requests:
flash(ACCESS_REQUEST_MISSING_ERR, "alert")
return json_error_response(ACCESS_REQUEST_MISSING_ERR)
# check if you can approve
if self.all_datasource_access() or g.user.id == datasource.owner_id:
            # can be done by admin only
if role_to_grant:
role = sm.find_role(role_to_grant)
requested_by.roles.append(role)
msg = __(
"%(user)s was granted the role %(role)s that gives access "
"to the %(datasource)s",
user=requested_by.username,
role=role_to_grant,
datasource=datasource.full_name)
utils.notify_user_about_perm_udate(
g.user, requested_by, role, datasource,
'email/role_granted.txt', app.config)
flash(msg, "info")
if role_to_extend:
                perm_view = sm.find_permission_view_menu(
                    'datasource_access', datasource.perm)
role = sm.find_role(role_to_extend)
sm.add_permission_role(role, perm_view)
msg = __("Role %(r)s was extended to provide the access to "
"the datasource %(ds)s", r=role_to_extend,
ds=datasource.full_name)
utils.notify_user_about_perm_udate(
g.user, requested_by, role, datasource,
'email/role_extended.txt', app.config)
flash(msg, "info")
clean_fulfilled_requests(session)
else:
flash(__("You have no permission to approve this request"),
"danger")
return redirect('/accessrequestsmodelview/list/')
for r in requests:
session.delete(r)
session.commit()
return redirect('/accessrequestsmodelview/list/')
def get_form_data(self):
# get form data from url
if request.args.get("form_data"):
form_data = request.args.get("form_data")
elif request.form.get("form_data"):
            # Supporting POST as well as GET
form_data = request.form.get("form_data")
else:
form_data = '{}'
d = json.loads(form_data)
if request.args.get("viz_type"):
# Converting old URLs
d = cast_form_data(request.args)
return d
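    # A sketch of the precedence above (request shapes are hypothetical):
    # a form_data query-string argument wins over a POSTed form_data field,
    # an empty '{}' is assumed when neither is present, and a bare
    # ?viz_type=... query string is treated as a legacy URL and converted
    # through cast_form_data(request.args).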
def get_viz(
self,
slice_id=None,
args=None,
datasource_type=None,
datasource_id=None):
if slice_id:
slc = (
db.session.query(models.Slice)
.filter_by(id=slice_id)
.one()
)
return slc.get_viz()
else:
form_data = self.get_form_data()
viz_type = form_data.get('viz_type', 'table')
datasource = ConnectorRegistry.get_datasource(
datasource_type, datasource_id, db.session)
viz_obj = viz.viz_types[viz_type](
datasource,
form_data=form_data,
)
return viz_obj
@has_access
@expose("/slice/<slice_id>/")
def slice(self, slice_id):
viz_obj = self.get_viz(slice_id)
endpoint = (
'/superset/explore/{}/{}?form_data={}'
.format(
viz_obj.datasource.type,
viz_obj.datasource.id,
json.dumps(viz_obj.form_data)
)
)
if request.args.get("standalone") == "true":
endpoint += '&standalone=true'
return redirect(endpoint)
@log_this
@has_access_api
@expose("/explore_json/<datasource_type>/<datasource_id>/")
def explore_json(self, datasource_type, datasource_id):
try:
viz_obj = self.get_viz(
datasource_type=datasource_type,
datasource_id=datasource_id,
args=request.args)
except Exception as e:
logging.exception(e)
return json_error_response(
utils.error_msg_from_exception(e),
stacktrace=traceback.format_exc())
if not self.datasource_access(viz_obj.datasource):
return json_error_response(DATASOURCE_ACCESS_ERR, status=404)
if request.args.get("csv") == "true":
return Response(
viz_obj.get_csv(),
status=200,
headers=generate_download_headers("csv"),
mimetype="application/csv")
if request.args.get("query") == "true":
try:
query_obj = viz_obj.query_obj()
query = viz_obj.datasource.get_query_str(query_obj)
except Exception as e:
return json_error_response(e)
return Response(
json.dumps({
'query': query,
'language': viz_obj.datasource.query_language,
}),
status=200,
mimetype="application/json")
payload = {}
try:
payload = viz_obj.get_payload(
force=request.args.get('force') == 'true')
except Exception as e:
logging.exception(e)
return json_error_response(utils.error_msg_from_exception(e))
status = 200
if payload.get('status') == QueryStatus.FAILED:
status = 400
return json_success(viz_obj.json_dumps(payload), status=status)
@expose("/import_dashboards", methods=['GET', 'POST'])
@log_this
def import_dashboards(self):
"""Overrides the dashboards using pickled instances from the file."""
f = request.files.get('file')
if request.method == 'POST' and f:
current_tt = int(time.time())
data = pickle.load(f)
# TODO: import DRUID datasources
for table in data['datasources']:
ds_class = ConnectorRegistry.sources.get(table.type)
ds_class.import_obj(table, import_time=current_tt)
db.session.commit()
for dashboard in data['dashboards']:
models.Dashboard.import_obj(
dashboard, import_time=current_tt)
db.session.commit()
return redirect('/dashboardmodelview/list/')
return self.render_template('superset/import_dashboards.html')
@log_this
@has_access
@expose("/explorev2/<datasource_type>/<datasource_id>/")
def explorev2(self, datasource_type, datasource_id):
return redirect(url_for(
'Superset.explore',
datasource_type=datasource_type,
datasource_id=datasource_id,
**request.args))
@log_this
@has_access
@expose("/explore/<datasource_type>/<datasource_id>/")
def explore(self, datasource_type, datasource_id):
form_data = self.get_form_data()
datasource_id = int(datasource_id)
viz_type = form_data.get("viz_type")
slice_id = form_data.get('slice_id')
user_id = g.user.get_id() if g.user else None
slc = None
if slice_id:
slc = db.session.query(models.Slice).filter_by(id=slice_id).first()
error_redirect = '/slicemodelview/list/'
datasource = ConnectorRegistry.get_datasource(
datasource_type, datasource_id, db.session)
if not datasource:
flash(DATASOURCE_MISSING_ERR, "danger")
return redirect(error_redirect)
if not self.datasource_access(datasource):
flash(
__(get_datasource_access_error_msg(datasource.name)),
"danger")
return redirect(
'superset/request_access/?'
'datasource_type={datasource_type}&'
'datasource_id={datasource_id}&'
''.format(**locals()))
if not viz_type and datasource.default_endpoint:
return redirect(datasource.default_endpoint)
# slc perms
slice_add_perm = self.can_access('can_add', 'SliceModelView')
slice_overwrite_perm = is_owner(slc, g.user)
slice_download_perm = self.can_access('can_download', 'SliceModelView')
# handle save or overwrite
action = request.args.get('action')
if action in ('saveas', 'overwrite'):
return self.save_or_overwrite_slice(
request.args,
slc, slice_add_perm,
slice_overwrite_perm,
datasource_id,
datasource_type)
form_data['datasource'] = str(datasource_id) + '__' + datasource_type
standalone = request.args.get("standalone") == "true"
bootstrap_data = {
"can_add": slice_add_perm,
"can_download": slice_download_perm,
"can_overwrite": slice_overwrite_perm,
"datasource": datasource.data,
"form_data": form_data,
"datasource_id": datasource_id,
"datasource_type": datasource_type,
"slice": slc.data if slc else None,
"standalone": standalone,
"user_id": user_id,
"forced_height": request.args.get('height'),
'common': self.common_bootsrap_payload(),
}
table_name = datasource.table_name \
if datasource_type == 'table' \
else datasource.datasource_name
if slc:
title = "[slice] " + slc.slice_name
else:
title = "[explore] " + table_name
return self.render_template(
"superset/basic.html",
bootstrap_data=json.dumps(bootstrap_data),
entry='explore',
title=title,
standalone_mode=standalone)
@api
@has_access_api
@expose("/filter/<datasource_type>/<datasource_id>/<column>/")
def filter(self, datasource_type, datasource_id, column):
"""
Endpoint to retrieve values for specified column.
:param datasource_type: Type of datasource e.g. table
:param datasource_id: Datasource id
:param column: Column name to retrieve values for
        :return: JSON payload with the distinct values for the column
"""
# TODO: Cache endpoint by user, datasource and column
datasource = ConnectorRegistry.get_datasource(
datasource_type, datasource_id, db.session)
if not datasource:
return json_error_response(DATASOURCE_MISSING_ERR)
if not self.datasource_access(datasource):
return json_error_response(DATASOURCE_ACCESS_ERR)
payload = json.dumps(
datasource.values_for_column(column),
default=utils.json_int_dttm_ser)
return json_success(payload)
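    # Illustrative only -- the datasource id and column name below are
    # hypothetical:
    #
    #   GET /superset/filter/table/1/gender/
    #
    # returns the distinct values of that column as JSON, serialized with
    # utils.json_int_dttm_ser so datetime values become epoch integers.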
def save_or_overwrite_slice(
self, args, slc, slice_add_perm, slice_overwrite_perm,
datasource_id, datasource_type):
"""Save or overwrite a slice"""
slice_name = args.get('slice_name')
action = args.get('action')
form_data = self.get_form_data()
        if action == 'saveas':
if 'slice_id' in form_data:
form_data.pop('slice_id') # don't save old slice_id
slc = models.Slice(owners=[g.user] if g.user else [])
slc.params = json.dumps(form_data)
slc.datasource_name = args.get('datasource_name')
slc.viz_type = form_data['viz_type']
slc.datasource_type = datasource_type
slc.datasource_id = datasource_id
slc.slice_name = slice_name
        if action == 'saveas' and slice_add_perm:
self.save_slice(slc)
elif action == 'overwrite' and slice_overwrite_perm:
self.overwrite_slice(slc)
# Adding slice to a dashboard if requested
dash = None
if request.args.get('add_to_dash') == 'existing':
dash = (
db.session.query(models.Dashboard)
.filter_by(id=int(request.args.get('save_to_dashboard_id')))
.one()
)
flash(
"Slice [{}] was added to dashboard [{}]".format(
slc.slice_name,
dash.dashboard_title),
"info")
elif request.args.get('add_to_dash') == 'new':
dash = models.Dashboard(
dashboard_title=request.args.get('new_dashboard_name'),
owners=[g.user] if g.user else [])
flash(
"Dashboard [{}] just got created and slice [{}] was added "
"to it".format(
dash.dashboard_title,
slc.slice_name),
"info")
if dash and slc not in dash.slices:
dash.slices.append(slc)
db.session.commit()
if request.args.get('goto_dash') == 'true':
return dash.url
else:
return slc.slice_url
def save_slice(self, slc):
session = db.session()
msg = "Slice [{}] has been saved".format(slc.slice_name)
session.add(slc)
session.commit()
flash(msg, "info")
def overwrite_slice(self, slc):
session = db.session()
session.merge(slc)
session.commit()
msg = "Slice [{}] has been overwritten".format(slc.slice_name)
flash(msg, "info")
@api
@has_access_api
@expose("/checkbox/<model_view>/<id_>/<attr>/<value>", methods=['GET'])
def checkbox(self, model_view, id_, attr, value):
"""endpoint for checking/unchecking any boolean in a sqla model"""
modelview_to_model = {
'TableColumnInlineView':
ConnectorRegistry.sources['table'].column_class,
}
model = modelview_to_model[model_view]
obj = db.session.query(model).filter_by(id=id_).first()
if obj:
setattr(obj, attr, value == 'true')
db.session.commit()
return json_success("OK")
@api
@has_access_api
@expose("/activity_per_day")
def activity_per_day(self):
"""endpoint to power the calendar heatmap on the welcome page"""
Log = models.Log # noqa
qry = (
db.session
.query(
Log.dt,
sqla.func.count())
.group_by(Log.dt)
.all()
)
payload = {str(time.mktime(dt.timetuple())):
ccount for dt, ccount in qry if dt}
return json_success(json.dumps(payload))
@api
@has_access_api
@expose("/schemas/<db_id>/")
def schemas(self, db_id):
db_id = int(db_id)
database = (
db.session
.query(models.Database)
.filter_by(id=db_id)
.one()
)
schemas = database.all_schema_names()
schemas = self.schemas_accessible_by_user(database, schemas)
return Response(
json.dumps({'schemas': schemas}),
mimetype="application/json")
@api
@has_access_api
@expose("/tables/<db_id>/<schema>/<substr>/")
def tables(self, db_id, schema, substr):
"""Endpoint to fetch the list of tables for given database"""
db_id = int(db_id)
schema = utils.js_string_to_python(schema)
substr = utils.js_string_to_python(substr)
database = db.session.query(models.Database).filter_by(id=db_id).one()
table_names = self.accessible_by_user(
database, database.all_table_names(schema), schema)
view_names = self.accessible_by_user(
database, database.all_view_names(schema), schema)
if substr:
table_names = [tn for tn in table_names if substr in tn]
view_names = [vn for vn in view_names if substr in vn]
max_items = config.get('MAX_TABLE_NAMES') or len(table_names)
total_items = len(table_names) + len(view_names)
max_tables = len(table_names)
max_views = len(view_names)
if total_items and substr:
max_tables = max_items * len(table_names) // total_items
max_views = max_items * len(view_names) // total_items
table_options = [{'value': tn, 'label': tn}
for tn in table_names[:max_tables]]
table_options.extend([{'value': vn, 'label': '[view] {}'.format(vn)}
for vn in view_names[:max_views]])
payload = {
'tableLength': len(table_names) + len(view_names),
'options': table_options,
}
return json_success(json.dumps(payload))
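    # Worked example of the proportional cap above (numbers are made up):
    # with MAX_TABLE_NAMES = 10, 30 matching tables and 10 matching views,
    # total_items = 40, so max_tables = 10 * 30 // 40 = 7 and
    # max_views = 10 * 10 // 40 = 2; the item budget is split roughly in
    # proportion to how many names of each kind matched the substring.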
@api
@has_access_api
@expose("/copy_dash/<dashboard_id>/", methods=['GET', 'POST'])
def copy_dash(self, dashboard_id):
"""Copy dashboard"""
session = db.session()
data = json.loads(request.form.get('data'))
dash = models.Dashboard()
original_dash = (
session
.query(models.Dashboard)
.filter_by(id=dashboard_id).first())
dash.owners = [g.user] if g.user else []
dash.dashboard_title = data['dashboard_title']
if data['duplicate_slices']:
# Duplicating slices as well, mapping old ids to new ones
old_to_new_sliceids = {}
for slc in original_dash.slices:
new_slice = slc.clone()
new_slice.owners = [g.user] if g.user else []
session.add(new_slice)
session.flush()
new_slice.dashboards.append(dash)
old_to_new_sliceids['{}'.format(slc.id)] =\
'{}'.format(new_slice.id)
for d in data['positions']:
d['slice_id'] = old_to_new_sliceids[d['slice_id']]
else:
dash.slices = original_dash.slices
dash.params = original_dash.params
self._set_dash_metadata(dash, data)
session.add(dash)
session.commit()
dash_json = json.dumps(dash.data)
session.close()
return json_success(dash_json)
@api
@has_access_api
@expose("/save_dash/<dashboard_id>/", methods=['GET', 'POST'])
def save_dash(self, dashboard_id):
"""Save a dashboard's metadata"""
session = db.session()
dash = (session
.query(models.Dashboard)
.filter_by(id=dashboard_id).first())
check_ownership(dash, raise_if_false=True)
data = json.loads(request.form.get('data'))
self._set_dash_metadata(dash, data)
session.merge(dash)
session.commit()
session.close()
return "SUCCESS"
@staticmethod
def _set_dash_metadata(dashboard, data):
        positions = sorted(data['positions'], key=lambda x: int(x['slice_id']))
        slice_ids = [int(d['slice_id']) for d in positions]
        dashboard.slices = [o for o in dashboard.slices if o.id in slice_ids]
        dashboard.position_json = json.dumps(positions, indent=4, sort_keys=True)
md = dashboard.params_dict
dashboard.css = data['css']
dashboard.dashboard_title = data['dashboard_title']
if 'filter_immune_slices' not in md:
md['filter_immune_slices'] = []
if 'timed_refresh_immune_slices' not in md:
md['timed_refresh_immune_slices'] = []
if 'filter_immune_slice_fields' not in md:
md['filter_immune_slice_fields'] = {}
md['expanded_slices'] = data['expanded_slices']
md['default_filters'] = data.get('default_filters', '')
dashboard.json_metadata = json.dumps(md, indent=4)
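    # The `data` dict consumed by _set_dash_metadata is expected to look
    # roughly like this (field values are illustrative, and position
    # entries carry additional layout keys omitted here):
    #
    #   {
    #       "positions": [{"slice_id": "3", ...}, ...],
    #       "css": "",
    #       "dashboard_title": "My dashboard",
    #       "expanded_slices": {},
    #       "default_filters": ""
    #   }
    #
    # Slices whose ids are missing from "positions" are dropped from the
    # dashboard, and the layout is stored sorted by slice_id.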
@api
@has_access_api
@expose("/add_slices/<dashboard_id>/", methods=['POST'])
def add_slices(self, dashboard_id):
"""Add and save slices to a dashboard"""
data = json.loads(request.form.get('data'))
session = db.session()
Slice = models.Slice # noqa
dash = (
session.query(models.Dashboard).filter_by(id=dashboard_id).first())
check_ownership(dash, raise_if_false=True)
new_slices = session.query(Slice).filter(
Slice.id.in_(data['slice_ids']))
dash.slices += new_slices
session.merge(dash)
session.commit()
session.close()
return "SLICES ADDED"
@api
@has_access_api
@expose("/testconn", methods=["POST", "GET"])
def testconn(self):
"""Tests a sqla connection"""
try:
uri = request.json.get('uri')
db_name = request.json.get('name')
if db_name:
database = (
db.session
.query(models.Database)
.filter_by(database_name=db_name)
.first()
)
if database and uri == database.safe_sqlalchemy_uri():
# the password-masked uri was passed
# use the URI associated with this database
uri = database.sqlalchemy_uri_decrypted
connect_args = (
request.json
.get('extras', {})
.get('engine_params', {})
.get('connect_args', {}))
engine = create_engine(uri, connect_args=connect_args)
engine.connect()
return json_success(json.dumps(engine.table_names(), indent=4))
except Exception as e:
logging.exception(e)
return json_error_response((
"Connection failed!\n\n"
"The error message returned was:\n{}").format(e))
@api
@has_access_api
@expose("/recent_activity/<user_id>/", methods=['GET'])
def recent_activity(self, user_id):
"""Recent activity (actions) for a given user"""
M = models # noqa
qry = (
db.session.query(M.Log, M.Dashboard, M.Slice)
.outerjoin(
M.Dashboard,
M.Dashboard.id == M.Log.dashboard_id
)
.outerjoin(
M.Slice,
M.Slice.id == M.Log.slice_id
)
.filter(
sqla.and_(
~M.Log.action.in_(('queries', 'shortner', 'sql_json')),
M.Log.user_id == user_id,
)
)
.order_by(M.Log.dttm.desc())
.limit(1000)
)
payload = []
for log in qry.all():
item_url = None
item_title = None
if log.Dashboard:
item_url = log.Dashboard.url
item_title = log.Dashboard.dashboard_title
elif log.Slice:
item_url = log.Slice.slice_url
item_title = log.Slice.slice_name
payload.append({
'action': log.Log.action,
'item_url': item_url,
'item_title': item_title,
'time': log.Log.dttm,
})
return json_success(
json.dumps(payload, default=utils.json_int_dttm_ser))
@api
@has_access_api
@expose("/csrf_token/", methods=['GET'])
def csrf_token(self):
return Response(
self.render_template('superset/csrf_token.json'),
mimetype='text/json',
)
@api
@has_access_api
@expose("/fave_dashboards_by_username/<username>/", methods=['GET'])
def fave_dashboards_by_username(self, username):
"""This lets us use a user's username to pull favourite dashboards"""
user = sm.find_user(username=username)
return self.fave_dashboards(user.get_id())
@api
@has_access_api
@expose("/fave_dashboards/<user_id>/", methods=['GET'])
def fave_dashboards(self, user_id):
qry = (
db.session.query(
models.Dashboard,
models.FavStar.dttm,
)
.join(
models.FavStar,
sqla.and_(
models.FavStar.user_id == int(user_id),
models.FavStar.class_name == 'Dashboard',
models.Dashboard.id == models.FavStar.obj_id,
)
)
.order_by(
models.FavStar.dttm.desc()
)
)
payload = []
for o in qry.all():
d = {
'id': o.Dashboard.id,
'dashboard': o.Dashboard.dashboard_link(),
'title': o.Dashboard.dashboard_title,
'url': o.Dashboard.url,
'dttm': o.dttm,
}
if o.Dashboard.created_by:
user = o.Dashboard.created_by
d['creator'] = str(user)
d['creator_url'] = '/superset/profile/{}/'.format(
user.username)
payload.append(d)
return json_success(
json.dumps(payload, default=utils.json_int_dttm_ser))
@api
@has_access_api
@expose("/created_dashboards/<user_id>/", methods=['GET'])
def created_dashboards(self, user_id):
Dash = models.Dashboard # noqa
qry = (
db.session.query(
Dash,
)
.filter(
sqla.or_(
Dash.created_by_fk == user_id,
Dash.changed_by_fk == user_id,
)
)
.order_by(
Dash.changed_on.desc()
)
)
payload = [{
'id': o.id,
'dashboard': o.dashboard_link(),
'title': o.dashboard_title,
'url': o.url,
'dttm': o.changed_on,
} for o in qry.all()]
return json_success(
json.dumps(payload, default=utils.json_int_dttm_ser))
@api
@has_access_api
@expose("/created_slices/<user_id>/", methods=['GET'])
def created_slices(self, user_id):
"""List of slices created by this user"""
Slice = models.Slice # noqa
qry = (
db.session.query(Slice)
.filter(
sqla.or_(
Slice.created_by_fk == user_id,
Slice.changed_by_fk == user_id,
)
)
.order_by(Slice.changed_on.desc())
)
payload = [{
'id': o.id,
'title': o.slice_name,
'url': o.slice_url,
'dttm': o.changed_on,
} for o in qry.all()]
return json_success(
json.dumps(payload, default=utils.json_int_dttm_ser))
@api
@has_access_api
@expose("/fave_slices/<user_id>/", methods=['GET'])
def fave_slices(self, user_id):
"""Favorite slices for a user"""
qry = (
db.session.query(
models.Slice,
models.FavStar.dttm,
)
.join(
models.FavStar,
sqla.and_(
models.FavStar.user_id == int(user_id),
models.FavStar.class_name == 'slice',
models.Slice.id == models.FavStar.obj_id,
)
)
.order_by(
models.FavStar.dttm.desc()
)
)
payload = []
for o in qry.all():
d = {
'id': o.Slice.id,
'title': o.Slice.slice_name,
'url': o.Slice.slice_url,
'dttm': o.dttm,
}
if o.Slice.created_by:
user = o.Slice.created_by
d['creator'] = str(user)
d['creator_url'] = '/superset/profile/{}/'.format(
user.username)
payload.append(d)
return json_success(
json.dumps(payload, default=utils.json_int_dttm_ser))
@api
@has_access_api
@expose("/warm_up_cache/", methods=['GET'])
def warm_up_cache(self):
"""Warms up the cache for the slice or table."""
slices = None
session = db.session()
slice_id = request.args.get('slice_id')
table_name = request.args.get('table_name')
db_name = request.args.get('db_name')
if not slice_id and not (table_name and db_name):
return json_error_response(__(
"Malformed request. slice_id or table_name and db_name "
"arguments are expected"), status=400)
if slice_id:
slices = session.query(models.Slice).filter_by(id=slice_id).all()
if not slices:
return json_error_response(__(
"Slice %(id)s not found", id=slice_id), status=404)
elif table_name and db_name:
SqlaTable = ConnectorRegistry.sources['table']
table = (
session.query(SqlaTable)
.join(models.Database)
.filter(
                    models.Database.database_name == db_name,
                    SqlaTable.table_name == table_name)
).first()
if not table:
return json_error_response(__(
"Table %(t)s wasn't found in the database %(d)s",
                    t=table_name, d=db_name), status=404)
slices = session.query(models.Slice).filter_by(
datasource_id=table.id,
datasource_type=table.type).all()
for slc in slices:
try:
obj = slc.get_viz()
obj.get_json(force=True)
except Exception as e:
return json_error_response(utils.error_msg_from_exception(e))
return json_success(json.dumps(
[{"slice_id": slc.id, "slice_name": slc.slice_name}
for slc in slices]))
@expose("/favstar/<class_name>/<obj_id>/<action>/")
def favstar(self, class_name, obj_id, action):
"""Toggle favorite stars on Slices and Dashboard"""
session = db.session()
FavStar = models.FavStar # noqa
count = 0
favs = session.query(FavStar).filter_by(
class_name=class_name, obj_id=obj_id,
user_id=g.user.get_id()).all()
if action == 'select':
if not favs:
session.add(
FavStar(
class_name=class_name,
obj_id=obj_id,
user_id=g.user.get_id(),
dttm=datetime.now()
)
)
count = 1
elif action == 'unselect':
for fav in favs:
session.delete(fav)
else:
count = len(favs)
session.commit()
return json_success(json.dumps({'count': count}))
@has_access
@expose("/dashboard/<dashboard_id>/")
def dashboard(self, dashboard_id):
"""Server side rendering for a dashboard"""
session = db.session()
qry = session.query(models.Dashboard)
if dashboard_id.isdigit():
qry = qry.filter_by(id=int(dashboard_id))
else:
qry = qry.filter_by(slug=dashboard_id)
dash = qry.one()
datasources = set()
for slc in dash.slices:
datasource = slc.datasource
if datasource:
datasources.add(datasource)
for datasource in datasources:
if datasource and not self.datasource_access(datasource):
flash(
__(get_datasource_access_error_msg(datasource.name)),
"danger")
return redirect(
'superset/request_access/?'
'dashboard_id={dash.id}&'.format(**locals()))
# Hack to log the dashboard_id properly, even when getting a slug
@log_this
def dashboard(**kwargs): # noqa
pass
dashboard(dashboard_id=dash.id)
dash_edit_perm = check_ownership(dash, raise_if_false=False)
dash_save_perm = \
dash_edit_perm and self.can_access('can_save_dash', 'Superset')
standalone_mode = request.args.get("standalone") == "true"
dashboard_data = dash.data
dashboard_data.update({
'standalone_mode': standalone_mode,
'dash_save_perm': dash_save_perm,
'dash_edit_perm': dash_edit_perm,
})
bootstrap_data = {
'user_id': g.user.get_id(),
'dashboard_data': dashboard_data,
'datasources': {ds.uid: ds.data for ds in datasources},
'common': self.common_bootsrap_payload(),
}
return self.render_template(
"superset/dashboard.html",
entry='dashboard',
standalone_mode=standalone_mode,
title='[dashboard] ' + dash.dashboard_title,
bootstrap_data=json.dumps(bootstrap_data),
)
@has_access
@expose("/sync_druid/", methods=['POST'])
@log_this
def sync_druid_source(self):
"""Syncs the druid datasource in main db with the provided config.
The endpoint takes 3 arguments:
user - user name to perform the operation as
cluster - name of the druid cluster
config - configuration stored in json that contains:
name: druid datasource name
dimensions: list of the dimensions, they become druid columns
with the type STRING
metrics_spec: list of metrics (dictionary). Metric consists of
2 attributes: type and name. Type can be count,
etc. `count` type is stored internally as longSum
other fields will be ignored.
Example: {
"name": "test_click",
"metrics_spec": [{"type": "count", "name": "count"}],
"dimensions": ["affiliate_id", "campaign", "first_seen"]
}
"""
payload = request.get_json(force=True)
druid_config = payload['config']
user_name = payload['user']
cluster_name = payload['cluster']
user = sm.find_user(username=user_name)
DruidDatasource = ConnectorRegistry.sources['druid']
DruidCluster = DruidDatasource.cluster_class
if not user:
err_msg = __("Can't find User '%(name)s', please ask your admin "
"to create one.", name=user_name)
logging.error(err_msg)
return json_error_response(err_msg)
cluster = db.session.query(DruidCluster).filter_by(
cluster_name=cluster_name).first()
if not cluster:
err_msg = __("Can't find DruidCluster with cluster_name = "
"'%(name)s'", name=cluster_name)
logging.error(err_msg)
return json_error_response(err_msg)
try:
DruidDatasource.sync_to_db_from_config(
druid_config, user, cluster)
except Exception as e:
logging.exception(utils.error_msg_from_exception(e))
return json_error_response(utils.error_msg_from_exception(e))
return Response(status=201)
@has_access
@expose("/sqllab_viz/", methods=['POST'])
@log_this
def sqllab_viz(self):
SqlaTable = ConnectorRegistry.sources['table']
        data = json.loads(request.form.get('data'))
        table_name = data.get('datasourceName')
table = (
db.session.query(SqlaTable)
.filter_by(table_name=table_name)
.first()
)
if not table:
table = SqlaTable(table_name=table_name)
table.database_id = data.get('dbId')
q = SupersetQuery(data.get('sql'))
table.sql = q.stripped()
db.session.add(table)
        cols = []
        dims = []
        metrics = []
        TableColumn = SqlaTable.column_class
        SqlMetric = SqlaTable.metric_class
        for column_name, config in data.get('columns').items():
            is_dim = config.get('is_dim', False)
col = TableColumn(
column_name=column_name,
filterable=is_dim,
groupby=is_dim,
is_dttm=config.get('is_date', False),
type=config.get('type', False),
)
cols.append(col)
if is_dim:
dims.append(col)
agg = config.get('agg')
if agg:
if agg == 'count_distinct':
metrics.append(SqlMetric(
metric_name="{agg}__{column_name}".format(**locals()),
expression="COUNT(DISTINCT {column_name})"
.format(**locals()),
))
else:
metrics.append(SqlMetric(
metric_name="{agg}__{column_name}".format(**locals()),
expression="{agg}({column_name})".format(**locals()),
))
if not metrics:
metrics.append(SqlMetric(
metric_name="count".format(**locals()),
expression="count(*)".format(**locals()),
))
table.columns = cols
table.metrics = metrics
db.session.commit()
return self.json_response(json.dumps({
'table_id': table.id,
}))
@has_access
@expose("/table/<database_id>/<table_name>/<schema>/")
@log_this
def table(self, database_id, table_name, schema):
schema = utils.js_string_to_python(schema)
mydb = db.session.query(models.Database).filter_by(id=database_id).one()
cols = []
indexes = []
        try:
            t = mydb.get_columns(table_name, schema)
indexes = mydb.get_indexes(table_name, schema)
primary_key = mydb.get_pk_constraint(table_name, schema)
foreign_keys = mydb.get_foreign_keys(table_name, schema)
except Exception as e:
return json_error_response(utils.error_msg_from_exception(e))
keys = []
if primary_key and primary_key.get('constrained_columns'):
primary_key['column_names'] = primary_key.pop('constrained_columns')
primary_key['type'] = 'pk'
keys += [primary_key]
for fk in foreign_keys:
fk['column_names'] = fk.pop('constrained_columns')
fk['type'] = 'fk'
keys += foreign_keys
for idx in indexes:
idx['type'] = 'index'
keys += indexes
for col in t:
dtype = ""
try:
dtype = '{}'.format(col['type'])
            except Exception:
pass
cols.append({
'name': col['name'],
'type': dtype.split('(')[0] if '(' in dtype else dtype,
'longType': dtype,
'keys': [
k for k in keys
if col['name'] in k.get('column_names')
],
})
tbl = {
'name': table_name,
'columns': cols,
'selectStar': mydb.select_star(
table_name, schema=schema, show_cols=True, indent=True),
'primaryKey': primary_key,
'foreignKeys': foreign_keys,
'indexes': keys,
}
return json_success(json.dumps(tbl))
@has_access
@expose("/extra_table_metadata/<database_id>/<table_name>/<schema>/")
@log_this
def extra_table_metadata(self, database_id, table_name, schema):
schema = utils.js_string_to_python(schema)
mydb = db.session.query(models.Database).filter_by(id=database_id).one()
payload = mydb.db_engine_spec.extra_table_metadata(
mydb, table_name, schema)
return json_success(json.dumps(payload))
@has_access
@expose("/select_star/<database_id>/<table_name>/")
@log_this
def select_star(self, database_id, table_name):
mydb = db.session.query(
models.Database).filter_by(id=database_id).first()
return self.render_template(
"superset/ajah.html",
content=mydb.select_star(table_name, show_cols=True)
)
@expose("/theme/")
def theme(self):
return self.render_template('superset/theme.html')
@has_access_api
@expose("/cached_key/<key>/")
@log_this
def cached_key(self, key):
"""Returns a key from the cache"""
resp = cache.get(key)
if resp:
return resp
return "nope"
@has_access_api
@expose("/results/<key>/")
@log_this
def results(self, key):
"""Serves a key off of the results backend"""
if not results_backend:
return json_error_response("Results backend isn't configured")
blob = results_backend.get(key)
if not blob:
return json_error_response(
"Data could not be retrieved. "
"You may want to re-run the query.",
status=410
)
query = db.session.query(Query).filter_by(results_key=key).one()
rejected_tables = self.rejected_datasources(
query.sql, query.database, query.schema)
if rejected_tables:
return json_error_response(get_datasource_access_error_msg(
'{}'.format(rejected_tables)))
payload = utils.zlib_decompress_to_string(blob)
display_limit = app.config.get('DISPLAY_SQL_MAX_ROW', None)
if display_limit:
payload_json = json.loads(payload)
payload_json['data'] = payload_json['data'][:display_limit]
return json_success(
json.dumps(payload_json, default=utils.json_iso_dttm_ser))
@has_access_api
@expose("/stop_query/", methods=['POST'])
@log_this
def stop_query(self):
client_id = request.form.get('client_id')
try:
query = (
db.session.query(Query)
.filter_by(client_id=client_id).one()
)
query.status = utils.QueryStatus.STOPPED
db.session.commit()
        except Exception as e:
            logging.exception(e)
return self.json_response('OK')
@has_access_api
@expose("/sql_json/", methods=['POST', 'GET'])
@log_this
def sql_json(self):
"""Runs arbitrary sql and returns the result as json"""
run_async = request.form.get('runAsync') == 'true'  # 'async' is a reserved word in Python 3.7+
sql = request.form.get('sql')
database_id = request.form.get('database_id')
schema = request.form.get('schema') or None
session = db.session()
mydb = session.query(models.Database).filter_by(id=database_id).first()
if not mydb:
return json_error_response(
'Database with id {} is missing.'.format(database_id))
rejected_tables = self.rejected_datasources(sql, mydb, schema)
if rejected_tables:
return json_error_response(get_datasource_access_error_msg(
'{}'.format(rejected_tables)))
session.commit()
select_as_cta = request.form.get('select_as_cta') == 'true'
tmp_table_name = request.form.get('tmp_table_name')
if select_as_cta and mydb.force_ctas_schema:
tmp_table_name = '{}.{}'.format(
mydb.force_ctas_schema,
tmp_table_name
)
query = Query(
database_id=int(database_id),
limit=int(app.config.get('SQL_MAX_ROW', None)),
sql=sql,
schema=schema,
select_as_cta=select_as_cta,
start_time=utils.now_as_float(),
tab_name=request.form.get('tab'),
status=QueryStatus.PENDING if run_async else QueryStatus.RUNNING,
sql_editor_id=request.form.get('sql_editor_id'),
tmp_table_name=tmp_table_name,
user_id=int(g.user.get_id()),
client_id=request.form.get('client_id'),
)
session.add(query)
session.flush()
query_id = query.id
session.commit()  # shouldn't be necessary
if not query_id:
raise Exception(_("Query record was not created as expected."))
logging.info("Triggering query_id: {}".format(query_id))
# Async request.
if run_async:
logging.info("Running query on a Celery worker")
# Ignore the Celery future object; waiting on it could make the request time out.
try:
sql_lab.get_sql_results.delay(
query_id=query_id, return_results=False,
store_results=not query.select_as_cta)
except Exception as e:
logging.exception(e)
msg = (
"Failed to start remote query on a worker. "
"Tell your administrator to verify the availability of "
"the message queue."
)
query.status = QueryStatus.FAILED
query.error_message = msg
session.commit()
return json_error_response("{}".format(msg))
resp = json_success(json.dumps(
{'query': query.to_dict()}, default=utils.json_int_dttm_ser,
allow_nan=False), status=202)
session.commit()
return resp
# Sync request.
try:
SQLLAB_TIMEOUT = config.get("SQLLAB_TIMEOUT")
with utils.timeout(
seconds=SQLLAB_TIMEOUT,
error_message=(
"The query exceeded the {SQLLAB_TIMEOUT} seconds "
"timeout. You may want to run your query as a "
"`CREATE TABLE AS` to prevent timeouts."
).format(**locals())):
# pylint: disable=no-value-for-parameter
data = sql_lab.get_sql_results(
query_id=query_id, return_results=True)
except Exception as e:
logging.exception(e)
return json_error_response("{}".format(e))
if data.get('status') == QueryStatus.FAILED:
return json_error_response(payload=data)
return json_success(json.dumps(data, default=utils.json_iso_dttm_ser))
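The synchronous path above bounds query execution with `utils.timeout`. A minimal sketch of such a context manager, assuming a SIGALRM-based approach on a Unix main thread (the actual Superset helper may be implemented differently):

```python
import signal
from contextlib import contextmanager


@contextmanager
def timeout(seconds, error_message="Timed out"):
    # Arrange for SIGALRM to fire after `seconds` and raise inside the
    # `with` body. Unix-only, and only valid in the main thread.
    def handler(signum, frame):
        raise TimeoutError(error_message)

    old_handler = signal.signal(signal.SIGALRM, handler)
    signal.alarm(seconds)
    try:
        yield
    finally:
        signal.alarm(0)  # cancel any pending alarm
        signal.signal(signal.SIGALRM, old_handler)
```

A body that finishes before the deadline runs normally; one that overruns is interrupted with `TimeoutError`.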
@has_access
@expose("/csv/<client_id>")
@log_this
def csv(self, client_id):
"""Download the query results as csv."""
logging.info("Exporting CSV file [{}]".format(client_id))
query = (
db.session.query(Query)
.filter_by(client_id=client_id)
.one()
)
rejected_tables = self.rejected_datasources(
query.sql, query.database, query.schema)
if rejected_tables:
flash(get_datasource_access_error_msg('{}'.format(rejected_tables)))
return redirect('/')
blob = None
if results_backend and query.results_key:
logging.info(
"Fetching CSV from results backend "
"[{}]".format(query.results_key))
blob = results_backend.get(query.results_key)
if blob:
logging.info("Decompressing")
json_payload = utils.zlib_decompress_to_string(blob)
obj = json.loads(json_payload)
columns = [c['name'] for c in obj['columns']]
df = pd.DataFrame.from_records(obj['data'], columns=columns)
logging.info("Using pandas to convert to CSV")
csv = df.to_csv(index=False, encoding='utf-8')
else:
logging.info("Running a query to turn into CSV")
sql = query.select_sql or query.executed_sql
df = query.database.get_df(sql, query.schema)
# TODO(bkyryliuk): add compression=gzip for big files.
csv = df.to_csv(index=False, encoding='utf-8')
response = Response(csv, mimetype='text/csv')
response.headers['Content-Disposition'] = (
'attachment; filename={}.csv'.format(query.name))
logging.info("Ready to return response")
return response
@has_access
@expose("/fetch_datasource_metadata")
@log_this
def fetch_datasource_metadata(self):
datasource_id, datasource_type = (
request.args.get('datasourceKey').split('__'))
datasource = ConnectorRegistry.get_datasource(
datasource_type, datasource_id, db.session)
# Check if datasource exists
if not datasource:
return json_error_response(DATASOURCE_MISSING_ERR)
# Check permission for datasource
if not self.datasource_access(datasource):
return json_error_response(DATASOURCE_ACCESS_ERR)
return json_success(json.dumps(datasource.data))
@expose("/queries/<last_updated_ms>")
def queries(self, last_updated_ms):
"""Get the updated queries."""
stats_logger.incr('queries')
if not g.user.get_id():
return json_error_response(
"Please login to access the queries.", status=403)
# Unix time, milliseconds.
last_updated_ms_int = int(float(last_updated_ms)) if last_updated_ms else 0
# UTC date time, same that is stored in the DB.
last_updated_dt = utils.EPOCH + timedelta(seconds=last_updated_ms_int / 1000)
sql_queries = (
db.session.query(Query)
.filter(
Query.user_id == g.user.get_id(),
Query.changed_on >= last_updated_dt,
)
.all()
)
dict_queries = {q.client_id: q.to_dict() for q in sql_queries}
return json_success(
json.dumps(dict_queries, default=utils.json_int_dttm_ser))
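The `last_updated_ms` handling above can be reproduced with the standard library alone; `EPOCH` here is assumed to be the naive-UTC Unix epoch that `utils.EPOCH` represents:

```python
from datetime import datetime, timedelta

EPOCH = datetime(1970, 1, 1)  # naive UTC epoch, matching how timestamps are stored


def ms_to_utc(last_updated_ms):
    # Accepts the raw string/number from the URL; falsy means "from the beginning".
    ms = int(float(last_updated_ms)) if last_updated_ms else 0
    return EPOCH + timedelta(seconds=ms / 1000)
```

For example, `ms_to_utc("1500000000000")` lands on 2017-07-14 02:40:00 UTC.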
@has_access
@expose("/search_queries")
@log_this
def search_queries(self):
"""Search for queries."""
query = db.session.query(Query)
search_user_id = request.args.get('user_id')
database_id = request.args.get('database_id')
search_text = request.args.get('search_text')
status = request.args.get('status')
# From and To time stamp should be Epoch timestamp in seconds
from_time = request.args.get('from')
to_time = request.args.get('to')
if search_user_id:
# Filter on db Id
query = query.filter(Query.user_id == search_user_id)
if database_id:
# Filter on db Id
query = query.filter(Query.database_id == database_id)
if status:
# Filter on status
query = query.filter(Query.status == status)
if search_text:
# Filter on search text
query = query \
.filter(Query.sql.like('%{}%'.format(search_text)))
if from_time:
query = query.filter(Query.start_time > int(from_time))
if to_time:
query = query.filter(Query.start_time < int(to_time))
query_limit = config.get('QUERY_SEARCH_LIMIT', 1000)
sql_queries = (
query.order_by(Query.start_time.asc())
.limit(query_limit)
.all()
)
dict_queries = [q.to_dict() for q in sql_queries]
return Response(
json.dumps(dict_queries, default=utils.json_int_dttm_ser),
status=200,
mimetype="application/json")
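The optional-filter chain above adds one `.filter(...)` call per supplied query parameter. The same narrowing logic, sketched over plain dicts rather than SQLAlchemy (field names mirror the Query columns; illustrative only):

```python
def search_queries(rows, user_id=None, database_id=None, status=None,
                   search_text=None, from_time=None, to_time=None, limit=1000):
    # Each filter is applied only when its parameter was actually supplied.
    out = rows
    if user_id is not None:
        out = [r for r in out if r["user_id"] == user_id]
    if database_id is not None:
        out = [r for r in out if r["database_id"] == database_id]
    if status is not None:
        out = [r for r in out if r["status"] == status]
    if search_text is not None:
        out = [r for r in out if search_text in r["sql"]]
    if from_time is not None:
        out = [r for r in out if r["start_time"] > from_time]
    if to_time is not None:
        out = [r for r in out if r["start_time"] < to_time]
    # Order by start time ascending and cap at the search limit.
    return sorted(out, key=lambda r: r["start_time"])[:limit]
```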
@app.errorhandler(500)
def show_traceback(self):
return render_template(
'superset/traceback.html',
error_msg=get_error_msg(),
), 500
@expose("/welcome")
def welcome(self):
"""Personalized welcome page"""
if not g.user or not g.user.get_id():
return redirect(appbuilder.get_url_for_login)
return self.render_template(
'superset/welcome.html', entry='welcome', utils=utils)
@has_access
@expose("/profile/<username>/")
def profile(self, username):
"""User profile page"""
if not username and g.user:
username = g.user.username
user = (
db.session.query(ab_models.User)
.filter_by(username=username)
.one()
)
roles = {}
permissions = defaultdict(set)
for role in user.roles:
perms = set()
for perm in role.permissions:
perms.add(
(perm.permission.name, perm.view_menu.name)
)
if perm.permission.name in ('datasource_access', 'database_access'):
permissions[perm.permission.name].add(perm.view_menu.name)
roles[role.name] = [
[perm.permission.name, perm.view_menu.name]
for perm in role.permissions
]
payload = {
'user': {
'username': user.username,
'firstName': user.first_name,
'lastName': user.last_name,
'userId': user.id,
'isActive': user.is_active(),
'createdOn': user.created_on.isoformat(),
'email': user.email,
'roles': roles,
'permissions': permissions,
},
'common': self.common_bootsrap_payload(),
}
return self.render_template(
'superset/basic.html',
title=user.username + "'s profile",
navbar_container=True,
entry='profile',
bootstrap_data=json.dumps(payload, default=utils.json_iso_dttm_ser)
)
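The role loop above accumulates access grants into a `defaultdict(set)`. That aggregation step can be isolated as a small sketch (the function name is hypothetical; the input shape mirrors the `(permission, view_menu)` pairs iterated above):

```python
from collections import defaultdict


def collect_access_permissions(roles):
    # roles maps role name -> list of (permission_name, view_menu_name) pairs.
    # Only the two access-granting permission types are aggregated.
    permissions = defaultdict(set)
    for perms in roles.values():
        for perm_name, view_name in perms:
            if perm_name in ('datasource_access', 'database_access'):
                permissions[perm_name].add(view_name)
    return permissions
```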
@has_access
@expose("/sqllab")
def sqllab(self):
"""SQL Editor"""
d = {
'defaultDbId': config.get('SQLLAB_DEFAULT_DBID'),
'common': self.common_bootsrap_payload(),
}
return self.render_template(
'superset/basic.html',
entry='sqllab',
bootstrap_data=json.dumps(d, default=utils.json_iso_dttm_ser)
)
appbuilder.add_view_no_menu(Superset)
class CssTemplateModelView(SupersetModelView, DeleteMixin):
datamodel = SQLAInterface(models.CssTemplate)
list_columns = ['template_name']
edit_columns = ['template_name', 'css']
add_columns = edit_columns
label_columns = {
'template_name': _('Template Name'),
}
class CssTemplateAsyncModelView(CssTemplateModelView):
list_columns = ['template_name', 'css']
appbuilder.add_separator("Sources")
appbuilder.add_view(
CssTemplateModelView,
"CSS Templates",
label=__("CSS Templates"),
icon="fa-css3",
category="Manage",
category_label=__("Manage"),
category_icon='')
appbuilder.add_view_no_menu(CssTemplateAsyncModelView)
appbuilder.add_link(
'SQL Editor',
label=_("SQL Editor"),
href='/superset/sqllab',
category_icon="fa-flask",
icon="fa-flask",
category='SQL Lab',
category_label=__("SQL Lab"),
)
appbuilder.add_link(
'Query Search',
label=_("Query Search"),
href='/superset/sqllab#search',
icon="fa-search",
category_icon="fa-flask",
category='SQL Lab',
category_label=__("SQL Lab"),
)
@app.after_request
def apply_caching(response):
"""Applies the configuration's http headers to all responses"""
for k, v in config.get('HTTP_HEADERS').items():
response.headers[k] = v
return response
# ---------------------------------------------------------------------
# Redirecting URLs from previous names
class RegexConverter(BaseConverter):
def __init__(self, url_map, *items):
super(RegexConverter, self).__init__(url_map)
self.regex = items[0]
app.url_map.converters['regex'] = RegexConverter
@app.route(r'/<regex("panoramix\/.*"):url>')
def panoramix(url): # noqa
return redirect(request.full_path.replace('panoramix', 'superset'))
@app.route(r'/<regex("caravel\/.*"):url>')
def caravel(url): # noqa
return redirect(request.full_path.replace('caravel', 'superset'))
# ---------------------------------------------------------------------
| apache-2.0 |
robbymeals/scikit-learn | examples/model_selection/plot_confusion_matrix.py | 244 | 2496 | """
================
Confusion matrix
================
Example of confusion matrix usage to evaluate the quality
of the output of a classifier on the iris data set. The
diagonal elements represent the number of points for which
the predicted label is equal to the true label, while
off-diagonal elements are those that are mislabeled by the
classifier. The higher the diagonal values of the confusion
matrix the better, indicating many correct predictions.
The figures show the confusion matrix with and without
normalization by class support size (number of elements
in each class). This kind of normalization can be
interesting in case of class imbalance to have a more
visual interpretation of which class is being misclassified.
Here the results are not as good as they could be as our
choice for the regularization parameter C was not the best.
In real life applications this parameter is usually chosen
using :ref:`grid_search`.
"""
print(__doc__)
import numpy as np
import matplotlib.pyplot as plt
from sklearn import svm, datasets
from sklearn.model_selection import train_test_split  # moved from sklearn.cross_validation in 0.18
from sklearn.metrics import confusion_matrix
# import some data to play with
iris = datasets.load_iris()
X = iris.data
y = iris.target
# Split the data into a training set and a test set
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
# Run classifier, using a model that is too regularized (C too low) to see
# the impact on the results
classifier = svm.SVC(kernel='linear', C=0.01)
y_pred = classifier.fit(X_train, y_train).predict(X_test)
def plot_confusion_matrix(cm, title='Confusion matrix', cmap=plt.cm.Blues):
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(iris.target_names))
plt.xticks(tick_marks, iris.target_names, rotation=45)
plt.yticks(tick_marks, iris.target_names)
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
# Compute confusion matrix
cm = confusion_matrix(y_test, y_pred)
np.set_printoptions(precision=2)
print('Confusion matrix, without normalization')
print(cm)
plt.figure()
plot_confusion_matrix(cm)
# Normalize the confusion matrix by row (i.e by the number of samples
# in each class)
cm_normalized = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print('Normalized confusion matrix')
print(cm_normalized)
plt.figure()
plot_confusion_matrix(cm_normalized, title='Normalized confusion matrix')
plt.show()
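The normalization step divides each row of the confusion matrix by that class's support, exactly as `cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]` does above. Checked by hand on a small matrix:

```python
import numpy as np

# A small unnormalized confusion matrix: rows are true classes,
# columns are predicted classes.
cm = np.array([[13., 0., 0.],
               [0., 10., 6.],
               [0., 0., 9.]])

# Divide each row by its sum so entries become per-class fractions
# (each row of the normalized matrix sums to 1).
cm_normalized = cm / cm.sum(axis=1)[:, np.newaxis]
```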
| bsd-3-clause |
tedoreve/tools | naimaabc/naimaabc.py | 1 | 2243 | import numpy as np
import matplotlib.pyplot as plt
import naima
from naima.models import (ExponentialCutoffPowerLaw, Synchrotron,
InverseCompton)
from astropy.constants import c
import astropy.units as u
ECPL = ExponentialCutoffPowerLaw(1e36*u.Unit('1/eV'), 1*u.TeV, 2.0, 13*u.TeV)
SYN = Synchrotron(ECPL, B=100*u.uG)
# Define energy array for synchrotron seed photon field and compute
# Synchrotron luminosity by setting distance to 0.
Esy = np.logspace(-6, 6, 100)*u.eV
Lsy = SYN.flux(Esy, distance=0*u.cm)
# Define source radius and compute photon density
R = 2 * u.pc
phn_sy = Lsy / (4 * np.pi * R**2 * c) * 2.26  # 2.26: geometric factor for a homogeneous sphere
# Create IC instance with CMB and synchrotron seed photon fields:
IC = InverseCompton(ECPL, seed_photon_fields=['CMB', 'FIR', 'NIR',
['SSC', Esy, phn_sy]])
# Compute SEDs
spectrum_energy = np.logspace(-8,14,100)*u.eV
sed_IC = IC.sed(spectrum_energy, distance=1.5*u.kpc)
sed_SYN = SYN.sed(spectrum_energy, distance=1.5*u.kpc)
# Plot
plt.figure(figsize=(8,5))
#plt.rc('font', family='sans')
#plt.rc('mathtext', fontset='custom')
ssc = IC.sed(spectrum_energy, seed='SSC', distance=1.5*u.kpc)
plt.loglog(spectrum_energy,ssc,lw=1.5,
ls='-',label='IC (SSC)',c=naima.plot.color_cycle[2])
for seed, ls in zip(['CMB','FIR','NIR'], ['-','--',':']):
sed = IC.sed(spectrum_energy, seed=seed, distance=1.5*u.kpc)
plt.loglog(spectrum_energy,sed,lw=1,
ls=ls,c='0.25')#,label='IC ({0})'.format(seed))
plt.loglog(spectrum_energy,sed_IC,lw=2,
label='IC (total)',c=naima.plot.color_cycle[0])
plt.loglog(spectrum_energy,sed_SYN,lw=2,label='Sync',c=naima.plot.color_cycle[1])
plt.xlabel('Photon energy [{0}]'.format(
spectrum_energy.unit.to_string('latex_inline')))
plt.ylabel('$E^2 dN/dE$ [{0}]'.format(
sed_SYN.unit.to_string('latex_inline')))
plt.ylim(1e-12, 1e-6)
#plt.ylim(1e-31, 1e-12)
plt.tight_layout()
plt.legend(loc='lower left')
#==============================================================================
n = 1 * u.cm**-3
l = 1 * u.pc
print((n*l**3).to(''))
#==============================================================================
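The final unit check multiplies a density by a volume; without astropy the same number falls out directly (the parsec-to-centimetre constant below is an assumed value, and the function name is illustrative):

```python
PC_IN_CM = 3.0857e18  # one parsec in centimetres (assumed, ~IAU value)


def particle_count(n_per_cm3, l_pc):
    # n [cm^-3] times l**3 [pc^3], reduced to a dimensionless count,
    # mirroring (n * l**3).to('') above.
    return n_per_cm3 * (l_pc * PC_IN_CM) ** 3
```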
| mit |
AlexanderFabisch/scikit-learn | sklearn/cross_decomposition/pls_.py | 34 | 30531 | """
The :mod:`sklearn.pls` module implements Partial Least Squares (PLS).
"""
# Author: Edouard Duchesnay <edouard.duchesnay@cea.fr>
# License: BSD 3 clause
from distutils.version import LooseVersion
from sklearn.utils.extmath import svd_flip
from ..base import BaseEstimator, RegressorMixin, TransformerMixin
from ..utils import check_array, check_consistent_length
from ..externals import six
import warnings
from abc import ABCMeta, abstractmethod
import numpy as np
from scipy import linalg
from ..utils import arpack
from ..utils.validation import check_is_fitted, FLOAT_DTYPES
__all__ = ['PLSCanonical', 'PLSRegression', 'PLSSVD']
import scipy
pinv2_args = {}
if LooseVersion(scipy.__version__) >= LooseVersion('0.12'):
# check_finite=False is an optimization available only in scipy >=0.12
pinv2_args = {'check_finite': False}
def _nipals_twoblocks_inner_loop(X, Y, mode="A", max_iter=500, tol=1e-06,
norm_y_weights=False):
"""Inner loop of the iterative NIPALS algorithm.
Provides an alternative to the svd(X'Y); returns the first left and right
singular vectors of X'Y. See PLS for the meaning of the parameters. It is
similar to the Power method for determining the eigenvectors and
eigenvalues of a X'Y.
"""
y_score = Y[:, [0]]
x_weights_old = 0
ite = 1
X_pinv = Y_pinv = None
eps = np.finfo(X.dtype).eps
# Inner loop of the Wold algo.
while True:
# 1.1 Update u: the X weights
if mode == "B":
if X_pinv is None:
# We use slower pinv2 (same as np.linalg.pinv) for stability
# reasons
X_pinv = linalg.pinv2(X, **pinv2_args)
x_weights = np.dot(X_pinv, y_score)
else: # mode A
# Mode A regress each X column on y_score
x_weights = np.dot(X.T, y_score) / np.dot(y_score.T, y_score)
# 1.2 Normalize u
x_weights /= np.sqrt(np.dot(x_weights.T, x_weights)) + eps
# 1.3 Update x_score: the X latent scores
x_score = np.dot(X, x_weights)
# 2.1 Update y_weights
if mode == "B":
if Y_pinv is None:
Y_pinv = linalg.pinv2(Y, **pinv2_args) # compute once pinv(Y)
y_weights = np.dot(Y_pinv, x_score)
else:
# Mode A regress each Y column on x_score
y_weights = np.dot(Y.T, x_score) / np.dot(x_score.T, x_score)
# 2.2 Normalize y_weights
if norm_y_weights:
y_weights /= np.sqrt(np.dot(y_weights.T, y_weights)) + eps
# 2.3 Update y_score: the Y latent scores
y_score = np.dot(Y, y_weights) / (np.dot(y_weights.T, y_weights) + eps)
# y_score = np.dot(Y, y_weights) / np.dot(y_score.T, y_score) ## BUG
x_weights_diff = x_weights - x_weights_old
if np.dot(x_weights_diff.T, x_weights_diff) < tol or Y.shape[1] == 1:
break
if ite == max_iter:
warnings.warn('Maximum number of iterations reached')
break
x_weights_old = x_weights
ite += 1
return x_weights, y_weights, ite
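The docstring above likens this inner loop to the power method for the leading singular pair of X'Y. A standalone sketch of that idea (illustrative only, not the NIPALS implementation above):

```python
import numpy as np


def first_singular_vectors(X, Y, max_iter=500, tol=1e-10):
    # Alternating power iteration on C = X'Y: project, renormalize, repeat,
    # converging on the leading left/right singular vectors of C.
    C = X.T @ Y
    v = np.ones(C.shape[1]) / np.sqrt(C.shape[1])
    for _ in range(max_iter):
        u = C @ v
        u /= np.linalg.norm(u)
        v_new = C.T @ u
        v_new /= np.linalg.norm(v_new)
        if np.linalg.norm(v_new - v) < tol:
            v = v_new
            break
        v = v_new
    return u, v
```

Up to sign, the result matches the first left/right singular vectors from a full SVD of X'Y.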
def _svd_cross_product(X, Y):
C = np.dot(X.T, Y)
U, s, Vh = linalg.svd(C, full_matrices=False)
u = U[:, [0]]
v = Vh.T[:, [0]]
return u, v
def _center_scale_xy(X, Y, scale=True):
""" Center X, Y and scale if the scale parameter==True
Returns
-------
X, Y, x_mean, y_mean, x_std, y_std
"""
# center
x_mean = X.mean(axis=0)
X -= x_mean
y_mean = Y.mean(axis=0)
Y -= y_mean
# scale
if scale:
x_std = X.std(axis=0, ddof=1)
x_std[x_std == 0.0] = 1.0
X /= x_std
y_std = Y.std(axis=0, ddof=1)
y_std[y_std == 0.0] = 1.0
Y /= y_std
else:
x_std = np.ones(X.shape[1])
y_std = np.ones(Y.shape[1])
return X, Y, x_mean, y_mean, x_std, y_std
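`_center_scale_xy` above centers each block and, when scaling, uses `ddof=1` while guarding constant columns against division by zero. A self-contained single-matrix sketch of the same steps (helper name is hypothetical):

```python
import numpy as np


def center_scale(A, scale=True):
    # Column-wise centering, then optional unit-variance scaling (ddof=1).
    # Constant columns get std forced to 1.0 to avoid division by zero.
    mean = A.mean(axis=0)
    A = A - mean
    if scale:
        std = A.std(axis=0, ddof=1)
        std[std == 0.0] = 1.0
        A = A / std
    return A, mean
```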
class _PLS(six.with_metaclass(ABCMeta), BaseEstimator, TransformerMixin,
RegressorMixin):
"""Partial Least Squares (PLS)
This class implements the generic PLS algorithm, constructors' parameters
allow to obtain a specific implementation such as:
- PLS2 regression, i.e., PLS 2 blocks, mode A, with asymmetric deflation
and unnormalized y weights such as defined by [Tenenhaus 1998] p. 132.
With univariate response it implements PLS1.
- PLS canonical, i.e., PLS 2 blocks, mode A, with symmetric deflation and
normalized y weights such as defined by [Tenenhaus 1998] (p. 132) and
[Wegelin et al. 2000]. This parametrization implements the original Wold
algorithm.
We use the terminology defined by [Wegelin et al. 2000].
This implementation uses the PLS Wold 2 blocks algorithm based on two
nested loops:
(i) The outer loop iterate over components.
(ii) The inner loop estimates the weights vectors. This can be done
with two algo. (a) the inner loop of the original NIPALS algo. or (b) a
SVD on residuals cross-covariance matrices.
Parameters
----------
n_components : int, number of components to keep. (default 2).
scale : boolean, scale data? (default True)
deflation_mode : str, "canonical" or "regression". See notes.
mode : "A" classical PLS and "B" CCA. See notes.
norm_y_weights: boolean, normalize Y weights to one? (default False)
algorithm : string, "nipals" or "svd"
The algorithm used to estimate the weights. It will be called
n_components times, i.e. once for each iteration of the outer loop.
max_iter : an integer, the maximum number of iterations (default 500)
of the NIPALS inner loop (used only if algorithm="nipals")
tol : non-negative real, default 1e-06
The tolerance used in the iterative algorithm.
copy : boolean, default True
Whether the deflation should be done on a copy. Let the default
value to True unless you don't care about side effects.
Attributes
----------
x_weights_ : array, [p, n_components]
X block weights vectors.
y_weights_ : array, [q, n_components]
Y block weights vectors.
x_loadings_ : array, [p, n_components]
X block loadings vectors.
y_loadings_ : array, [q, n_components]
Y block loadings vectors.
x_scores_ : array, [n_samples, n_components]
X scores.
y_scores_ : array, [n_samples, n_components]
Y scores.
x_rotations_ : array, [p, n_components]
X block to latents rotations.
y_rotations_ : array, [q, n_components]
Y block to latents rotations.
coef_: array, [p, q]
The coefficients of the linear model: ``Y = X coef_ + Err``
n_iter_ : array-like
Number of iterations of the NIPALS inner loop for each
component. Not useful if the algorithm given is "svd".
References
----------
Jacob A. Wegelin. A survey of Partial Least Squares (PLS) methods, with
emphasis on the two-block case. Technical Report 371, Department of
Statistics, University of Washington, Seattle, 2000.
In French but still a reference:
Tenenhaus, M. (1998). La regression PLS: theorie et pratique. Paris:
Editions Technic.
See also
--------
PLSCanonical
PLSRegression
CCA
PLS_SVD
"""
@abstractmethod
def __init__(self, n_components=2, scale=True, deflation_mode="regression",
mode="A", algorithm="nipals", norm_y_weights=False,
max_iter=500, tol=1e-06, copy=True):
self.n_components = n_components
self.deflation_mode = deflation_mode
self.mode = mode
self.norm_y_weights = norm_y_weights
self.scale = scale
self.algorithm = algorithm
self.max_iter = max_iter
self.tol = tol
self.copy = copy
def fit(self, X, Y):
"""Fit model to data.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
Training vectors, where n_samples is the number of samples and
n_features is the number of predictors.
Y : array-like of response, shape = [n_samples, n_targets]
Target vectors, where n_samples is the number of samples and
n_targets is the number of response variables.
"""
# copy since this will contains the residuals (deflated) matrices
check_consistent_length(X, Y)
X = check_array(X, dtype=np.float64, copy=self.copy)
Y = check_array(Y, dtype=np.float64, copy=self.copy, ensure_2d=False)
if Y.ndim == 1:
Y = Y.reshape(-1, 1)
n = X.shape[0]
p = X.shape[1]
q = Y.shape[1]
if self.n_components < 1 or self.n_components > p:
raise ValueError('Invalid number of components: %d' %
self.n_components)
if self.algorithm not in ("svd", "nipals"):
raise ValueError("Got algorithm %s when only 'svd' "
"and 'nipals' are known" % self.algorithm)
if self.algorithm == "svd" and self.mode == "B":
raise ValueError('Incompatible configuration: mode B is not '
'implemented with svd algorithm')
if self.deflation_mode not in ["canonical", "regression"]:
raise ValueError('The deflation mode is unknown')
# Scale (in place)
X, Y, self.x_mean_, self.y_mean_, self.x_std_, self.y_std_ = (
_center_scale_xy(X, Y, self.scale))
# Residuals (deflated) matrices
Xk = X
Yk = Y
# Results matrices
self.x_scores_ = np.zeros((n, self.n_components))
self.y_scores_ = np.zeros((n, self.n_components))
self.x_weights_ = np.zeros((p, self.n_components))
self.y_weights_ = np.zeros((q, self.n_components))
self.x_loadings_ = np.zeros((p, self.n_components))
self.y_loadings_ = np.zeros((q, self.n_components))
self.n_iter_ = []
# NIPALS algo: outer loop, over components
for k in range(self.n_components):
if np.all(np.dot(Yk.T, Yk) < np.finfo(np.double).eps):
# Yk constant
warnings.warn('Y residual constant at iteration %s' % k)
break
# 1) weights estimation (inner loop)
# -----------------------------------
if self.algorithm == "nipals":
x_weights, y_weights, n_iter_ = \
_nipals_twoblocks_inner_loop(
X=Xk, Y=Yk, mode=self.mode, max_iter=self.max_iter,
tol=self.tol, norm_y_weights=self.norm_y_weights)
self.n_iter_.append(n_iter_)
elif self.algorithm == "svd":
x_weights, y_weights = _svd_cross_product(X=Xk, Y=Yk)
# Forces sign stability of x_weights and y_weights
# Sign undeterminacy issue from svd if algorithm == "svd"
# and from platform dependent computation if algorithm == 'nipals'
x_weights, y_weights = svd_flip(x_weights, y_weights.T)
y_weights = y_weights.T
# compute scores
x_scores = np.dot(Xk, x_weights)
if self.norm_y_weights:
y_ss = 1
else:
y_ss = np.dot(y_weights.T, y_weights)
y_scores = np.dot(Yk, y_weights) / y_ss
# test for null variance
if np.dot(x_scores.T, x_scores) < np.finfo(np.double).eps:
warnings.warn('X scores are null at iteration %s' % k)
break
# 2) Deflation (in place)
# ----------------------
# Possible memory footprint reduction may be done here: in order to
# avoid the allocation of a data chunk for the rank-one
# approximations matrix which is then subtracted to Xk, we suggest
# to perform a column-wise deflation.
#
# - regress Xk's on x_score
x_loadings = np.dot(Xk.T, x_scores) / np.dot(x_scores.T, x_scores)
# - subtract rank-one approximations to obtain remainder matrix
Xk -= np.dot(x_scores, x_loadings.T)
if self.deflation_mode == "canonical":
# - regress Yk's on y_score, then subtract rank-one approx.
y_loadings = (np.dot(Yk.T, y_scores)
/ np.dot(y_scores.T, y_scores))
Yk -= np.dot(y_scores, y_loadings.T)
if self.deflation_mode == "regression":
# - regress Yk's on x_score, then subtract rank-one approx.
y_loadings = (np.dot(Yk.T, x_scores)
/ np.dot(x_scores.T, x_scores))
Yk -= np.dot(x_scores, y_loadings.T)
# 3) Store weights, scores and loadings # Notation:
self.x_scores_[:, k] = x_scores.ravel() # T
self.y_scores_[:, k] = y_scores.ravel() # U
self.x_weights_[:, k] = x_weights.ravel() # W
self.y_weights_[:, k] = y_weights.ravel() # C
self.x_loadings_[:, k] = x_loadings.ravel() # P
self.y_loadings_[:, k] = y_loadings.ravel() # Q
# Such that: X = TP' + Err and Y = UQ' + Err
# 4) rotations from input space to transformed space (scores)
# T = X W(P'W)^-1 = XW* (W* : p x k matrix)
# U = Y C(Q'C)^-1 = YC* (C* : q x k matrix)
self.x_rotations_ = np.dot(
self.x_weights_,
linalg.pinv2(np.dot(self.x_loadings_.T, self.x_weights_),
**pinv2_args))
if Y.shape[1] > 1:
self.y_rotations_ = np.dot(
self.y_weights_,
linalg.pinv2(np.dot(self.y_loadings_.T, self.y_weights_),
**pinv2_args))
else:
self.y_rotations_ = np.ones(1)
if True or self.deflation_mode == "regression":
# FIXME what's with the if?
# Estimate regression coefficient
# Regress Y on T
# Y = TQ' + Err,
# Then express in function of X
# Y = X W(P'W)^-1Q' + Err = XB + Err
# => B = W*Q' (p x q)
self.coef_ = np.dot(self.x_rotations_, self.y_loadings_.T)
self.coef_ = (1. / self.x_std_.reshape((p, 1)) * self.coef_ *
self.y_std_)
return self
def transform(self, X, Y=None, copy=True):
"""Apply the dimension reduction learned on the train data.
Parameters
----------
X : array-like of predictors, shape = [n_samples, p]
Training vectors, where n_samples is the number of samples and
p is the number of predictors.
Y : array-like of response, shape = [n_samples, q], optional
Training vectors, where n_samples is the number of samples and
q is the number of response variables.
copy : boolean, default True
Whether to copy X and Y, or perform in-place normalization.
Returns
-------
x_scores if Y is not given, (x_scores, y_scores) otherwise.
"""
check_is_fitted(self, 'x_mean_')
X = check_array(X, copy=copy, dtype=FLOAT_DTYPES)
# Normalize
X -= self.x_mean_
X /= self.x_std_
# Apply rotation
x_scores = np.dot(X, self.x_rotations_)
if Y is not None:
Y = check_array(Y, ensure_2d=False, copy=copy, dtype=FLOAT_DTYPES)
if Y.ndim == 1:
Y = Y.reshape(-1, 1)
Y -= self.y_mean_
Y /= self.y_std_
y_scores = np.dot(Y, self.y_rotations_)
return x_scores, y_scores
return x_scores
def predict(self, X, copy=True):
"""Apply the dimension reduction learned on the train data.
Parameters
----------
X : array-like of predictors, shape = [n_samples, p]
Training vectors, where n_samples is the number of samples and
p is the number of predictors.
copy : boolean, default True
Whether to copy X and Y, or perform in-place normalization.
Notes
-----
This call requires the estimation of a p x q matrix, which may
be an issue in high dimensional space.
"""
check_is_fitted(self, 'x_mean_')
X = check_array(X, copy=copy, dtype=FLOAT_DTYPES)
# Normalize
X -= self.x_mean_
X /= self.x_std_
Ypred = np.dot(X, self.coef_)
return Ypred + self.y_mean_
def fit_transform(self, X, y=None, **fit_params):
"""Learn and apply the dimension reduction on the train data.
Parameters
----------
X : array-like of predictors, shape = [n_samples, p]
Training vectors, where n_samples is the number of samples and
p is the number of predictors.
Y : array-like of response, shape = [n_samples, q], optional
Training vectors, where n_samples is the number of samples and
q is the number of response variables.
copy : boolean, default True
Whether to copy X and Y, or perform in-place normalization.
Returns
-------
x_scores if Y is not given, (x_scores, y_scores) otherwise.
"""
return self.fit(X, y, **fit_params).transform(X, y)
class PLSRegression(_PLS):
"""PLS regression
PLSRegression implements the PLS 2 blocks regression known as PLS2 or PLS1
in case of one dimensional response.
This class inherits from _PLS with mode="A", deflation_mode="regression",
norm_y_weights=False and algorithm="nipals".
Read more in the :ref:`User Guide <cross_decomposition>`.
Parameters
----------
n_components : int, (default 2)
Number of components to keep.
scale : boolean, (default True)
whether to scale the data
max_iter : an integer, (default 500)
the maximum number of iterations of the NIPALS inner loop (used
only if algorithm="nipals")
tol : non-negative real
Tolerance used in the iterative algorithm default 1e-06.
copy : boolean, default True
Whether the deflation should be done on a copy. Let the default
value to True unless you don't care about side effects.
Attributes
----------
x_weights_ : array, [p, n_components]
X block weights vectors.
y_weights_ : array, [q, n_components]
Y block weights vectors.
x_loadings_ : array, [p, n_components]
X block loadings vectors.
y_loadings_ : array, [q, n_components]
Y block loadings vectors.
x_scores_ : array, [n_samples, n_components]
X scores.
y_scores_ : array, [n_samples, n_components]
Y scores.
x_rotations_ : array, [p, n_components]
X block to latents rotations.
y_rotations_ : array, [q, n_components]
Y block to latents rotations.
coef_: array, [p, q]
The coefficients of the linear model: ``Y = X coef_ + Err``
n_iter_ : array-like
Number of iterations of the NIPALS inner loop for each
component.
Notes
-----
Matrices::
T: x_scores_
U: y_scores_
W: x_weights_
C: y_weights_
P: x_loadings_
Q: y_loadings_
Are computed such that::
X = T P.T + Err and Y = U Q.T + Err
T[:, k] = Xk W[:, k] for k in range(n_components)
U[:, k] = Yk C[:, k] for k in range(n_components)
x_rotations_ = W (P.T W)^(-1)
y_rotations_ = C (Q.T C)^(-1)
where Xk and Yk are residual matrices at iteration k.
`Slides explaining PLS <http://www.eigenvector.com/Docs/Wise_pls_properties.pdf>`_
For each component k, find weights u, v that optimizes:
``max corr(Xk u, Yk v) * std(Xk u) std(Yk v)``, such that ``|u| = 1``
Note that it maximizes both the correlations between the scores and the
intra-block variances.
The residual matrix of X (Xk+1) block is obtained by the deflation on
the current X score: x_score.
The residual matrix of Y (Yk+1) block is obtained by deflation on the
current X score. This performs the PLS regression known as PLS2. This
mode is prediction oriented.
This implementation provides the same results that 3 PLS packages
provided in the R language (R-project):
- "mixOmics" with function pls(X, Y, mode = "regression")
- "plspm " with function plsreg2(X, Y)
- "pls" with function oscorespls.fit(X, Y)
Examples
--------
>>> from sklearn.cross_decomposition import PLSRegression
>>> X = [[0., 0., 1.], [1.,0.,0.], [2.,2.,2.], [2.,5.,4.]]
>>> Y = [[0.1, -0.2], [0.9, 1.1], [6.2, 5.9], [11.9, 12.3]]
>>> pls2 = PLSRegression(n_components=2)
>>> pls2.fit(X, Y)
... # doctest: +NORMALIZE_WHITESPACE
PLSRegression(copy=True, max_iter=500, n_components=2, scale=True,
tol=1e-06)
>>> Y_pred = pls2.predict(X)
References
----------
Jacob A. Wegelin. A survey of Partial Least Squares (PLS) methods, with
emphasis on the two-block case. Technical Report 371, Department of
Statistics, University of Washington, Seattle, 2000.
In French but still a reference:
Tenenhaus, M. (1998). La regression PLS: theorie et pratique. Paris:
Editions Technic.
"""
def __init__(self, n_components=2, scale=True,
max_iter=500, tol=1e-06, copy=True):
super(PLSRegression, self).__init__(
n_components=n_components, scale=scale,
deflation_mode="regression", mode="A",
norm_y_weights=False, max_iter=max_iter, tol=tol,
copy=copy)
class PLSCanonical(_PLS):
""" PLSCanonical implements the 2 blocks canonical PLS of the original Wold
algorithm [Tenenhaus 1998] p.204, referred as PLS-C2A in [Wegelin 2000].
This class inherits from PLS with mode="A" and deflation_mode="canonical",
norm_y_weights=True and algorithm="nipals", but svd should provide similar
results up to numerical errors.
Read more in the :ref:`User Guide <cross_decomposition>`.
Parameters
----------
scale : boolean, scale data? (default True)
algorithm : string, "nipals" or "svd"
The algorithm used to estimate the weights. It will be called
n_components times, i.e. once for each iteration of the outer loop.
max_iter : an integer, (default 500)
the maximum number of iterations of the NIPALS inner loop (used
only if algorithm="nipals")
tol : non-negative real, default 1e-06
the tolerance used in the iterative algorithm
copy : boolean, default True
Whether the deflation should be done on a copy. Leave the default
value of True unless you don't care about side effects.
n_components : int, number of components to keep. (default 2).
Attributes
----------
x_weights_ : array, shape = [p, n_components]
X block weights vectors.
y_weights_ : array, shape = [q, n_components]
Y block weights vectors.
x_loadings_ : array, shape = [p, n_components]
X block loadings vectors.
y_loadings_ : array, shape = [q, n_components]
Y block loadings vectors.
x_scores_ : array, shape = [n_samples, n_components]
X scores.
y_scores_ : array, shape = [n_samples, n_components]
Y scores.
x_rotations_ : array, shape = [p, n_components]
X block to latents rotations.
y_rotations_ : array, shape = [q, n_components]
Y block to latents rotations.
n_iter_ : array-like
Number of iterations of the NIPALS inner loop for each
component. Not useful if the algorithm provided is "svd".
Notes
-----
Matrices::
T: x_scores_
U: y_scores_
W: x_weights_
C: y_weights_
P: x_loadings_
Q: y_loadings_
Are computed such that::
X = T P.T + Err and Y = U Q.T + Err
T[:, k] = Xk W[:, k] for k in range(n_components)
U[:, k] = Yk C[:, k] for k in range(n_components)
x_rotations_ = W (P.T W)^(-1)
y_rotations_ = C (Q.T C)^(-1)
where Xk and Yk are residual matrices at iteration k.
`Slides explaining PLS <http://www.eigenvector.com/Docs/Wise_pls_properties.pdf>`
For each component k, find weights u, v that optimize::
max corr(Xk u, Yk v) * std(Xk u) std(Yk v), such that ``|u| = |v| = 1``
Note that it maximizes both the correlations between the scores and the
intra-block variances.
The residual matrix of X (Xk+1) block is obtained by the deflation on the
current X score: x_score.
The residual matrix of Y (Yk+1) block is obtained by deflation on the
current Y score. This performs a canonical symmetric version of PLS
regression, slightly different from CCA. It is mostly used for modeling.
This implementation provides the same results as the "plspm" package
available in the R language (R-project), using the function plsca(X, Y).
Results are equal or collinear with the function
``pls(..., mode = "canonical")`` of the "mixOmics" package. The difference
lies in the fact that the mixOmics implementation does not exactly implement
the Wold algorithm, since it does not normalize y_weights to one.
Examples
--------
>>> from sklearn.cross_decomposition import PLSCanonical
>>> X = [[0., 0., 1.], [1.,0.,0.], [2.,2.,2.], [2.,5.,4.]]
>>> Y = [[0.1, -0.2], [0.9, 1.1], [6.2, 5.9], [11.9, 12.3]]
>>> plsca = PLSCanonical(n_components=2)
>>> plsca.fit(X, Y)
... # doctest: +NORMALIZE_WHITESPACE
PLSCanonical(algorithm='nipals', copy=True, max_iter=500, n_components=2,
scale=True, tol=1e-06)
>>> X_c, Y_c = plsca.transform(X, Y)
References
----------
Jacob A. Wegelin. A survey of Partial Least Squares (PLS) methods, with
emphasis on the two-block case. Technical Report 371, Department of
Statistics, University of Washington, Seattle, 2000.
Tenenhaus, M. (1998). La regression PLS: theorie et pratique. Paris:
Editions Technic.
See also
--------
CCA
PLSSVD
"""
def __init__(self, n_components=2, scale=True, algorithm="nipals",
max_iter=500, tol=1e-06, copy=True):
super(PLSCanonical, self).__init__(
n_components=n_components, scale=scale,
deflation_mode="canonical", mode="A",
norm_y_weights=True, algorithm=algorithm,
max_iter=max_iter, tol=tol, copy=copy)
class PLSSVD(BaseEstimator, TransformerMixin):
"""Partial Least Square SVD
Simply performs an SVD on the cross-covariance matrix X'Y.
There is no iterative deflation here.
Read more in the :ref:`User Guide <cross_decomposition>`.
Parameters
----------
n_components : int, default 2
Number of components to keep.
scale : boolean, default True
Whether to scale X and Y.
copy : boolean, default True
Whether to copy X and Y, or perform in-place computations.
Attributes
----------
x_weights_ : array, [p, n_components]
X block weights vectors.
y_weights_ : array, [q, n_components]
Y block weights vectors.
x_scores_ : array, [n_samples, n_components]
X scores.
y_scores_ : array, [n_samples, n_components]
Y scores.
See also
--------
PLSCanonical
CCA
"""
def __init__(self, n_components=2, scale=True, copy=True):
self.n_components = n_components
self.scale = scale
self.copy = copy
def fit(self, X, Y):
# copy since this will contain the centered data
check_consistent_length(X, Y)
X = check_array(X, dtype=np.float64, copy=self.copy)
Y = check_array(Y, dtype=np.float64, copy=self.copy, ensure_2d=False)
if Y.ndim == 1:
Y = Y.reshape(-1, 1)
if self.n_components > max(Y.shape[1], X.shape[1]):
raise ValueError("Invalid number of components n_components=%d"
" with X of shape %s and Y of shape %s."
% (self.n_components, str(X.shape), str(Y.shape)))
# Scale (in place)
X, Y, self.x_mean_, self.y_mean_, self.x_std_, self.y_std_ = (
_center_scale_xy(X, Y, self.scale))
# svd(X'Y)
C = np.dot(X.T, Y)
# The arpack svds solver only works if the number of extracted
# components is smaller than rank(X) - 1. Hence, if we want to extract
# all the components (C.shape[1]), we have to use another one. Else,
# let's use arpack to compute only the interesting components.
if self.n_components >= np.min(C.shape):
U, s, V = linalg.svd(C, full_matrices=False)
else:
U, s, V = arpack.svds(C, k=self.n_components)
# Deterministic output
U, V = svd_flip(U, V)
V = V.T
self.x_scores_ = np.dot(X, U)
self.y_scores_ = np.dot(Y, V)
self.x_weights_ = U
self.y_weights_ = V
return self
def transform(self, X, Y=None):
"""Apply the dimension reduction learned on the train data."""
check_is_fitted(self, 'x_mean_')
X = check_array(X, dtype=np.float64)
Xr = (X - self.x_mean_) / self.x_std_
x_scores = np.dot(Xr, self.x_weights_)
if Y is not None:
if Y.ndim == 1:
Y = Y.reshape(-1, 1)
Yr = (Y - self.y_mean_) / self.y_std_
y_scores = np.dot(Yr, self.y_weights_)
return x_scores, y_scores
return x_scores
def fit_transform(self, X, y=None, **fit_params):
"""Learn and apply the dimension reduction on the train data.
Parameters
----------
X : array-like of predictors, shape = [n_samples, p]
Training vectors, where n_samples is the number of samples and
p is the number of predictors.
Y : array-like of response, shape = [n_samples, q], optional
Training vectors, where n_samples is the number of samples and
q is the number of response variables.
Returns
-------
x_scores if Y is not given, (x_scores, y_scores) otherwise.
"""
return self.fit(X, y, **fit_params).transform(X, y)
| bsd-3-clause |
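As a complement to the PLSSVD docstring above: the first pair of weight vectors is the dominant singular pair of the cross-covariance matrix C = X'Y. A dependency-free power-iteration sketch of that idea (not the scikit-learn implementation, which uses LAPACK/ARPACK solvers):

```python
def matvec(M, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(m_ij * v_j for m_ij, v_j in zip(row, v)) for row in M]

def transpose(M):
    return [list(col) for col in zip(*M)]

def normalize(v):
    n = sum(x * x for x in v) ** 0.5
    return [x / n for x in v]

def first_svd_pair(X, Y, n_iter=200):
    """Dominant singular pair of C = X'Y via power iteration."""
    # Cross-covariance C = X'Y, shape (p, q)
    C = [matvec(transpose(Y), x_col) for x_col in transpose(X)]
    u = [1.0] * len(C[0])
    for _ in range(n_iter):
        v = normalize(matvec(C, u))             # left singular vector (X weights)
        u = normalize(matvec(transpose(C), v))  # right singular vector (Y weights)
    return v, u
```

On a tiny example with X of shape (3, 2) and a single Y column, the recovered X-side weights point along the dominant direction of C, matching what an SVD of C would give up to sign.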
KrasnitzLab/sgains | sgains/pipelines/varbin_10x_pipeline.py | 1 | 9166 | import os
import time
import glob
import shutil
from io import BytesIO
from collections import defaultdict, namedtuple
import pandas as pd
import numpy as np
from dask.distributed import Queue, worker_client, wait
from termcolor import colored
import pysam
from sgains.genome import Genome
from sgains.pipelines.extract_10x_pipeline import Base10xPipeline
class Varbin10xPipeline(Base10xPipeline):
def __init__(self, config):
super(Varbin10xPipeline, self).__init__(config)
self.bins_df = self.genome.bins_boundaries()
self.chrom2contig, self.contig2chrom = self._chrom2contig_mapping()
self.chrom_sizes = self.genome.chrom_sizes()
def _chrom2contig_mapping(self):
with pysam.AlignmentFile(self.bam_filename, 'rb') as samfile:
assert samfile.check_index(), \
(self.bam_filename, self.bai_filename)
sam_stats = samfile.get_index_statistics()
chroms = set(self.bins_df['bin.chrom'].values)
chrom2contig = {}
contig2chrom = {}
for stat in sam_stats:
contig = stat.contig
chrom_name = contig
if chrom_name not in chroms:
chrom_name = "chr{}".format(contig)
if chrom_name not in chroms:
continue
chrom2contig[chrom_name] = contig
contig2chrom[contig] = chrom_name
return chrom2contig, contig2chrom
Region = namedtuple('Region', ['chrom', 'contig', 'start', 'end'])
def split_bins(self, bins_step, bins_region=None):
total_bins = len(self.bins_df)
regions = []
bin_start = 0
bin_end = total_bins
if bins_region is not None:
bin_start, bin_end = bins_region
bin_end = min(bin_end, total_bins)
index = bin_start
while index < bin_end:
start = index
end = index + bins_step - 1
if end >= total_bins:
end = total_bins - 1
chrom_start = self.bins_df.iloc[start, 0]
chrom_end = self.bins_df.iloc[end, 0]
while chrom_end != chrom_start:
end -= 1
assert end >= start
chrom_end = self.bins_df.iloc[end, 0]
pos_start = self.bins_df.iloc[start, 1]
pos_end = self.bins_df.iloc[end, 3]
regions.append((chrom_start, pos_start, pos_end))
index = end + 1
return [
self.Region(chrom, self.chrom2contig[chrom], start, end)
for chrom, start, end in regions
]
def _cell_reads_dirname(self, cell_id):
cell_name = self._cell_name(cell_id)
dirname = os.path.join(
self.config.varbin.varbin_dir,
cell_name)
os.makedirs(dirname, exist_ok=True)
return dirname
def _cell_region_filename(self, cell_id, region_index):
cell_name = self._cell_name(cell_id)
region_name = f"{region_index:0>8}"
filename = os.path.join(
self.config.varbin.varbin_dir,
cell_name,
f"{cell_name}_{region_name}{self.config.varbin.varbin_suffix}"
)
return filename
def store_reads(self, reads, region_index):
if not reads:
return None
df = pd.DataFrame(reads, columns=['cell_id', 'chrom', 'pos'])
for cell_id, group_df in df.groupby(by="cell_id"):
cell_region_filename = self._cell_region_filename(
cell_id, region_index)
cell_dirname = os.path.dirname(cell_region_filename)
os.makedirs(cell_dirname, exist_ok=True)
group_df.to_csv(cell_region_filename, index=False, sep="\t")
return "done"
def load_reads(self, cell_id):
cell_name = self._cell_name(cell_id)
pattern = f"{cell_name}_*{self.config.varbin.varbin_suffix}"
pattern = os.path.join(
self.config.varbin.varbin_dir,
cell_name,
pattern
)
print(colored(
"merging reads cell files {} ".format(
pattern
),
"green"))
filenames = glob.glob(pattern)
filenames = sorted(filenames)
dataframes = []
for filename in filenames:
df = pd.read_csv(filename, sep="\t")
dataframes.append(df)
if len(dataframes) == 0:
return None
elif len(dataframes) == 1:
return dataframes[0]
else:
result_df = pd.concat(dataframes, ignore_index=True)
result_df = result_df.sort_values(by=["cell_id", "chrom", "pos"])
return result_df
Read = namedtuple('Read', ['cell_id', 'chrom', 'pos'])
def process_region_reads(self, region, region_index):
print(f"started region {region}")
with pysam.AlignmentFile(self.bam_filename, 'rb') as samfile:
assert samfile.check_index(), \
(self.bam_filename, self.bai_filename)
cells_reads = []
mapped = 0
for rec in samfile.fetch(region.contig, region.start, region.end):
if not rec.has_tag('CB'):
continue
mapped += 1
barcode = rec.get_tag('CB')
if barcode not in self.barcodes:
continue
cell_id = self.barcodes[barcode]
contig = rec.reference_name
if contig not in self.contig2chrom:
continue
read = self.Read(
cell_id, self.contig2chrom[contig], rec.reference_start)
cells_reads.append(read)
print(f"done region {region}; reads processed {len(cells_reads)}")
return self.store_reads(cells_reads, region_index)
def find_bin_index(self, abspos, bins):
index = np.searchsorted(
abspos, bins, side='right')
index = index - 1
return index
def find_bin_index_binsearch(self, bins, abspos):
index_up = len(bins)
index_down = 0
index_mid = int((index_up - index_down) / 2.0)
while True:
if abspos >= int(bins[index_mid]):
index_down = index_mid + 0
index_mid = int((index_up - index_down) / 2.0) + index_mid
else:
index_up = index_mid + 0
index_mid = int((index_up - index_down) / 2.0) + index_down
if index_up - index_down < 2:
break
return index_down
def varbin_cell_reads(self, reads_df):
assert self.bins_df is not None
count = 0
dups = 0
total_reads = 0
prev_pos = 0
bin_counts = defaultdict(int)
bins = self.bins_df['bin.start.abspos'].values
for _, read in reads_df.iterrows():
total_reads += 1
abspos = self.chrom_sizes[read.chrom].abspos + read.pos
if prev_pos == abspos:
dups += 1
continue
count += 1
index = self.find_bin_index_binsearch(bins, abspos)
bin_counts[index] += 1
prev_pos = abspos
number_of_reads_per_bin = float(count) / len(self.bins_df)
result = []
for index, row in self.bins_df.iterrows():
bin_count = bin_counts[index]
ratio = float(bin_count) / number_of_reads_per_bin
result.append(
[
row['bin.chrom'],
row['bin.start'],
row['bin.start.abspos'],
bin_count,
ratio
]
)
df = pd.DataFrame.from_records(
result,
columns=[
'chrom',
'chrompos',
'abspos',
'bincount',
'ratio',
])
df.sort_values(by=['abspos'], inplace=True)
return df
def process_cell_reads(self, cell_id):
reads_df = self.load_reads(cell_id)
if reads_df is None:
print(f"data for cell {cell_id} not found... skipping...")
return
df = self.varbin_cell_reads(reads_df)
cell_name = self._cell_name(cell_id)
outfile = self.config.varbin_filename(cell_name)
df.to_csv(outfile, sep='\t', index=False)
cell_dirname = self._cell_reads_dirname(cell_id)
print(f"going to remove {cell_dirname}...")
shutil.rmtree(cell_dirname)
return cell_id
def run(self, dask_client, bins_step=20, bins_region=None, outdir='.'):
regions = self.split_bins(bins_step=bins_step, bins_region=bins_region)
delayed_tasks = dask_client.map(
lambda region_tuple:
self.process_region_reads(region_tuple[1], region_tuple[0]),
list(enumerate(regions))
)
wait(delayed_tasks)
delayed_tasks = dask_client.map(
lambda cell_id: self.process_cell_reads(cell_id),
list(self.barcodes.values())
)
wait(delayed_tasks)
| mit |
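The hand-rolled `find_bin_index_binsearch` in the pipeline above locates the bin whose start position is the largest one not exceeding `abspos`. The stdlib `bisect` module expresses the same lookup in one line; this is an equivalent sketch, not the pipeline's actual code:

```python
import bisect

def find_bin_index(bin_starts, abspos):
    """Index of the rightmost bin whose start is <= abspos.

    Assumes abspos >= bin_starts[0]; a position before the first bin
    would return -1 here, which the hand-rolled version clamps to 0.
    """
    # bisect_right returns the insertion point after any equal starts,
    # so subtracting 1 yields the containing bin.
    return bisect.bisect_right(bin_starts, abspos) - 1
```

For sorted bin starts `[0, 100, 250, 400]`, position 99 falls in bin 0 and position 100 in bin 1, matching the binary-search semantics above.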
BryanCutler/spark | python/pyspark/sql/pandas/typehints.py | 26 | 6324 | #
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from pyspark.sql.pandas.utils import require_minimum_pandas_version
def infer_eval_type(sig):
"""
Infers the evaluation type in :class:`pyspark.rdd.PythonEvalType` from
:class:`inspect.Signature` instance.
"""
from pyspark.sql.pandas.functions import PandasUDFType
require_minimum_pandas_version()
import pandas as pd
annotations = {}
for param in sig.parameters.values():
if param.annotation is not param.empty:
annotations[param.name] = param.annotation
# Check if all arguments have type hints
parameters_sig = [annotations[parameter] for parameter
in sig.parameters if parameter in annotations]
if len(parameters_sig) != len(sig.parameters):
raise ValueError(
"Type hints for all parameters should be specified; however, got %s" % sig)
# Check if the return has a type hint
return_annotation = sig.return_annotation
if sig.empty is return_annotation:
raise ValueError(
"Type hint for the return type should be specified; however, got %s" % sig)
# Series, Frame or Union[DataFrame, Series], ... -> Series or Frame
is_series_or_frame = (
all(a == pd.Series or # Series
a == pd.DataFrame or # DataFrame
check_union_annotation( # Union[DataFrame, Series]
a,
parameter_check_func=lambda na: na == pd.Series or na == pd.DataFrame)
for a in parameters_sig) and
(return_annotation == pd.Series or return_annotation == pd.DataFrame))
# Iterator[Tuple[Series, Frame or Union[DataFrame, Series], ...] -> Iterator[Series or Frame]
is_iterator_tuple_series_or_frame = (
len(parameters_sig) == 1 and
check_iterator_annotation( # Iterator
parameters_sig[0],
parameter_check_func=lambda a: check_tuple_annotation( # Tuple
a,
parameter_check_func=lambda ta: (
ta == Ellipsis or # ...
ta == pd.Series or # Series
ta == pd.DataFrame or # DataFrame
check_union_annotation( # Union[DataFrame, Series]
ta,
parameter_check_func=lambda na: (
na == pd.Series or na == pd.DataFrame))))) and
check_iterator_annotation(
return_annotation,
parameter_check_func=lambda a: a == pd.DataFrame or a == pd.Series))
# Iterator[Series, Frame or Union[DataFrame, Series]] -> Iterator[Series or Frame]
is_iterator_series_or_frame = (
len(parameters_sig) == 1 and
check_iterator_annotation(
parameters_sig[0],
parameter_check_func=lambda a: (
a == pd.Series or # Series
a == pd.DataFrame or # DataFrame
check_union_annotation( # Union[DataFrame, Series]
a,
parameter_check_func=lambda ua: ua == pd.Series or ua == pd.DataFrame))) and
check_iterator_annotation(
return_annotation,
parameter_check_func=lambda a: a == pd.DataFrame or a == pd.Series))
# Series, Frame or Union[DataFrame, Series], ... -> Any
is_series_or_frame_agg = (
all(a == pd.Series or # Series
a == pd.DataFrame or # DataFrame
check_union_annotation( # Union[DataFrame, Series]
a,
parameter_check_func=lambda ua: ua == pd.Series or ua == pd.DataFrame)
for a in parameters_sig) and (
# It's tricky to include only types which pd.Series constructor can take.
# Simply exclude common types used here for now (which becomes object
# types Spark can't recognize).
return_annotation != pd.Series and
return_annotation != pd.DataFrame and
not check_iterator_annotation(return_annotation) and
not check_tuple_annotation(return_annotation)
))
if is_series_or_frame:
return PandasUDFType.SCALAR
elif is_iterator_tuple_series_or_frame or is_iterator_series_or_frame:
return PandasUDFType.SCALAR_ITER
elif is_series_or_frame_agg:
return PandasUDFType.GROUPED_AGG
else:
raise NotImplementedError("Unsupported signature: %s." % sig)
def check_tuple_annotation(annotation, parameter_check_func=None):
# Python 3.6 has `__name__`. Python 3.7 and 3.8 have `_name`.
# Check if the name is Tuple first. After that, check the generic types.
name = getattr(annotation, "_name", getattr(annotation, "__name__", None))
return name == "Tuple" and (
parameter_check_func is None or all(map(parameter_check_func, annotation.__args__)))
def check_iterator_annotation(annotation, parameter_check_func=None):
name = getattr(annotation, "_name", getattr(annotation, "__name__", None))
return name == "Iterator" and (
parameter_check_func is None or all(map(parameter_check_func, annotation.__args__)))
def check_union_annotation(annotation, parameter_check_func=None):
import typing
# Note that we cannot rely on '__origin__' in other type hints as it has changed from version
# to version. For example, it's abc.Iterator in Python 3.7 but typing.Iterator in Python 3.6.
origin = getattr(annotation, "__origin__", None)
return origin == typing.Union and (
parameter_check_func is None or all(map(parameter_check_func, annotation.__args__)))
| apache-2.0 |
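`check_union_annotation` above peels apart typing internals (`__origin__`, `__args__`) by hand. A minimal standalone sketch of the same introspection pattern, independent of Spark (the helper name and allowed-set interface are illustrative, not part of pyspark):

```python
import typing

def is_union_of(annotation, allowed):
    """True if annotation is typing.Union[...] over only allowed types."""
    origin = getattr(annotation, "__origin__", None)
    if origin is not typing.Union:
        return False
    return all(arg in allowed for arg in annotation.__args__)
```

This mirrors how the module distinguishes `Union[pd.Series, pd.DataFrame]` from plain `pd.Series` parameters when inferring the UDF evaluation type.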
gfyoung/pandas | pandas/core/tools/times.py | 2 | 4601 | from datetime import datetime, time
from typing import List, Optional
import numpy as np
from pandas._libs.lib import is_list_like
from pandas.core.dtypes.generic import ABCIndex, ABCSeries
from pandas.core.dtypes.missing import notna
def to_time(arg, format=None, infer_time_format=False, errors="raise"):
"""
Parse time strings to time objects using fixed strptime formats ("%H:%M",
"%H%M", "%I:%M%p", "%I%M%p", "%H:%M:%S", "%H%M%S", "%I:%M:%S%p",
"%I%M%S%p")
Use infer_time_format if all the strings are in the same format to speed
up conversion.
Parameters
----------
arg : string in time format, datetime.time, list, tuple, 1-d array, Series
format : str, default None
Format used to convert arg into a time object. If None, fixed formats
are used.
infer_time_format: bool, default False
Infer the time format based on the first non-NaN element. If all
strings are in the same format, this will speed up conversion.
errors : {'ignore', 'raise', 'coerce'}, default 'raise'
- If 'raise', then invalid parsing will raise an exception
- If 'coerce', then invalid parsing will be set as None
- If 'ignore', then invalid parsing will return the input
Returns
-------
datetime.time
"""
def _convert_listlike(arg, format):
if isinstance(arg, (list, tuple)):
arg = np.array(arg, dtype="O")
elif getattr(arg, "ndim", 1) > 1:
raise TypeError(
"arg must be a string, datetime, list, tuple, 1-d array, or Series"
)
arg = np.asarray(arg, dtype="O")
if infer_time_format and format is None:
format = _guess_time_format_for_array(arg)
times: List[Optional[time]] = []
if format is not None:
for element in arg:
try:
times.append(datetime.strptime(element, format).time())
except (ValueError, TypeError) as err:
if errors == "raise":
msg = (
f"Cannot convert {element} to a time with given "
f"format {format}"
)
raise ValueError(msg) from err
elif errors == "ignore":
return arg
else:
times.append(None)
else:
formats = _time_formats[:]
format_found = False
for element in arg:
time_object = None
for time_format in formats:
try:
time_object = datetime.strptime(element, time_format).time()
if not format_found:
# Put the found format in front
fmt = formats.pop(formats.index(time_format))
formats.insert(0, fmt)
format_found = True
break
except (ValueError, TypeError):
continue
if time_object is not None:
times.append(time_object)
elif errors == "raise":
raise ValueError(f"Cannot convert arg {arg} to a time")
elif errors == "ignore":
return arg
else:
times.append(None)
return times
if arg is None:
return arg
elif isinstance(arg, time):
return arg
elif isinstance(arg, ABCSeries):
values = _convert_listlike(arg._values, format)
return arg._constructor(values, index=arg.index, name=arg.name)
elif isinstance(arg, ABCIndex):
return _convert_listlike(arg, format)
elif is_list_like(arg):
return _convert_listlike(arg, format)
return _convert_listlike(np.array([arg]), format)[0]
# Fixed time formats for time parsing
_time_formats = [
"%H:%M",
"%H%M",
"%I:%M%p",
"%I%M%p",
"%H:%M:%S",
"%H%M%S",
"%I:%M:%S%p",
"%I%M%S%p",
]
def _guess_time_format_for_array(arr):
# Try to guess the format based on the first non-NaN element
non_nan_elements = notna(arr).nonzero()[0]
if len(non_nan_elements):
element = arr[non_nan_elements[0]]
for time_format in _time_formats:
try:
datetime.strptime(element, time_format)
return time_format
except ValueError:
pass
return None
| bsd-3-clause |
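`_guess_time_format_for_array` above tries each fixed format until one parses. The same first-match idea works with the stdlib alone; this sketch reuses the module's format list but none of its pandas machinery:

```python
from datetime import datetime

_time_formats = ["%H:%M", "%H%M", "%I:%M%p", "%I%M%p",
                 "%H:%M:%S", "%H%M%S", "%I:%M:%S%p", "%I%M%S%p"]

def guess_time_format(value):
    """Return the first fixed format that parses value, else None."""
    for fmt in _time_formats:
        try:
            datetime.strptime(value, fmt)
            return fmt
        except ValueError:
            pass  # try the next candidate format
    return None
```

Because the formats are tried in order, `"14:30"` matches `"%H:%M"` before any of the longer patterns get a chance.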
jphall663/bellarmine_py_intro | exercises.py | 1 | 13625 | # -*- coding: utf-8 -*-
"""
Copyright (c) 2015 by Patrick Hall, jpatrickhall@gmail.com
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-------------------------------------------------------------------------------
Python exercises for Bellarmine Analytics program
1.) Working With Strings
2.) Loops and File I/O
3.) Lists, Dictionaries and Sets
4.) Scraping Data from the Web
5.) Numpy: Kaggle Titanic Competition
6.) IPython: Graphing Results
"""
### PREFACE ###################################################################
### A quick note about comments ..
### Lines like this are called comments - they are not executed.
#%%
# Comments like this create separate cells in the Spyder IDE.
#%%
"""
You can have multi-line comments by using triple quotes ...
like this.
"""
### A quick note about working directories ...
# All the exercises below assume you have set your working directory
# to the directory you downloaded for the class. Use the working directory
# dialog in the upper right hand corner of the Spyder IDE to set your working
# directory.
### EXERCISE 1: WORKING WITH STRINGS ##########################################
### This string represents a user being logged by a web server.
### It contains information such as the user's operating system and browser.
user_string1 = 'Mozilla/5.0 (Windows NT 6.0; WOW64) App3leWebKit/54.1 (KHTML, like Gecko) Version/4.0 Safari/539.1'
print user_string1
# SLICING
# REFERENCE: https://docs.python.org/2/tutorial/introduction.html#strings
# +---+---+---+---+---+---+
# | P | y | t | h | o | n |
# +---+---+---+---+---+---+
# 0 1 2 3 4 5 6
#-6 -5 -4 -3 -2 -1
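The diagram above can be checked directly in the interpreter; a small Python 3 demonstration (not one of the exercise solutions), using the same "Python" string as the diagram:

```python
# Positive indices count from the left, negative from the right,
# and s[i:j] stops just before index j.
s = "Python"
first = s[0]        # 'P'
last = s[-1]        # 'n'
head = s[0:2]       # 'Py'
tail = s[2:]        # 'thon'
neg_tail = s[-3:]   # 'hon'
```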
### EXERCISE 1.1: Print only the user's operating system.
### EXERCISE 1.2: Print only the user's browser.
# ESCAPE CHARACTERS
# REFERENCE: https://docs.python.org/2/tutorial/introduction.html#strings
### EXERCISE 1.3: Print the user's operating system and browser as a single
# tab-delimited string.
### EXERCISE 1.4: Print the user's operating system and browser on separate
# lines using only one python print statement.
### EXERCISE 1.5: What are some other common escape characters?
# STRING FUNCTIONS
# REFERENCE: https://docs.python.org/2/library/stdtypes.html#string-methods
user_string2 = 'Mozilla/5.0 (Linux; Android 3.2) AppleWebKit/536.4 \
(KHTML, like Gecko) Safari/536.4'
user_string3 = 'Mozilla/5.0 (Windows NT 6.0; WOW64) AppleWebKit/536.4 \
(KHTML, like Gecko) Safari/536.4'
### EXERCISE 1.5: Convert user_string2 and user_string3 to lowercase and print
# them. Why is this a good idea?
### EXERCISE 1.6: Use a simple string method to decide whether users 2 and 3
# use the same operating system as user 1.
### EXERCISE 1.7: Print a tab-separated table of each user's operating system
# and browser.
### Why does it matter what operating system a user is using?
### EXERCISE 2: LOOPS AND FILE I/O ############################################
### In exercises 2 and 3 we will clean up several example dating profiles.
# Once we have the text ready for analysis, we will try to make the best match
# between profiles in exercise 3.
# REFERENCES:
# https://docs.python.org/2/tutorial/controlflow.html
# https://docs.python.org/2/tutorial/inputoutput.html#reading-and-writing-files
# https://docs.python.org/2/library/stdtypes.html#string-methods
### EXERCISE 2.1: Count the number of lines in the profiles_raw.txt file using
# a for loop.
### EXERCISE 2.2: Remove all non-alphabetic characters from the
# profiles_raw.txt file. Save the new file as profiles_clean.txt.
### EXERCISE 2.3: Modify your code from exercise 2.2 (remove all non-alphabetic
# characters from the profiles_raw.txt file) AND to convert all lines of new
# the 'profiles_clean.txt' file to lowercase.
### EXERCISE 2.4: Use the solutions of exercises 2.1 (counting the number of
# lines in profiles_raw.txt) and 2.3 (removing all non-alphabetic character and
# converting to lowercase) to add a progress indicator to our file cleaning
# code. The progress indicator should tell you which line of the file your
# program is working on.
### Why do you think progress indicator's are important?
### BONUS EXERCISE 2.5: Execute the solution_2.py file from the OS command
# prompt.
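One possible shape for exercises 2.2-2.4 (a Python 3 sketch, not the official solution; treating "alphabetic or whitespace" as the characters to keep is an assumption about what "non-alphabetic" means here):

```python
def clean_line(line):
    """Keep alphabetic characters and whitespace, then lowercase."""
    kept = "".join(ch for ch in line if ch.isalpha() or ch.isspace())
    return kept.lower()

def clean_file(in_path, out_path):
    """Clean a file line by line with a simple progress indicator."""
    with open(in_path) as src, open(out_path, "w") as dst:
        for i, line in enumerate(src, start=1):
            print("cleaning line", i)  # progress indicator (exercise 2.4)
            dst.write(clean_line(line))
```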
### EXERCISE 3: LISTS, DICTIONARIES, AND SETS #################################
### In exercises 2 and 3 we will clean up several example dating profiles. Once
# we have the text ready for analysis, we will try to make the best match
# between profiles in exercise 3.
# REFERENCES:
# https://docs.python.org/2/tutorial/datastructures.html
# https://docs.python.org/2/library/collections.html
### EXERCISE 3.1: Turn each line into a list to count the number of words in
# the profiles_raw.txt file.
### EXERCISE 3.2: Use list comprehensions to re-write the solution to exercise
# 2.4 (remove all non-alphabetic characters from the profiles_raw.txt and
# convert to lowercase) in a more "pythonic" fashion.
### Exercise 3.3: Modify the list comprehension in exercise 3.2 (remove all
# non-alphabetic characters from the profiles_raw.txt and convert to lowercase)
# to also remove words with 3 or less characters in addition to the other
# cleaning tasks.
### Exercise 3.4: Use collections, lists, sets, and dictionaries to create a
# set of frequently (10+) occurring terms in the cleaned profiles and keep only
# them in the profiles. Create a new file called 'profiles_cleaned_freq.txt'.
# (HINT) Import the collections module.
# (HINT) Create a list of every word in the cleaned profiles.
# (HINT) Create a dictionary of term counts using the Counter collection.
# (HINT) Create a set that keeps only terms in the dictionary that occured more
# than ten times.
# (HINT) Compare a list of the words in a line to the set of frequent terms to
# keep only the frequently occuring terms.
### Exercise 3.5: Use collections and dictionaries to create a dictionary of
# term counts for each profile. Use these dictionaries of term frequencies to
# decide which of the profiles are the best match. Print the term counts to a
# new file called 'profiles_term_counts.txt'.
# (HINT) Import collections module.
# (HINT) Print a dictionary of term counts for each individual cleaned profile.
# Which of these profiles looks the most compatible to you?
# (HINT) Compare the term counts of important words.
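A compact sketch of the pipeline the hints above describe (toy profiles and a frequency threshold of 2 instead of 10, purely for illustration; not the official exercise solution):

```python
from collections import Counter

profiles = [
    "love hiking and dogs and coffee",
    "coffee and books and dogs",
    "cats and opera",
]
# One flat list of every word across all profiles.
all_words = [w for line in profiles for w in line.split()]
# Dictionary of term counts via the Counter collection.
counts = Counter(all_words)
# Set of terms that occur at least twice.
frequent = {term for term, n in counts.items() if n >= 2}
# Keep only the frequent terms in each profile.
filtered = [[w for w in line.split() if w in frequent]
            for line in profiles]
```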
### EXERCISE 4: SCRAPING DATA FROM THE WEB ####################################
# REFERENCES:
#https://docs.python.org/2/howto/urllib2.html
#http://www.crummy.com/software/BeautifulSoup/bs4/doc/
#https://docs.python.org/2/tutorial/controlflow.html#more-on-defining-functions
url = 'http://www.bellarmine.edu/analytics/'
### EXERCISE 4.1: Use urllib2 and BeautifulSoup to scrape information from the
# Bellarmine MSA homepage and print it to the console.
### EXERCISE 4.2: Define a function to extract only the text within paragraphs
# from a web page and print that text to the console. Use the Bellarmine MSA
# page to test your function.
# (HINT) Define the get_pretty_text_from_url function by ...
# (HINT) Connecting to the url.
# (HINT) Using the find_all function from BeautifulSoup to locate the
# paragraphs.
# (HINT) find_all('p')
# (HINT) Using the get_text function from BeautifulSoup to extract the text
# from each paragraph.
# (HINT) Execute the function - don't forget!
### EXERCISE 4.3: Use the get_pretty_text_from_url function to cycle through
# every link on the Bellarmine MSA page and print the text from every paragraph
# of every link to the console.
# REFERENCE: https://docs.python.org/2/tutorial/errors.html
# (HINT) Connect to the url
# (HINT) Use a for loop and the BeautifulSoup find_all() function to cycle
# through every link on the Bellarmine MSA page.
# (HINT) find_all('a') will locate hyperlinks.
# (HINT) get('href') will extract links.
# (HINT) Check that a link was extracted successfully.
# (HINT) We can skip internal links.
# (HINT) Use a try-catch block to continue to new links even if a given link
# causes an error.
# (HINT) This may take a few minutes to run.
### EXERCISE 4.4. Scrape the famous Abalone data set from the UCI repository
# and write it to a CSV file.
# (HINT) Read a bit about the file.
url = 'http://archive.ics.uci.edu/ml/machine-learning-databases/abalone/\
abalone.names'
url = 'http://archive.ics.uci.edu/ml/machine-learning-databases/abalone/\
abalone.data'
# (HINT) Since the structure of the page is very simple, we can use the
# urllib2 read() function to parse the table. We don't need BeautifulSoup.
# (HINT) Open a new CSV file, 'abalone.csv', and write the table to the CSV
# using a for loop.
### EXERCISE 4.5: Name some other places on the web to find data.
### EXERCISE 5: KAGGLE TITANIC COMPETITION ####################################
### REFERENCES:
# http://www.kaggle.com/c/titanic-gettingStarted
# http://wiki.scipy.org/Tentative_NumPy_Tutorial
# https://docs.python.org/2/library/csv.html
# gendermodel.py - by Kaggle user AstroDave
### EXERCISE 5.1: Use the csv module to read the titanic training data
# (train.csv) into a numpy array called data.
### EXERCISE 5.2
# A: How many people were on the titanic?
# B: How many people survived?
# C: What proportion of people survived?
### EXERCISE 5.3: "WOMEN AND CHILDREN FIRST!"
# What proportion of women survived? What proportion of men survived?
### EXERCISE 5.4: Let's see how accurate our model is on the training data.
# For every person in the training data, predict that men will die and
# women will survive. What percentage of the time are we correct? Is this
# more accurate than predicting everyone dies?
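A toy illustration of Exercises 5.2-5.4 on made-up data (the real exercises use train.csv). Each record is a `(sex, survived)` pair; the "gender model" predicts survival for women and death for men, and its accuracy is simply the share of records it labels correctly.

```python
# Survival proportions and gender-model accuracy on a tiny invented sample.
passengers = [('male', 0), ('male', 0), ('male', 1),
              ('female', 1), ('female', 1), ('female', 0)]

n = len(passengers)
survived = sum(s for _, s in passengers)
print(survived / n)                                   # overall survival rate

women = [s for sex, s in passengers if sex == 'female']
men = [s for sex, s in passengers if sex == 'male']
print(sum(women) / len(women), sum(men) / len(men))   # by-sex survival rates

# gender model: predict 1 for women, 0 for men
correct = sum(1 for sex, s in passengers
              if s == (1 if sex == 'female' else 0))
print(correct / n)                                    # model accuracy
```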
### EXERCISE 5.5: Use the results of this analysis to make predictions about
# the passengers in the test set. Basically, if the passenger was male, we
# predict he will die. If the passenger was female, we predict she will survive.
#
# Make a CSV file containing only two columns:
# First column: the passenger IDs from the test.csv file.
# Second column: a 0 if the passenger was male; a 1 if the passenger was
# female. The first row (header) should read: "PassengerId", "Survived"
# (HINT) First, read in test.csv, skipping the first row.
# (HINT) Then, write the predictions file ...
# (HINT) Write the column headers. Then, for each row in the test file, if it
# is a female, then write the PassengerId, and predict 1. Otherwise the
# passenger is male, and write the PassengerId, and predict 0.
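The hinted output file can be sketched in memory: a few invented rows stand in for test.csv (the third field plays the role of the Sex column), and `io.StringIO` stands in for the predictions file the exercise would actually write.

```python
# Sketch of the Exercise 5.5 predictions file: header row, then
# PassengerId with 1 for female passengers and 0 for male passengers.
import csv
import io

test_rows = [['892', '3', 'male'], ['893', '3', 'female'], ['894', '2', 'male']]

out = io.StringIO()
writer = csv.writer(out)
writer.writerow(['PassengerId', 'Survived'])     # column headers first
for row in test_rows:
    if row[2] == 'female':
        writer.writerow([row[0], '1'])           # predict she survives
    else:
        writer.writerow([row[0], '0'])           # predict he dies
print(out.getvalue())
```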
### BONUS EXERCISE 5.5: Submit your model to the Kaggle competition. What was
# your score? What does this number mean?
### EXERCISE 6: IPYTHON: PLOTTING RESULTS #####################################
### REFERENCES:
# http://ipython.org/
# http://matplotlib.org/
# http://nbviewer.ipython.org/github/jphall663/bellarmine_py_intro/blob/master/Titanic.ipynb
### To start an Ipython session:
# 1.) Open a command prompt and change directories to the class working
# directory.
# 2.) Start an Ipython session by typing something like:
# C:\Anaconda\ipython.exe notebook
# 3.) Open a browser and navigate to the given url,
# probably something like: http://localhost:8888/
# 4.) Press 'New Notebook' in the upper righthand corner.
# 5.) Enter the python statements in this exercise into the notebook prompt
# one-by-one.
# We are going to construct a simple stacked bar chart ...
# Import the training data.
import csv as csv
import numpy as np
nfile_ref = open('train.csv', 'r')
csv_file = csv.reader(nfile_ref) # Load the csv file.
header = csv_file.next() # Skip the first line as it is a header.
data = [] # Create a variable to hold the data.
for row in csv_file: # Skip through each row in the csv file,
data.append(row[0:]) # adding each row to the data variable.
data = np.array(data) # Then convert from a list to a Numpy array.
nfile_ref.close()
# Import matplotlib and allow it to plot in the notebook.
import matplotlib.pyplot as plt
%matplotlib inline
# Import Numpy
import numpy as np
# Make some magic numbers for the plot ... like:
# The location along the x-axis where the bars will sit.
# And the width of the bars.
bottom_locs = np.array([1., 2.])
width = 0.3
# Define the actual quantities to plot:
# The numbers of men who died and who survived.
# The numbers of women who died and who survived.
# This finds all the men.
men_only_stats = data[0::, 4] != "female"
# 1st column of data (survived= 0,1), but only men.
men_onboard = data[men_only_stats, 1].astype(np.float)
men = (np.size(men_onboard)-np.sum(men_onboard), np.sum(men_onboard))
# This finds all the women.
women_only_stats = data[0::, 4] == "female"
# 1st column of data (survived= 0,1), but only women.
women_onboard = data[women_only_stats, 1].astype(np.float)
women = (np.size(women_onboard)-np.sum(women_onboard), np.sum(women_onboard))
# Add the values to the plot.
plt.bar(bottom_locs, men, label='Male', width=width)
plt.bar(bottom_locs, women, color='m', label='Female', width=width, bottom=men)
# Decorate the plot.
plt.ylabel('Count')
plt.title('Who Survived the Titanic?')
plt.legend(loc='best')
plt.xticks(bottom_locs+width/2., ('Died', 'Survived'))
| apache-2.0 |
elkingtonmcb/nupic | external/linux32/lib/python2.6/site-packages/matplotlib/colors.py | 69 | 31676 | """
A module for converting numbers or color arguments to *RGB* or *RGBA*
*RGB* and *RGBA* are sequences of, respectively, 3 or 4 floats in the
range 0-1.
This module includes functions and classes for color specification
conversions, and for mapping numbers to colors in a 1-D array of
colors called a colormap. Colormapping typically involves two steps:
a data array is first mapped onto the range 0-1 using an instance
of :class:`Normalize` or of a subclass; then this number in the 0-1
range is mapped to a color using an instance of a subclass of
:class:`Colormap`. Two are provided here:
:class:`LinearSegmentedColormap`, which is used to generate all
the built-in colormap instances, but is also useful for making
custom colormaps, and :class:`ListedColormap`, which is used for
generating a custom colormap from a list of color specifications.
The module also provides a single instance, *colorConverter*, of the
:class:`ColorConverter` class providing methods for converting single
color specifications or sequences of them to *RGB* or *RGBA*.
Commands which take color arguments can use several formats to specify
the colors. For the basic builtin colors, you can use a single letter
- b : blue
- g : green
- r : red
- c : cyan
- m : magenta
- y : yellow
- k : black
- w : white
Gray shades can be given as a string encoding a float in the 0-1
range, e.g.::
color = '0.75'
For a greater range of colors, you have two options. You can specify
the color using an html hex string, as in::
color = '#eeefff'
or you can pass an *R* , *G* , *B* tuple, where each of *R* , *G* , *B*
are in the range [0,1].
Finally, legal html names for colors, like 'red', 'burlywood' and
'chartreuse' are supported.
"""
import re
import numpy as np
from numpy import ma
import matplotlib.cbook as cbook
parts = np.__version__.split('.')
NP_MAJOR, NP_MINOR = map(int, parts[:2])
# true if clip supports the out kwarg
NP_CLIP_OUT = NP_MAJOR>=1 and NP_MINOR>=2
cnames = {
'aliceblue' : '#F0F8FF',
'antiquewhite' : '#FAEBD7',
'aqua' : '#00FFFF',
'aquamarine' : '#7FFFD4',
'azure' : '#F0FFFF',
'beige' : '#F5F5DC',
'bisque' : '#FFE4C4',
'black' : '#000000',
'blanchedalmond' : '#FFEBCD',
'blue' : '#0000FF',
'blueviolet' : '#8A2BE2',
'brown' : '#A52A2A',
'burlywood' : '#DEB887',
'cadetblue' : '#5F9EA0',
'chartreuse' : '#7FFF00',
'chocolate' : '#D2691E',
'coral' : '#FF7F50',
'cornflowerblue' : '#6495ED',
'cornsilk' : '#FFF8DC',
'crimson' : '#DC143C',
'cyan' : '#00FFFF',
'darkblue' : '#00008B',
'darkcyan' : '#008B8B',
'darkgoldenrod' : '#B8860B',
'darkgray' : '#A9A9A9',
'darkgreen' : '#006400',
'darkkhaki' : '#BDB76B',
'darkmagenta' : '#8B008B',
'darkolivegreen' : '#556B2F',
'darkorange' : '#FF8C00',
'darkorchid' : '#9932CC',
'darkred' : '#8B0000',
'darksalmon' : '#E9967A',
'darkseagreen' : '#8FBC8F',
'darkslateblue' : '#483D8B',
'darkslategray' : '#2F4F4F',
'darkturquoise' : '#00CED1',
'darkviolet' : '#9400D3',
'deeppink' : '#FF1493',
'deepskyblue' : '#00BFFF',
'dimgray' : '#696969',
'dodgerblue' : '#1E90FF',
'firebrick' : '#B22222',
'floralwhite' : '#FFFAF0',
'forestgreen' : '#228B22',
'fuchsia' : '#FF00FF',
'gainsboro' : '#DCDCDC',
'ghostwhite' : '#F8F8FF',
'gold' : '#FFD700',
'goldenrod' : '#DAA520',
'gray' : '#808080',
'green' : '#008000',
'greenyellow' : '#ADFF2F',
'honeydew' : '#F0FFF0',
'hotpink' : '#FF69B4',
'indianred' : '#CD5C5C',
'indigo' : '#4B0082',
'ivory' : '#FFFFF0',
'khaki' : '#F0E68C',
'lavender' : '#E6E6FA',
'lavenderblush' : '#FFF0F5',
'lawngreen' : '#7CFC00',
'lemonchiffon' : '#FFFACD',
'lightblue' : '#ADD8E6',
'lightcoral' : '#F08080',
'lightcyan' : '#E0FFFF',
'lightgoldenrodyellow' : '#FAFAD2',
'lightgreen' : '#90EE90',
'lightgrey' : '#D3D3D3',
'lightpink' : '#FFB6C1',
'lightsalmon' : '#FFA07A',
'lightseagreen' : '#20B2AA',
'lightskyblue' : '#87CEFA',
'lightslategray' : '#778899',
'lightsteelblue' : '#B0C4DE',
'lightyellow' : '#FFFFE0',
'lime' : '#00FF00',
'limegreen' : '#32CD32',
'linen' : '#FAF0E6',
'magenta' : '#FF00FF',
'maroon' : '#800000',
'mediumaquamarine' : '#66CDAA',
'mediumblue' : '#0000CD',
'mediumorchid' : '#BA55D3',
'mediumpurple' : '#9370DB',
'mediumseagreen' : '#3CB371',
'mediumslateblue' : '#7B68EE',
'mediumspringgreen' : '#00FA9A',
'mediumturquoise' : '#48D1CC',
'mediumvioletred' : '#C71585',
'midnightblue' : '#191970',
'mintcream' : '#F5FFFA',
'mistyrose' : '#FFE4E1',
'moccasin' : '#FFE4B5',
'navajowhite' : '#FFDEAD',
'navy' : '#000080',
'oldlace' : '#FDF5E6',
'olive' : '#808000',
'olivedrab' : '#6B8E23',
'orange' : '#FFA500',
'orangered' : '#FF4500',
'orchid' : '#DA70D6',
'palegoldenrod' : '#EEE8AA',
'palegreen' : '#98FB98',
'palevioletred' : '#DB7093',
'papayawhip' : '#FFEFD5',
'peachpuff' : '#FFDAB9',
'peru' : '#CD853F',
'pink' : '#FFC0CB',
'plum' : '#DDA0DD',
'powderblue' : '#B0E0E6',
'purple' : '#800080',
'red' : '#FF0000',
'rosybrown' : '#BC8F8F',
'royalblue' : '#4169E1',
'saddlebrown' : '#8B4513',
'salmon' : '#FA8072',
'sandybrown' : '#FAA460',
'seagreen' : '#2E8B57',
'seashell' : '#FFF5EE',
'sienna' : '#A0522D',
'silver' : '#C0C0C0',
'skyblue' : '#87CEEB',
'slateblue' : '#6A5ACD',
'slategray' : '#708090',
'snow' : '#FFFAFA',
'springgreen' : '#00FF7F',
'steelblue' : '#4682B4',
'tan' : '#D2B48C',
'teal' : '#008080',
'thistle' : '#D8BFD8',
'tomato' : '#FF6347',
'turquoise' : '#40E0D0',
'violet' : '#EE82EE',
'wheat' : '#F5DEB3',
'white' : '#FFFFFF',
'whitesmoke' : '#F5F5F5',
'yellow' : '#FFFF00',
'yellowgreen' : '#9ACD32',
}
# add british equivs
for k, v in cnames.items():
if k.find('gray')>=0:
k = k.replace('gray', 'grey')
cnames[k] = v
def is_color_like(c):
'Return *True* if *c* can be converted to *RGB*'
try:
colorConverter.to_rgb(c)
return True
except ValueError:
return False
def rgb2hex(rgb):
'Given a len 3 rgb tuple of 0-1 floats, return the hex string'
return '#%02x%02x%02x' % tuple([round(val*255) for val in rgb])
hexColorPattern = re.compile("\A#[a-fA-F0-9]{6}\Z")
def hex2color(s):
"""
Take a hex string *s* and return the corresponding rgb 3-tuple
Example: #efefef -> (0.93725, 0.93725, 0.93725)
"""
if not isinstance(s, basestring):
raise TypeError('hex2color requires a string argument')
if hexColorPattern.match(s) is None:
raise ValueError('invalid hex color string "%s"' % s)
return tuple([int(n, 16)/255.0 for n in (s[1:3], s[3:5], s[5:7])])
class ColorConverter:
"""
Provides methods for converting color specifications to *RGB* or *RGBA*
Caching is used for more efficient conversion upon repeated calls
with the same argument.
Ordinarily only the single instance instantiated in this module,
*colorConverter*, is needed.
"""
colors = {
'b' : (0.0, 0.0, 1.0),
'g' : (0.0, 0.5, 0.0),
'r' : (1.0, 0.0, 0.0),
'c' : (0.0, 0.75, 0.75),
'm' : (0.75, 0, 0.75),
'y' : (0.75, 0.75, 0),
'k' : (0.0, 0.0, 0.0),
'w' : (1.0, 1.0, 1.0),
}
cache = {}
def to_rgb(self, arg):
"""
Returns an *RGB* tuple of three floats from 0-1.
*arg* can be an *RGB* or *RGBA* sequence or a string in any of
several forms:
1) a letter from the set 'rgbcmykw'
2) a hex color string, like '#00FFFF'
3) a standard name, like 'aqua'
4) a float, like '0.4', indicating gray on a 0-1 scale
if *arg* is *RGBA*, the *A* will simply be discarded.
"""
try: return self.cache[arg]
except KeyError: pass
except TypeError: # could be unhashable rgb seq
arg = tuple(arg)
try: return self.cache[arg]
except KeyError: pass
except TypeError:
raise ValueError(
'to_rgb: arg "%s" is unhashable even inside a tuple'
% (str(arg),))
try:
if cbook.is_string_like(arg):
color = self.colors.get(arg, None)
if color is None:
str1 = cnames.get(arg, arg)
if str1.startswith('#'):
color = hex2color(str1)
else:
fl = float(arg)
if fl < 0 or fl > 1:
raise ValueError(
'gray (string) must be in range 0-1')
color = tuple([fl]*3)
elif cbook.iterable(arg):
if len(arg) > 4 or len(arg) < 3:
raise ValueError(
'sequence length is %d; must be 3 or 4'%len(arg))
color = tuple(arg[:3])
if [x for x in color if (float(x) < 0) or (x > 1)]:
# This will raise TypeError if x is not a number.
raise ValueError('number in rgb sequence outside 0-1 range')
else:
raise ValueError('cannot convert argument to rgb sequence')
self.cache[arg] = color
except (KeyError, ValueError, TypeError), exc:
raise ValueError('to_rgb: Invalid rgb arg "%s"\n%s' % (str(arg), exc))
# Error messages could be improved by handling TypeError
# separately; but this should be rare and not too hard
# for the user to figure out as-is.
return color
def to_rgba(self, arg, alpha=None):
"""
Returns an *RGBA* tuple of four floats from 0-1.
For acceptable values of *arg*, see :meth:`to_rgb`.
If *arg* is an *RGBA* sequence and *alpha* is not *None*,
*alpha* will replace the original *A*.
"""
try:
if not cbook.is_string_like(arg) and cbook.iterable(arg):
if len(arg) == 4:
if [x for x in arg if (float(x) < 0) or (x > 1)]:
# This will raise TypeError if x is not a number.
raise ValueError('number in rgba sequence outside 0-1 range')
if alpha is None:
return tuple(arg)
if alpha < 0.0 or alpha > 1.0:
raise ValueError("alpha must be in range 0-1")
return arg[0], arg[1], arg[2], arg[3] * alpha
r,g,b = arg[:3]
if [x for x in (r,g,b) if (float(x) < 0) or (x > 1)]:
raise ValueError('number in rgb sequence outside 0-1 range')
else:
r,g,b = self.to_rgb(arg)
if alpha is None:
alpha = 1.0
return r,g,b,alpha
except (TypeError, ValueError), exc:
raise ValueError('to_rgba: Invalid rgba arg "%s"\n%s' % (str(arg), exc))
def to_rgba_array(self, c, alpha=None):
"""
Returns a numpy array of *RGBA* tuples.
Accepts a single mpl color spec or a sequence of specs.
Special case to handle "no color": if *c* is "none" (case-insensitive),
then an empty array will be returned. Same for an empty list.
"""
try:
if c.lower() == 'none':
return np.zeros((0,4), dtype=np.float_)
except AttributeError:
pass
if len(c) == 0:
return np.zeros((0,4), dtype=np.float_)
try:
result = np.array([self.to_rgba(c, alpha)], dtype=np.float_)
except ValueError:
if isinstance(c, np.ndarray):
if c.ndim != 2 and c.dtype.kind not in 'SU':
raise ValueError("Color array must be two-dimensional")
result = np.zeros((len(c), 4))
for i, cc in enumerate(c):
result[i] = self.to_rgba(cc, alpha) # change in place
return np.asarray(result, np.float_)
colorConverter = ColorConverter()
def makeMappingArray(N, data):
"""Create an *N* -element 1-d lookup table
*data* represented by a list of x,y0,y1 mapping correspondences.
Each element in this list represents how a value between 0 and 1
(inclusive) represented by x is mapped to a corresponding value
between 0 and 1 (inclusive). The two values of y are to allow
for discontinuous mapping functions (say as might be found in a
sawtooth) where y0 represents the value of y for values of x
<= to that given, and y1 is the value to be used for x > than
that given). The list must start with x=0, end with x=1, and
all values of x must be in increasing order. Values between
the given mapping points are determined by simple linear interpolation.
The function returns an array "result" where ``result[x*(N-1)]``
gives the closest value for values of x between 0 and 1.
"""
try:
adata = np.array(data)
except:
raise TypeError("data must be convertable to an array")
shape = adata.shape
if len(shape) != 2 or shape[1] != 3:
raise ValueError("data must be nx3 format")
x = adata[:,0]
y0 = adata[:,1]
y1 = adata[:,2]
if x[0] != 0. or x[-1] != 1.0:
raise ValueError(
"data mapping points must start with x=0. and end with x=1")
if np.sometrue(np.sort(x)-x):
raise ValueError(
"data mapping points must have x in increasing order")
# begin generation of lookup table
x = x * (N-1)
lut = np.zeros((N,), np.float)
xind = np.arange(float(N))
ind = np.searchsorted(x, xind)[1:-1]
lut[1:-1] = ( ((xind[1:-1] - x[ind-1]) / (x[ind] - x[ind-1]))
* (y0[ind] - y1[ind-1]) + y1[ind-1])
lut[0] = y1[0]
lut[-1] = y0[-1]
# ensure that the lut is confined to values between 0 and 1 by clipping it
lut = np.clip(lut, 0.0, 1.0)
#lut = where(lut > 1., 1., lut)
#lut = where(lut < 0., 0., lut)
return lut
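For the common continuous case (`y0 == y1` at every mapping point), the lookup table `makeMappingArray` builds is plain linear interpolation, which `np.interp` can sketch in a few lines. This assumes numpy is available; discontinuous entries (`y0 != y1`) need the `searchsorted` logic above and are not handled here.

```python
# Continuous-case sketch of the lookup table built above.
import numpy as np

def make_lut(N, data):
    x, y0, y1 = np.array(data).T               # columns of the nx3 mapping table
    return np.interp(np.linspace(0.0, 1.0, N), x, y0)

lut = make_lut(5, [(0.0, 0.0, 0.0), (0.5, 1.0, 1.0), (1.0, 1.0, 1.0)])
print(lut)          # [0.  0.5 1.  1.  1. ]
```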
class Colormap:
"""Base class for all scalar to rgb mappings
Important methods:
* :meth:`set_bad`
* :meth:`set_under`
* :meth:`set_over`
"""
def __init__(self, name, N=256):
"""
Public class attributes:
:attr:`N` : number of rgb quantization levels
:attr:`name` : name of colormap
"""
self.name = name
self.N = N
self._rgba_bad = (0.0, 0.0, 0.0, 0.0) # If bad, don't paint anything.
self._rgba_under = None
self._rgba_over = None
self._i_under = N
self._i_over = N+1
self._i_bad = N+2
self._isinit = False
def __call__(self, X, alpha=1.0, bytes=False):
"""
*X* is either a scalar or an array (of any dimension).
If scalar, a tuple of rgba values is returned, otherwise
an array with the new shape = oldshape+(4,). If the X-values
are integers, then they are used as indices into the array.
If they are floating point, then they must be in the
interval (0.0, 1.0).
Alpha must be a scalar.
If bytes is False, the rgba values will be floats on a
0-1 scale; if True, they will be uint8, 0-255.
"""
if not self._isinit: self._init()
alpha = min(alpha, 1.0) # alpha must be between 0 and 1
alpha = max(alpha, 0.0)
self._lut[:-3, -1] = alpha
mask_bad = None
if not cbook.iterable(X):
vtype = 'scalar'
xa = np.array([X])
else:
vtype = 'array'
xma = ma.asarray(X)
xa = xma.filled(0)
mask_bad = ma.getmask(xma)
if xa.dtype.char in np.typecodes['Float']:
np.putmask(xa, xa==1.0, 0.9999999) #Treat 1.0 as slightly less than 1.
# The following clip is fast, and prevents possible
# conversion of large positive values to negative integers.
if NP_CLIP_OUT:
np.clip(xa * self.N, -1, self.N, out=xa)
else:
xa = np.clip(xa * self.N, -1, self.N)
xa = xa.astype(int)
# Set the over-range indices before the under-range;
# otherwise the under-range values get converted to over-range.
np.putmask(xa, xa>self.N-1, self._i_over)
np.putmask(xa, xa<0, self._i_under)
if mask_bad is not None and mask_bad.shape == xa.shape:
np.putmask(xa, mask_bad, self._i_bad)
if bytes:
lut = (self._lut * 255).astype(np.uint8)
else:
lut = self._lut
rgba = np.empty(shape=xa.shape+(4,), dtype=lut.dtype)
lut.take(xa, axis=0, mode='clip', out=rgba)
# twice as fast as lut[xa];
# using the clip or wrap mode and providing an
# output array speeds it up a little more.
if vtype == 'scalar':
rgba = tuple(rgba[0,:])
return rgba
def set_bad(self, color = 'k', alpha = 1.0):
'''Set color to be used for masked values.
'''
self._rgba_bad = colorConverter.to_rgba(color, alpha)
if self._isinit: self._set_extremes()
def set_under(self, color = 'k', alpha = 1.0):
'''Set color to be used for low out-of-range values.
Requires norm.clip = False
'''
self._rgba_under = colorConverter.to_rgba(color, alpha)
if self._isinit: self._set_extremes()
def set_over(self, color = 'k', alpha = 1.0):
'''Set color to be used for high out-of-range values.
Requires norm.clip = False
'''
self._rgba_over = colorConverter.to_rgba(color, alpha)
if self._isinit: self._set_extremes()
def _set_extremes(self):
if self._rgba_under:
self._lut[self._i_under] = self._rgba_under
else:
self._lut[self._i_under] = self._lut[0]
if self._rgba_over:
self._lut[self._i_over] = self._rgba_over
else:
self._lut[self._i_over] = self._lut[self.N-1]
self._lut[self._i_bad] = self._rgba_bad
def _init(self):
'''Generate the lookup table, self._lut'''
raise NotImplementedError("Abstract class only")
def is_gray(self):
if not self._isinit: self._init()
return (np.alltrue(self._lut[:,0] == self._lut[:,1])
and np.alltrue(self._lut[:,0] == self._lut[:,2]))
class LinearSegmentedColormap(Colormap):
"""Colormap objects based on lookup tables using linear segments.
The lookup table is generated using linear interpolation for each
primary color, with the 0-1 domain divided into any number of
segments.
"""
def __init__(self, name, segmentdata, N=256):
"""Create color map from linear mapping segments
segmentdata argument is a dictionary with a red, green and blue
entries. Each entry should be a list of *x*, *y0*, *y1* tuples,
forming rows in a table.
Example: suppose you want red to increase from 0 to 1 over
the bottom half, green to do the same over the middle half,
and blue over the top half. Then you would use::
cdict = {'red': [(0.0, 0.0, 0.0),
(0.5, 1.0, 1.0),
(1.0, 1.0, 1.0)],
'green': [(0.0, 0.0, 0.0),
(0.25, 0.0, 0.0),
(0.75, 1.0, 1.0),
(1.0, 1.0, 1.0)],
'blue': [(0.0, 0.0, 0.0),
(0.5, 0.0, 0.0),
(1.0, 1.0, 1.0)]}
Each row in the table for a given color is a sequence of
*x*, *y0*, *y1* tuples. In each sequence, *x* must increase
monotonically from 0 to 1. For any input value *z* falling
between *x[i]* and *x[i+1]*, the output value of a given color
will be linearly interpolated between *y1[i]* and *y0[i+1]*::
row i: x y0 y1
/
/
row i+1: x y0 y1
Hence y0 in the first row and y1 in the last row are never used.
.. seealso::
:func:`makeMappingArray`
"""
self.monochrome = False # True only if all colors in map are identical;
# needed for contouring.
Colormap.__init__(self, name, N)
self._segmentdata = segmentdata
def _init(self):
self._lut = np.ones((self.N + 3, 4), np.float)
self._lut[:-3, 0] = makeMappingArray(self.N, self._segmentdata['red'])
self._lut[:-3, 1] = makeMappingArray(self.N, self._segmentdata['green'])
self._lut[:-3, 2] = makeMappingArray(self.N, self._segmentdata['blue'])
self._isinit = True
self._set_extremes()
class ListedColormap(Colormap):
"""Colormap object generated from a list of colors.
This may be most useful when indexing directly into a colormap,
but it can also be used to generate special colormaps for ordinary
mapping.
"""
def __init__(self, colors, name = 'from_list', N = None):
"""
Make a colormap from a list of colors.
*colors*
a list of matplotlib color specifications,
or an equivalent Nx3 floating point array (*N* rgb values)
*name*
a string to identify the colormap
*N*
the number of entries in the map. The default is *None*,
in which case there is one colormap entry for each
element in the list of colors. If::
N < len(colors)
the list will be truncated at *N*. If::
N > len(colors)
the list will be extended by repetition.
"""
self.colors = colors
self.monochrome = False # True only if all colors in map are identical;
# needed for contouring.
if N is None:
N = len(self.colors)
else:
if cbook.is_string_like(self.colors):
self.colors = [self.colors] * N
self.monochrome = True
elif cbook.iterable(self.colors):
self.colors = list(self.colors) # in case it was a tuple
if len(self.colors) == 1:
self.monochrome = True
if len(self.colors) < N:
self.colors = list(self.colors) * N
del(self.colors[N:])
else:
try: gray = float(self.colors)
except TypeError: pass
else: self.colors = [gray] * N
self.monochrome = True
Colormap.__init__(self, name, N)
def _init(self):
rgb = np.array([colorConverter.to_rgb(c)
for c in self.colors], np.float)
self._lut = np.zeros((self.N + 3, 4), np.float)
self._lut[:-3, :-1] = rgb
self._lut[:-3, -1] = 1
self._isinit = True
self._set_extremes()
class Normalize:
"""
Normalize a given value to the 0-1 range
"""
def __init__(self, vmin=None, vmax=None, clip=False):
"""
If *vmin* or *vmax* is not given, they are taken from the input's
minimum and maximum value respectively. If *clip* is *True* and
the given value falls outside the range, the returned value
will be 0 or 1, whichever is closer. Returns 0 if::
vmin==vmax
Works with scalars or arrays, including masked arrays. If
*clip* is *True*, masked values are set to 1; otherwise they
remain masked. Clipping silently defeats the purpose of setting
the over, under, and masked colors in the colormap, so it is
likely to lead to surprises; therefore the default is
*clip* = *False*.
"""
self.vmin = vmin
self.vmax = vmax
self.clip = clip
def __call__(self, value, clip=None):
if clip is None:
clip = self.clip
if cbook.iterable(value):
vtype = 'array'
val = ma.asarray(value).astype(np.float)
else:
vtype = 'scalar'
val = ma.array([value]).astype(np.float)
self.autoscale_None(val)
vmin, vmax = self.vmin, self.vmax
if vmin > vmax:
raise ValueError("minvalue must be less than or equal to maxvalue")
elif vmin==vmax:
return 0.0 * val
else:
if clip:
mask = ma.getmask(val)
val = ma.array(np.clip(val.filled(vmax), vmin, vmax),
mask=mask)
result = (val-vmin) * (1.0/(vmax-vmin))
if vtype == 'scalar':
result = result[0]
return result
def inverse(self, value):
if not self.scaled():
raise ValueError("Not invertible until scaled")
vmin, vmax = self.vmin, self.vmax
if cbook.iterable(value):
val = ma.asarray(value)
return vmin + val * (vmax - vmin)
else:
return vmin + value * (vmax - vmin)
def autoscale(self, A):
'''
Set *vmin*, *vmax* to min, max of *A*.
'''
self.vmin = ma.minimum(A)
self.vmax = ma.maximum(A)
def autoscale_None(self, A):
' autoscale only None-valued vmin or vmax'
if self.vmin is None: self.vmin = ma.minimum(A)
if self.vmax is None: self.vmax = ma.maximum(A)
def scaled(self):
'return true if vmin and vmax set'
return (self.vmin is not None and self.vmax is not None)
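The core of `Normalize.__call__` is the affine map from `[vmin, vmax]` onto `[0, 1]`. A hedged numpy sketch, with the masked-array and scalar handling of the real class omitted:

```python
# Minimal min-max normalization, mirroring the arithmetic in __call__ above.
import numpy as np

def normalize(values, vmin, vmax, clip=False):
    val = np.asarray(values, dtype=float)
    if clip:
        val = np.clip(val, vmin, vmax)
    return (val - vmin) * (1.0 / (vmax - vmin))

print(normalize([0.0, 5.0, 10.0], 0.0, 10.0))         # [0.  0.5 1. ]
print(normalize([-5.0, 15.0], 0.0, 10.0, clip=True))  # [0. 1.]
```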
class LogNorm(Normalize):
"""
Normalize a given value to the 0-1 range on a log scale
"""
def __call__(self, value, clip=None):
if clip is None:
clip = self.clip
if cbook.iterable(value):
vtype = 'array'
val = ma.asarray(value).astype(np.float)
else:
vtype = 'scalar'
val = ma.array([value]).astype(np.float)
self.autoscale_None(val)
vmin, vmax = self.vmin, self.vmax
if vmin > vmax:
raise ValueError("minvalue must be less than or equal to maxvalue")
elif vmin<=0:
raise ValueError("values must all be positive")
elif vmin==vmax:
return 0.0 * val
else:
if clip:
mask = ma.getmask(val)
val = ma.array(np.clip(val.filled(vmax), vmin, vmax),
mask=mask)
result = (ma.log(val)-np.log(vmin))/(np.log(vmax)-np.log(vmin))
if vtype == 'scalar':
result = result[0]
return result
def inverse(self, value):
if not self.scaled():
raise ValueError("Not invertible until scaled")
vmin, vmax = self.vmin, self.vmax
if cbook.iterable(value):
val = ma.asarray(value)
return vmin * ma.power((vmax/vmin), val)
else:
return vmin * pow((vmax/vmin), value)
class BoundaryNorm(Normalize):
'''
Generate a colormap index based on discrete intervals.
Unlike :class:`Normalize` or :class:`LogNorm`,
:class:`BoundaryNorm` maps values to integers instead of to the
interval 0-1.
Mapping to the 0-1 interval could have been done via
piece-wise linear interpolation, but using integers seems
simpler, and reduces the number of conversions back and forth
between integer and floating point.
'''
def __init__(self, boundaries, ncolors, clip=False):
'''
*boundaries*
a monotonically increasing sequence
*ncolors*
number of colors in the colormap to be used
If::
b[i] <= v < b[i+1]
then v is mapped to color j;
as i varies from 0 to len(boundaries)-2,
j goes from 0 to ncolors-1.
Out-of-range values are mapped to -1 if low and ncolors
if high; these are converted to valid indices by
:meth:`Colormap.__call__` .
'''
self.clip = clip
self.vmin = boundaries[0]
self.vmax = boundaries[-1]
self.boundaries = np.asarray(boundaries)
self.N = len(self.boundaries)
self.Ncmap = ncolors
if self.N-1 == self.Ncmap:
self._interp = False
else:
self._interp = True
def __call__(self, x, clip=None):
if clip is None:
clip = self.clip
x = ma.asarray(x)
mask = ma.getmaskarray(x)
xx = x.filled(self.vmax+1)
if clip:
xx = np.clip(xx, self.vmin, self.vmax)
iret = np.zeros(x.shape, dtype=np.int16)
for i, b in enumerate(self.boundaries):
iret[xx>=b] = i
if self._interp:
iret = (iret * (float(self.Ncmap-1)/(self.N-2))).astype(np.int16)
iret[xx<self.vmin] = -1
iret[xx>=self.vmax] = self.Ncmap
ret = ma.array(iret, mask=mask)
if ret.shape == () and not mask:
ret = int(ret) # assume python scalar
return ret
def inverse(self, value):
raise ValueError("BoundaryNorm is not invertible")
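The docstring's `b[i] <= v < b[i+1]` rule is what `np.searchsorted` computes directly. A minimal sketch of the no-interpolation case (`ncolors == len(boundaries) - 1`), with out-of-range values mapped to -1 and `ncolors` as described above:

```python
# Boundary-to-index mapping via searchsorted (numpy assumed available).
import numpy as np

def boundary_index(values, boundaries, ncolors):
    b = np.asarray(boundaries)
    idx = np.searchsorted(b, values, side='right') - 1
    idx[values < b[0]] = -1         # low out-of-range
    idx[values >= b[-1]] = ncolors  # high out-of-range
    return idx

idx = boundary_index(np.array([-1.0, 0.5, 1.5, 3.0]), [0, 1, 2], ncolors=2)
print(idx)          # [-1  0  1  2]
```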
class NoNorm(Normalize):
'''
Dummy replacement for Normalize, for the case where we
want to use indices directly in a
:class:`~matplotlib.cm.ScalarMappable` .
'''
def __call__(self, value, clip=None):
return value
def inverse(self, value):
return value
# compatibility with earlier class names that violated convention:
normalize = Normalize
no_norm = NoNorm
| agpl-3.0 |
petosegan/scikit-learn | examples/model_selection/plot_roc_crossval.py | 247 | 3253 | """
=============================================================
Receiver Operating Characteristic (ROC) with cross validation
=============================================================
Example of Receiver Operating Characteristic (ROC) metric to evaluate
classifier output quality using cross-validation.
ROC curves typically feature true positive rate on the Y axis, and false
positive rate on the X axis. This means that the top left corner of the plot is
the "ideal" point - a false positive rate of zero, and a true positive rate of
one. This is not very realistic, but it does mean that a larger area under the
curve (AUC) is usually better.
The "steepness" of ROC curves is also important, since it is ideal to maximize
the true positive rate while minimizing the false positive rate.
This example shows the ROC response of different datasets, created from K-fold
cross-validation. Taking all of these curves, it is possible to calculate the
mean area under curve, and see the variance of the curve when the
training set is split into different subsets. This roughly shows how the
classifier output is affected by changes in the training data, and how
different the splits generated by K-fold cross-validation are from one another.
.. note::
See also :func:`sklearn.metrics.auc_score`,
:func:`sklearn.cross_validation.cross_val_score`,
:ref:`example_model_selection_plot_roc.py`,
"""
print(__doc__)
import numpy as np
from scipy import interp
import matplotlib.pyplot as plt
from sklearn import svm, datasets
from sklearn.metrics import roc_curve, auc
from sklearn.cross_validation import StratifiedKFold
###############################################################################
# Data IO and generation
# import some data to play with
iris = datasets.load_iris()
X = iris.data
y = iris.target
X, y = X[y != 2], y[y != 2]
n_samples, n_features = X.shape
# Add noisy features
random_state = np.random.RandomState(0)
X = np.c_[X, random_state.randn(n_samples, 200 * n_features)]
###############################################################################
# Classification and ROC analysis
# Run classifier with cross-validation and plot ROC curves
cv = StratifiedKFold(y, n_folds=6)
classifier = svm.SVC(kernel='linear', probability=True,
random_state=random_state)
mean_tpr = 0.0
mean_fpr = np.linspace(0, 1, 100)
all_tpr = []
for i, (train, test) in enumerate(cv):
probas_ = classifier.fit(X[train], y[train]).predict_proba(X[test])
# Compute ROC curve and area the curve
fpr, tpr, thresholds = roc_curve(y[test], probas_[:, 1])
mean_tpr += interp(mean_fpr, fpr, tpr)
mean_tpr[0] = 0.0
roc_auc = auc(fpr, tpr)
plt.plot(fpr, tpr, lw=1, label='ROC fold %d (area = %0.2f)' % (i, roc_auc))
plt.plot([0, 1], [0, 1], '--', color=(0.6, 0.6, 0.6), label='Luck')
mean_tpr /= len(cv)
mean_tpr[-1] = 1.0
mean_auc = auc(mean_fpr, mean_tpr)
plt.plot(mean_fpr, mean_tpr, 'k--',
label='Mean ROC (area = %0.2f)' % mean_auc, lw=2)
plt.xlim([-0.05, 1.05])
plt.ylim([-0.05, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic example')
plt.legend(loc="lower right")
plt.show()
| bsd-3-clause |
lewislone/mStocks | packets-analysis/lib/XlsxWriter-0.7.3/examples/pandas_chart_stock.py | 9 | 1931 | ##############################################################################
#
# An example of converting a Pandas dataframe with stock data taken from the
# web to an xlsx file with a line chart using Pandas and XlsxWriter.
#
# Copyright 2013-2015, John McNamara, jmcnamara@cpan.org
#
import pandas as pd
import pandas.io.data as web  # note: later pandas versions moved this into the pandas_datareader package
# Create some sample data to plot.
all_data = {}
for ticker in ['AAPL', 'GOOGL', 'IBM', 'YHOO', 'MSFT']:
all_data[ticker] = web.get_data_yahoo(ticker, '5/1/2014', '5/1/2015')
# Create a Pandas dataframe from the data.
df = pd.DataFrame({tic: data['Adj Close']
for tic, data in all_data.items()})
# Create a Pandas Excel writer using XlsxWriter as the engine.
sheet_name = 'Sheet1'
writer = pd.ExcelWriter('pandas_chart_stock.xlsx', engine='xlsxwriter')
df.to_excel(writer, sheet_name=sheet_name)
# Access the XlsxWriter workbook and worksheet objects from the dataframe.
workbook = writer.book
worksheet = writer.sheets[sheet_name]
# Adjust the width of the first column to make the date values clearer.
worksheet.set_column('A:A', 20)
# Create a chart object.
chart = workbook.add_chart({'type': 'line'})
# Configure the series of the chart from the dataframe data.
max_row = len(df) + 1
for i in range(len(['AAPL', 'GOOGL'])):
col = i + 1
chart.add_series({
'name': ['Sheet1', 0, col],
'categories': ['Sheet1', 2, 0, max_row, 0],
'values': ['Sheet1', 2, col, max_row, col],
'line': {'width': 1.00},
})
# Configure the chart axes.
chart.set_x_axis({'name': 'Date', 'date_axis': True})
chart.set_y_axis({'name': 'Price', 'major_gridlines': {'visible': False}})
# Position the legend at the top of the chart.
chart.set_legend({'position': 'top'})
# Insert the chart into the worksheet.
worksheet.insert_chart('H2', chart)
# Close the Pandas Excel writer and output the Excel file.
writer.save()
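The `add_series` ranges above use XlsxWriter's `[sheet, row, col]` list syntax, while `set_column('A:A', ...)` and `insert_chart('H2', ...)` use A1 notation; XlsxWriter accepts both. The conversion is mechanical — a small sketch (the helper name is mine; XlsxWriter ships its own `xl_rowcol_to_cell` utility for this):

```python
def rowcol_to_a1(row, col):
    """0-indexed (row, col) -> A1-style reference, e.g. (1, 7) -> 'H2'."""
    letters = ""
    col += 1
    while col:
        col, rem = divmod(col - 1, 26)
        letters = chr(ord("A") + rem) + letters
    return "%s%d" % (letters, row + 1)

print(rowcol_to_a1(1, 7))   # → H2, the chart anchor used above
print(rowcol_to_a1(1, 27))  # → AB2
```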
# license: mit
# --- twotwo/tools-python :: pandas-sample/save-stock-info.py ---
# fetch remote data to local excel: AAPL.xls/MSFT.xls
# https://github.com/pydata/pandas-datareader/blob/master/pandas_datareader/data.py
import datetime
import os
import pandas as pd
import pandas_datareader.data as web
import sys
import warnings
if not sys.warnoptions:
warnings.simplefilter("ignore")
warnings.filterwarnings("ignore", category=FutureWarning)
print('use export PYTHONWARNINGS="ignore" to disable warning')
start = datetime.datetime(2018, 1, 1)
end = datetime.date.today()
if os.path.exists('data/AAPL.xls'):
print('data/AAPL.xls exist')
else:
apple = web.DataReader("AAPL", "yahoo", start, end)
# pandas.core.frame.DataFrame
print(f"type(apple)={type(apple)}")
stocks = ['AAPL', "GOOG", 'MSFT']
for stock in stocks:
if os.path.exists(f'./data/{stock}.xls'):
print(f'./data/{stock}.xls exist')
continue
# save to excel
print(f"saving {stock}.xls ...")
web.DataReader(stock, 'yahoo', start, end).to_excel(
f'./data/{stock}.xls')
# https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_excel.html
# index_col: int, default None. Column (0-indexed) to use as the row labels.
apple = pd.read_excel("./data/AAPL.xls", index_col=0)
ms = pd.read_excel("./data/MSFT.xls", index_col=0)
print(f"\n=== head of stock ===\n{apple.head()}\n")
print(f"\n=== index of stock ===\n{apple.index}\n")
print(f"=== apple.describe ===\n{apple.describe()}")
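The `os.path.exists` guards above implement a simple download-once cache. The same pattern, factored into a reusable helper (names are mine, a sketch rather than part of the script):

```python
import os
import tempfile

def fetch_if_missing(path, fetch):
    """Write fetch()'s result to path only when no cached copy exists yet."""
    if os.path.exists(path):
        return False              # cached copy reused, nothing downloaded
    with open(path, "w") as fh:
        fh.write(fetch())
    return True                   # freshly fetched and saved

target = os.path.join(tempfile.mkdtemp(), "AAPL.txt")
print(fetch_if_missing(target, lambda: "fake quote data"))  # → True
print(fetch_if_missing(target, lambda: "fake quote data"))  # → False
```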
# license: mit
# --- rickdberg/mgmodel :: bottom_temp_vs_depth_estimator.py ---
# -*- coding: utf-8 -*-
"""
Created on Fri Mar 10 12:42:04 2017
@author: rickdberg
Data explorer
"""
import pandas as pd
import numpy as np
from sqlalchemy import create_engine
import matplotlib.pyplot as plt
from scipy import stats
engine = create_engine("mysql://root:neogene227@localhost/iodp_compiled")
# Load metadata
sql = "SELECT * FROM metadata_mg_flux;"
metadata = pd.read_sql(sql, engine)
# Load site data
sql = "SELECT * FROM site_info;"
sitedata = pd.read_sql(sql, engine)
# Load hole data
sql = "SELECT * FROM summary_all;"
holedata = pd.read_sql(sql, engine)
# Group and average hole data for sites
hole_grouped = holedata.loc[:,('site_key', 'lat','lon','water_depth','total_penetration')].groupby("site_key").mean().reset_index()
# Combine all tables
site_meta_data = pd.merge(metadata, sitedata, how='outer', on=('site_key', 'leg', 'site'))
data = pd.merge(site_meta_data, hole_grouped, how='outer', on=('site_key')).fillna(np.nan)
data = data[data['leg'] != '161'] # Mediterranean
data = data[data['leg'] != '160'] # Mediterranean
data = data[data['site'] != '768'] # Sulu sea
data = data[data['site'] != '769'] # Sulu Sea
data = data[data['water_depth'].notnull()]
data = data[data['bottom_water_temp'].notnull()]
deep_data = data[data['water_depth'] > 1500]
shallow_data = data[data['water_depth'] <= 1500]
# Plot deep data
plt.scatter(deep_data['bottom_water_temp'], deep_data['water_depth'], s=abs(deep_data['lat']), c=deep_data['bottom_water_temp'])
deep_mean = np.mean(deep_data['bottom_water_temp'])
deep_med = np.median(deep_data['bottom_water_temp'])
deep_stdev = np.std(deep_data['bottom_water_temp'])
# Plot shallow data and fit linear curve
plt.scatter(shallow_data['bottom_water_temp'], shallow_data['water_depth'], s=abs(shallow_data['lat']), c=shallow_data['bottom_water_temp'])
[slope, intercept, r, p, std] = stats.linregress(shallow_data['water_depth'], shallow_data['bottom_water_temp'])
y = np.linspace(0, 1500)
plt.plot(slope*y+intercept, y, 'k-')
# Error calculated as root mean squared error of curve fit to reported values
def rmse(model_values, measured_values):
return np.sqrt(((model_values-measured_values)**2).mean())
shallow_rmse = rmse((slope*shallow_data['water_depth']+intercept), shallow_data['bottom_water_temp'])
# shallow_rmse = pd.Series.std((slope*shallow_data['water_depth']+intercept) - shallow_data['bottom_water_temp'])
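The `rmse` helper above leans on pandas/NumPy vectorisation; a pure-Python restatement (function name is mine) makes the formula explicit and easy to sanity-check:

```python
import math

def rmse_plain(model_values, measured_values):
    """Root mean squared error: sqrt(mean((model - measured)^2))."""
    n = len(model_values)
    return math.sqrt(sum((m - o) ** 2
                         for m, o in zip(model_values, measured_values)) / n)

# A constant offset of 2 everywhere gives an RMSE of exactly 2.
print(rmse_plain([3.0, 4.0, 5.0], [1.0, 2.0, 3.0]))  # → 2.0
```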
# Plot all data
plt.scatter(data['bottom_water_temp'], data['water_depth'], s=abs(data['lat']), c=data['bottom_water_temp'])
plt.plot(slope*y+intercept, y, 'k-')
# Plot histogram of results
plt.hist(deep_data['bottom_water_temp'], density=True, bins=30, facecolor='orange')  # 'normed' was removed in Matplotlib 3.x
plt.show()
# eof
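`stats.linregress` above returns the ordinary least-squares slope and intercept. The closed-form solution is short enough to restate as a sketch (function name is mine; this is not a SciPy replacement — it skips r, p, and the standard error):

```python
def ols_fit(x, y):
    """Least-squares slope and intercept for y ≈ slope*x + intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

print(ols_fit([0.0, 1.0, 2.0], [1.0, 3.0, 5.0]))  # → (2.0, 1.0)
```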
# license: mit
# --- afgaron/rgz-analysis :: python/processing.py ---
import logging, time
from astropy import coordinates as coord, units as u
import mechanize, httplib, StringIO
import astroquery, requests
from astroquery.irsa import Irsa
import numpy as np
import pandas as pd
import itertools
from astropy.cosmology import Planck13 as cosmo
#custom modules for the RGZ catalog
import catalog_functions as fn #contains miscellaneous helper functions
import contour_node as c #contains Node class
def getWISE(entry):
'''
get IR data from AllWISE Source Catalog
attempts to query Irsa 5 times; if they keep failing, abort
returns updated entry
'''
ir_pos = coord.SkyCoord(entry['consensus']['ir_ra'], entry['consensus']['ir_dec'], unit=(u.deg,u.deg), frame='icrs')
tryCount = 0
while(True): #in case of error, wait 10 sec and try again; give up after 5 tries
tryCount += 1
try:
table = Irsa.query_region(ir_pos, catalog='allwise_p3as_psd', radius=3.*u.arcsec)
break
except (astroquery.exceptions.TimeoutError, astroquery.exceptions.TableParseError) as e:
if tryCount>5:
message = 'Unable to connect to IRSA; trying again in 10 min'
logging.exception(message)
print message
raise fn.DataAccessError(message)
logging.exception(e)
time.sleep(10)
except Exception as e:
if 'Query failed' in str(e) or 'timed out' in str(e):
if tryCount>5:
message = 'Unable to connect to IRSA; trying again in 10 min'
logging.exception(message)
print message
raise fn.DataAccessError(message)
logging.exception(e)
time.sleep(10)
else:
raise
if len(table):
number_matches = 0
if table[0]['w1snr']>5:
match = table[0]
dist = match['dist']
number_matches += 1
else:
match = None
dist = np.inf
if len(table)>1:
for row in table:
if row['dist']<dist and row['w1snr']>5:
match = row
dist = match['dist']
number_matches += 1
if match:
wise_match = {'designation':'WISEA'+match['designation'], 'ra':match['ra'], 'dec':match['dec'], \
'number_matches':np.int16(number_matches), \
'w1mpro':match['w1mpro'], 'w1sigmpro':match['w1sigmpro'], 'w1snr':match['w1snr'], \
'w2mpro':match['w2mpro'], 'w2sigmpro':match['w2sigmpro'], 'w2snr':match['w2snr'], \
'w3mpro':match['w3mpro'], 'w3sigmpro':match['w3sigmpro'], 'w3snr':match['w3snr'], \
'w4mpro':match['w4mpro'], 'w4sigmpro':match['w4sigmpro'], 'w4snr':match['w4snr']}
else:
wise_match = None
else:
wise_match = None
if wise_match:
logging.info('AllWISE match found')
for key in wise_match.keys():
if wise_match[key] is np.ma.masked:
wise_match.pop(key)
elif wise_match[key] and type(wise_match[key]) is not str:
wise_match[key] = wise_match[key].item()
elif wise_match[key] == 0:
wise_match[key] = 0
else:
logging.info('No AllWISE match found')
return wise_match
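The match-selection logic above reduces to "nearest catalog row whose W1 SNR clears 5". A compact, table-agnostic restatement of that rule (generic names of my own choosing; it drops the `number_matches` bookkeeping):

```python
def best_match(rows, snr_key="snr", dist_key="dist", snr_min=5):
    """Return the nearest row whose SNR clears the threshold, or None."""
    candidates = [r for r in rows if r[snr_key] > snr_min]
    return min(candidates, key=lambda r: r[dist_key]) if candidates else None

rows = [{"dist": 2.0, "snr": 3}, {"dist": 2.5, "snr": 9}, {"dist": 0.5, "snr": 7}]
print(best_match(rows))  # → {'dist': 0.5, 'snr': 7}
print(best_match([{"dist": 1.0, "snr": 2}]))  # → None (no row passes the SNR cut)
```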
def SDSS_select(sql):
'''pass an SQL query to SDSS and return a pandas dataframe
in case of error, wait 10 seconds and try again; give up after 5 tries'''
br = mechanize.Browser()
br.set_handle_robots(False)
tryCount = 0
while(True):
tryCount += 1
try:
br.open('http://skyserver.sdss.org/dr13/en/tools/search/sql.aspx', timeout=4)
br.select_form(name='sql')
br['cmd'] = sql
br['format'] = ['csv']
response = br.submit()
file_like = StringIO.StringIO(response.get_data())
df = pd.read_csv(file_like, skiprows=1)
break
except (mechanize.URLError, mechanize.HTTPError, httplib.BadStatusLine, pd.parser.CParserError) as e:
if tryCount>5:
message = 'Unable to connect to SkyServer; trying again in 10 min'
logging.exception(message)
print message
raise fn.DataAccessError(message)
logging.exception(e)
time.sleep(10)
return df
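Both `getWISE` and `SDSS_select` repeat the same try-count-and-sleep scaffolding. It can be factored into one generic helper — a sketch with names of my own choosing (shown in Python 3 style, unlike the Python 2 code above):

```python
import time

def retry(fn, tries=5, delay=0, exceptions=(Exception,)):
    """Call fn() up to `tries` times, sleeping `delay` seconds between failures."""
    for attempt in range(1, tries + 1):
        try:
            return fn()
        except exceptions:
            if attempt == tries:
                raise           # out of attempts: re-raise the last error
            time.sleep(delay)

state = {"calls": 0}
def flaky():
    state["calls"] += 1
    if state["calls"] < 3:
        raise IOError("transient network hiccup")
    return "ok"

print(retry(flaky))  # → ok, after two silent failures
```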
def getSDSS(entry):
'''
get optical magnitude data from PhotoPrimary table in SDSS
if a positional match exists, get spectral class and zsp and uncertainty from SpecObj table
if match is extended, get zph and uncertainty from Photoz table and spectral lines from GalSpecLine table
'''
ir_pos = coord.SkyCoord(entry['consensus']['ir_ra'], entry['consensus']['ir_dec'], unit=(u.deg,u.deg), frame='icrs')
query = '''select objID, ra, dec, u, r, g, i, z, err_u, err_r, err_g, err_i, err_z,
case type when 3 then 'G'
when 6 then 'S'
else 'U' end as class
from PhotoPrimary
where (ra between %f-6./3600 and %f+6./3600) and (dec between %f-6./3600 and %f+6./3600)''' \
% (ir_pos.ra.deg, ir_pos.ra.deg, ir_pos.dec.deg, ir_pos.dec.deg)
df = SDSS_select(query)
if len(df):
number_matches = 0
match_pos = coord.SkyCoord(df.iloc[0]['ra'], df.iloc[0]['dec'], unit=(u.deg, u.deg))
temp_dist = ir_pos.separation(match_pos).arcsecond
if temp_dist<3.:
match = df.iloc[0]
dist = temp_dist
number_matches += 1
else:
match = None
dist = np.inf
if len(df)>1:
for i in range(len(df)):
match_pos = coord.SkyCoord(df.iloc[i]['ra'], df.iloc[i]['dec'], unit=(u.deg, u.deg))
temp_dist = ir_pos.separation(match_pos).arcsecond
if temp_dist<3. and temp_dist<dist:
match = df.iloc[i]
dist = temp_dist
number_matches += 1
if match is not None:
sdss_match = {'objID':df['objID'][match.name], 'ra':match['ra'], 'dec':match['dec'], 'number_matches':np.int16(number_matches), \
'morphological_class':match['class'], 'u':match['u'], 'r':match['r'], 'g':match['g'], 'i':match['i'], 'z':match['z'], \
'u_err':match['err_u'], 'r_err':match['err_r'], 'g_err':match['err_g'], 'i_err':match['err_i'], 'z_err':match['err_z']}
else:
sdss_match = None
else:
sdss_match = None
if sdss_match and sdss_match['morphological_class'] == 'G': #query the galaxy tables
query = '''select p.z as photo_redshift, p.zErr as photo_redshift_err, s.z as spec_redshift, s.zErr as spec_redshift_err,
oiii_5007_flux, oiii_5007_flux_err, h_beta_flux, h_beta_flux_err,
nii_6584_flux, nii_6584_flux_err, h_alpha_flux, h_alpha_flux_err, class
from Photoz as p
full outer join SpecObj as s on p.objID = s.bestObjID
full outer join GalSpecLine as g on s.specobjid = g.specobjid
where p.objID = %i''' % sdss_match['objID']
df = SDSS_select(query)
if len(df):
more_data = {}
if not np.isnan(df['spec_redshift'][0]):
more_data['spec_redshift'] = df['spec_redshift'][0]
more_data['spec_redshift_err'] = df['spec_redshift_err'][0]
if df['photo_redshift'][0] != -9999:
more_data['photo_redshift'] = df['photo_redshift'][0]
more_data['photo_redshift_err'] = df['photo_redshift_err'][0]
if type(df['class'][0]) is not np.float64 or not np.isnan(df['class'][0]):
more_data['spectral_class'] = df['class'][0][0]
for key in ['oiii_5007_flux', 'oiii_5007_flux_err', 'h_beta_flux', 'h_beta_flux_err', \
'nii_6584_flux', 'nii_6584_flux_err', 'h_alpha_flux', 'h_alpha_flux_err']:
if not np.isnan(df[key][0]):
more_data[key] = df[key][0]
sdss_match.update(more_data)
elif sdss_match and sdss_match['morphological_class'] == 'S': #query the star tables
query = '''select so.z as spec_redshift, so.zErr as spec_redshift_err, class
from Star as s
full outer join SpecObj as so on s.objID=so.bestObjID
where s.objID = %i''' % sdss_match['objID']
df = SDSS_select(query)
if len(df):
more_data = {}
if not np.isnan(df['spec_redshift'][0]):
more_data['spec_redshift'] = df['spec_redshift'][0]
more_data['spec_redshift_err'] = df['spec_redshift_err'][0]
if type(df['class'][0]) is not np.float64 or not np.isnan(df['class'][0]):
more_data['spectral_class'] = df['class'][0][0]
sdss_match.update(more_data)
if sdss_match:
logging.info('SDSS match found')
for key in sdss_match.keys():
if sdss_match[key] is None:
sdss_match.pop(key)
elif sdss_match[key] and type(sdss_match[key]) is not str:
sdss_match[key] = sdss_match[key].item()
elif sdss_match[key] == 0:
sdss_match[key] = 0
elif sdss_match[key] == -9999:
sdss_match.pop(key)
else:
logging.info('No SDSS match found')
return sdss_match
def getRadio(data, fits_loc, source):
'''
calculates all of the radio parameters from the fits file
data is a JSON object downloaded from the online RGZ interface
fits_loc is the fits file on the physical drive
'''
#create list of trees, each containing a contour and its contents
contourTrees = []
for contour, bbox in itertools.product(data['contours'], source['bbox']):
if fn.approx(contour[0]['bbox'][0], bbox[0]) and fn.approx(contour[0]['bbox'][1], bbox[1]) and \
fn.approx(contour[0]['bbox'][2], bbox[2]) and fn.approx(contour[0]['bbox'][3], bbox[3]):
tree = c.Node(contour=contour, fits_loc=fits_loc)
contourTrees.append(tree)
#get component fluxes and sizes
components = []
for tree in contourTrees:
bboxP = fn.bboxToDS9(fn.findBox(tree.value['arr']), tree.imgSize)[0] #bbox in DS9 coordinate pixels
bboxCornersRD = tree.w.wcs_pix2world( np.array( [[bboxP[0],bboxP[1]], [bboxP[2],bboxP[3]] ]), 1) #two opposite corners of bbox in ra and dec
raRange = [ min(bboxCornersRD[0][0], bboxCornersRD[1][0]), max(bboxCornersRD[0][0], bboxCornersRD[1][0]) ]
decRange = [ min(bboxCornersRD[0][1], bboxCornersRD[1][1]), max(bboxCornersRD[0][1], bboxCornersRD[1][1]) ]
pos1 = coord.SkyCoord(raRange[0], decRange[0], unit=(u.deg, u.deg))
pos2 = coord.SkyCoord(raRange[1], decRange[1], unit=(u.deg, u.deg))
extentArcsec = pos1.separation(pos2).arcsecond
solidAngleArcsec2 = tree.areaArcsec2
components.append({'flux':tree.fluxmJy, 'flux_err':tree.fluxErrmJy, 'angular_extent':extentArcsec, \
'solid_angle':solidAngleArcsec2, 'ra_range':raRange, 'dec_range':decRange})
#adds up total flux of all components
totalFluxmJy = 0
totalFluxErrmJy2 = 0
for component in components:
totalFluxmJy += component['flux']
totalFluxErrmJy2 += np.square(component['flux_err'])
totalFluxErrmJy = np.sqrt(totalFluxErrmJy2)
#finds total area enclosed by contours in arcminutes
totalSolidAngleArcsec2 = 0
for component in components:
totalSolidAngleArcsec2 += component['solid_angle']
#find maximum extent of component bboxes in arcseconds
maxAngularExtentArcsec = 0
if len(components)==1:
maxAngularExtentArcsec = components[0]['angular_extent']
else:
for i, j in itertools.combinations(range(len(components)), 2):
corners1 = np.array([ [components[i]['ra_range'][0], components[i]['dec_range'][0]], \
[components[i]['ra_range'][0], components[i]['dec_range'][1]], \
[components[i]['ra_range'][1], components[i]['dec_range'][0]], \
[components[i]['ra_range'][1], components[i]['dec_range'][1]] ])
corners2 = np.array([ [components[j]['ra_range'][0], components[j]['dec_range'][0]], \
[components[j]['ra_range'][0], components[j]['dec_range'][1]], \
[components[j]['ra_range'][1], components[j]['dec_range'][0]], \
[components[j]['ra_range'][1], components[j]['dec_range'][1]] ])
pos1 = coord.SkyCoord(corners1.T[0], corners1.T[1], unit=(u.deg, u.deg))
pos2 = coord.SkyCoord(corners2.T[0], corners2.T[1], unit=(u.deg, u.deg))
for c1, c2 in itertools.product(pos1, pos2):
angularExtentArcsec = c1.separation(c2).arcsecond
maxAngularExtentArcsec = max(np.append(angularExtentArcsec, maxAngularExtentArcsec))
#add all peaks up into single list
peakList = []
for tree in contourTrees:
for peak in tree.peaks:
peak.pop('x', None)
peak.pop('y', None)
peakList.append(peak)
peakFluxErrmJy = contourTrees[0].sigmamJy
#find center of radio source
raMin, raMax, decMin, decMax = np.inf, 0, np.inf, 0
for comp in components:
if comp['ra_range'][0] < raMin:
raMin = comp['ra_range'][0]
if comp['ra_range'][1] > raMax:
raMax = comp['ra_range'][1]
if comp['dec_range'][0] < decMin:
decMin = comp['dec_range'][0]
if comp['dec_range'][1] > decMax:
decMax = comp['dec_range'][1]
meanRa = (raMax+raMin)/2.
meanDec = (decMax+decMin)/2.
radio_data = {'radio':{'total_flux':totalFluxmJy, 'total_flux_err':totalFluxErrmJy, \
'outermost_level':data['contours'][0][0]['level']*1000, 'number_components':len(contourTrees), \
'number_peaks':len(peakList), 'max_angular_extent':maxAngularExtentArcsec, \
'total_solid_angle':totalSolidAngleArcsec2, 'peak_flux_err':peakFluxErrmJy, 'peaks':peakList, \
'components':components, 'ra':meanRa, 'dec':meanDec}}
return radio_data
def getPhysical(z, radio_data):
DAkpc = float(cosmo.angular_diameter_distance(z)/u.kpc) #angular diameter distance in kpc
DLm = float(cosmo.luminosity_distance(z)/u.m) #luminosity distance in m
maxPhysicalExtentKpc = DAkpc*radio_data['radio']['max_angular_extent']*np.pi/180/3600 #arcseconds to radians
totalCrossSectionKpc2 = np.square(DAkpc)*radio_data['radio']['total_solid_angle']*np.square(np.pi/180/3600) #arcseconds^2 to radians^2
totalLuminosityWHz = radio_data['radio']['total_flux']*1e-29*4*np.pi*np.square(DLm) #mJy to W/(m^2 Hz), kpc to m
totalLuminosityErrWHz = radio_data['radio']['total_flux_err']*1e-29*4*np.pi*np.square(DLm)
peakLuminosityErrWHz = radio_data['radio']['peak_flux_err']*1e-29*4*np.pi*np.square(DLm)
for component in radio_data['radio']['components']:
component['physical_extent'] = DAkpc*component['angular_extent']*np.pi/180/3600
component['cross_section'] = np.square(DAkpc)*component['solid_angle']*np.square(np.pi/180/3600)
component['luminosity'] = component['flux']*1e-29*4*np.pi*np.square(DLm)
component['luminosity_err'] = component['flux_err']*1e-29*4*np.pi*np.square(DLm)
for peak in radio_data['radio']['peaks']:
peak['luminosity'] = peak['flux']*1e-29*4*np.pi*np.square(DLm)
physical = {'max_physical_extent':maxPhysicalExtentKpc, 'total_cross_section':totalCrossSectionKpc2, \
'total_luminosity':totalLuminosityWHz, 'total_luminosity_err':totalLuminosityErrWHz, \
'peak_luminosity_err':peakLuminosityErrWHz}
return physical
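`getPhysical` converts a flux density S in mJy at luminosity distance D_L (metres) to a luminosity via L = S · 1e-29 · 4π · D_L², since 1 mJy = 1e-29 W m⁻² Hz⁻¹. A stdlib-only sanity check of that factor (function name is mine):

```python
import math

def luminosity_w_per_hz(flux_mjy, d_l_m):
    """Luminosity in W/Hz for a flux density in mJy at distance d_l_m metres."""
    return flux_mjy * 1e-29 * 4 * math.pi * d_l_m ** 2

# 1 mJy measured at 1 m must give exactly 4*pi*1e-29 W/Hz.
print(luminosity_w_per_hz(1.0, 1.0))
```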
# license: mit
# --- gokalpdemirci/momentum :: Readstq.py ---
# File: Readstq.py
import datetime
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR
from sklearn.svm import LinearSVR
class readSTQFormat:
def __init__(self, allFiles):
totalList = []
dates = []
self.adj_table = []
self.per_adj_table = []
values = [pd.read_csv(filename, usecols=[0,1,2,5,6]) for filename in allFiles]
pivotted_values = [df.pivot(index='Date', columns='Time') for df in values]
table = pd.concat(pivotted_values)
for (s,i) in table:
if s == 'Open' and i != '15:35:00':
del table[('Open', i)]
#print(table)
if table[('Open', '15:35:00')].isnull().values.any():
print('Readstq Warning: Opening values have non!')
table.dropna(inplace = True)
#table.fillna(-99999, inplace=True)
for (s,i) in table:
if s == 'Close':
table[('Close', i)] = 10000*(table[('Close', i)] - table[('Open', '15:35:00')])/table[('Open', '15:35:00')]
self.table = table
if __name__ == '__main__':
red1 = readSTQFormat(allFiles = [r'data\5min-03may2017-19may2017\us\nasdaq stocks\2\tsla.us.txt',
r'data\5min-22may2017-07june2017\us\nasdaq stocks\2\tsla.us.txt',
r'data\5min-08june2017-23june2017\us\nasdaq stocks\2\tsla.us.txt',
r'data\5min-26june2017-13july2017\us\nasdaq stocks\2\tsla.us.txt',
r'data\5min-13july2017-28july2017\us\nasdaq stocks\2\tsla.us.txt',
r'data\5min-31july2017-15august2017\us\nasdaq stocks\2\tsla.us.txt',
r'data\5min-17august2017-01september2017\us\nasdaq stocks\2\tsla.us.txt',
r'data\5min-05september2017-19september2017\us\nasdaq stocks\2\tsla.us.txt',
r'data\5min-21september2017-06october2017\us\nasdaq stocks\2\tsla.us.txt',
r'data\5min-09october2017-24october2017\us\nasdaq stocks\2\tsla.us.txt',
#starts with 14:35:00 instead of 15:35:00 r'data\5min-25october2017-09november2017\us\nasdaq stocks\2\tsla.us.txt',
])
# license: gpl-3.0
# --- ysekky/GPy :: travis_tests.py ---
#===============================================================================
# Copyright (c) 2015, Max Zwiessele
#
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# * Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# * Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# * Neither the name of GPy nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#===============================================================================
#!/usr/bin/env python
import matplotlib
matplotlib.use('agg')
import nose, warnings
with warnings.catch_warnings():
warnings.simplefilter("ignore")
nose.main('GPy', defaultTest='GPy/testing', argv=['', '--show-skipped'])
# license: bsd-3-clause
# --- treycausey/scikit-learn :: examples/tree/plot_tree_regression.py ---
"""
===================================================================
Decision Tree Regression
===================================================================
A 1D regression with decision tree.
The :ref:`decision trees <tree>` is
used to fit a sine curve with addition noisy observation. As a result, it
learns local linear regressions approximating the sine curve.
We can see that if the maximum depth of the tree (controlled by the
`max_depth` parameter) is set too high, the decision trees learn too fine
details of the training data and learn from the noise, i.e. they overfit.
"""
print(__doc__)
import numpy as np
# Create a random dataset
rng = np.random.RandomState(1)
X = np.sort(5 * rng.rand(80, 1), axis=0)
y = np.sin(X).ravel()
y[::5] += 3 * (0.5 - rng.rand(16))
# Fit regression model
from sklearn.tree import DecisionTreeRegressor
clf_1 = DecisionTreeRegressor(max_depth=2)
clf_2 = DecisionTreeRegressor(max_depth=5)
clf_1.fit(X, y)
clf_2.fit(X, y)
# Predict
X_test = np.arange(0.0, 5.0, 0.01)[:, np.newaxis]
y_1 = clf_1.predict(X_test)
y_2 = clf_2.predict(X_test)
# Plot the results
import pylab as pl
pl.figure()
pl.scatter(X, y, c="k", label="data")
pl.plot(X_test, y_1, c="g", label="max_depth=2", linewidth=2)
pl.plot(X_test, y_2, c="r", label="max_depth=5", linewidth=2)
pl.xlabel("data")
pl.ylabel("target")
pl.title("Decision Tree Regression")
pl.legend()
pl.show()
# license: bsd-3-clause
# --- louispotok/pandas :: pandas/tests/indexing/test_indexing.py ---
# -*- coding: utf-8 -*-
# pylint: disable-msg=W0612,E1101
""" test fancy indexing & misc """
import pytest
import weakref
from warnings import catch_warnings
from datetime import datetime
from pandas.core.dtypes.common import (
is_integer_dtype,
is_float_dtype)
from pandas.compat import range, lrange, lzip, StringIO
import numpy as np
import pandas as pd
from pandas.core.indexing import (_non_reducing_slice, _maybe_numeric_slice,
validate_indices)
from pandas import NaT, DataFrame, Index, Series, MultiIndex
import pandas.util.testing as tm
from pandas.compat import PY2
from pandas.tests.indexing.common import Base, _mklbl
# ------------------------------------------------------------------------
# Indexing test cases
class TestFancy(Base):
""" pure get/set item & fancy indexing """
def test_setitem_ndarray_1d(self):
# GH5508
# len of indexer vs length of the 1d ndarray
df = DataFrame(index=Index(lrange(1, 11)))
df['foo'] = np.zeros(10, dtype=np.float64)
df['bar'] = np.zeros(10, dtype=complex)  # np.complex was removed in NumPy 1.24
# invalid
def f():
df.loc[df.index[2:5], 'bar'] = np.array([2.33j, 1.23 + 0.1j,
2.2, 1.0])
pytest.raises(ValueError, f)
# valid
df.loc[df.index[2:6], 'bar'] = np.array([2.33j, 1.23 + 0.1j,
2.2, 1.0])
result = df.loc[df.index[2:6], 'bar']
expected = Series([2.33j, 1.23 + 0.1j, 2.2, 1.0], index=[3, 4, 5, 6],
name='bar')
tm.assert_series_equal(result, expected)
# dtype getting changed?
df = DataFrame(index=Index(lrange(1, 11)))
df['foo'] = np.zeros(10, dtype=np.float64)
df['bar'] = np.zeros(10, dtype=complex)  # np.complex was removed in NumPy 1.24
def f():
df[2:5] = np.arange(1, 4) * 1j
pytest.raises(ValueError, f)
def test_inf_upcast(self):
# GH 16957
# We should be able to use np.inf as a key
# np.inf should cause an index to convert to float
# Test with np.inf in rows
df = DataFrame(columns=[0])
df.loc[1] = 1
df.loc[2] = 2
df.loc[np.inf] = 3
# make sure we can look up the value
assert df.loc[np.inf, 0] == 3
result = df.index
expected = pd.Float64Index([1, 2, np.inf])
tm.assert_index_equal(result, expected)
# Test with np.inf in columns
df = DataFrame()
df.loc[0, 0] = 1
df.loc[1, 1] = 2
df.loc[0, np.inf] = 3
result = df.columns
expected = pd.Float64Index([0, 1, np.inf])
tm.assert_index_equal(result, expected)
def test_setitem_dtype_upcast(self):
# GH3216
df = DataFrame([{"a": 1}, {"a": 3, "b": 2}])
df['c'] = np.nan
assert df['c'].dtype == np.float64
df.loc[0, 'c'] = 'foo'
expected = DataFrame([{"a": 1, "c": 'foo'},
{"a": 3, "b": 2, "c": np.nan}])
tm.assert_frame_equal(df, expected)
# GH10280
df = DataFrame(np.arange(6, dtype='int64').reshape(2, 3),
index=list('ab'),
columns=['foo', 'bar', 'baz'])
for val in [3.14, 'wxyz']:
left = df.copy()
left.loc['a', 'bar'] = val
right = DataFrame([[0, val, 2], [3, 4, 5]], index=list('ab'),
columns=['foo', 'bar', 'baz'])
tm.assert_frame_equal(left, right)
assert is_integer_dtype(left['foo'])
assert is_integer_dtype(left['baz'])
left = DataFrame(np.arange(6, dtype='int64').reshape(2, 3) / 10.0,
index=list('ab'),
columns=['foo', 'bar', 'baz'])
left.loc['a', 'bar'] = 'wxyz'
right = DataFrame([[0, 'wxyz', .2], [.3, .4, .5]], index=list('ab'),
columns=['foo', 'bar', 'baz'])
tm.assert_frame_equal(left, right)
assert is_float_dtype(left['foo'])
assert is_float_dtype(left['baz'])
def test_dups_fancy_indexing(self):
# GH 3455
from pandas.util.testing import makeCustomDataframe as mkdf
df = mkdf(10, 3)
df.columns = ['a', 'a', 'b']
result = df[['b', 'a']].columns
expected = Index(['b', 'a', 'a'])
tm.assert_index_equal(result, expected)
# across dtypes
df = DataFrame([[1, 2, 1., 2., 3., 'foo', 'bar']],
columns=list('aaaaaaa'))
df.head()
str(df)
result = DataFrame([[1, 2, 1., 2., 3., 'foo', 'bar']])
result.columns = list('aaaaaaa')
# TODO(wesm): unused?
df_v = df.iloc[:, 4] # noqa
res_v = result.iloc[:, 4] # noqa
tm.assert_frame_equal(df, result)
# GH 3561, dups not in selected order
df = DataFrame(
{'test': [5, 7, 9, 11],
'test1': [4., 5, 6, 7],
'other': list('abcd')}, index=['A', 'A', 'B', 'C'])
rows = ['C', 'B']
expected = DataFrame(
{'test': [11, 9],
'test1': [7., 6],
'other': ['d', 'c']}, index=rows)
result = df.loc[rows]
tm.assert_frame_equal(result, expected)
result = df.loc[Index(rows)]
tm.assert_frame_equal(result, expected)
rows = ['C', 'B', 'E']
expected = DataFrame(
{'test': [11, 9, np.nan],
'test1': [7., 6, np.nan],
'other': ['d', 'c', np.nan]}, index=rows)
with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
result = df.loc[rows]
tm.assert_frame_equal(result, expected)
# see GH5553, make sure we use the right indexer
rows = ['F', 'G', 'H', 'C', 'B', 'E']
expected = DataFrame({'test': [np.nan, np.nan, np.nan, 11, 9, np.nan],
'test1': [np.nan, np.nan, np.nan, 7., 6, np.nan],
'other': [np.nan, np.nan, np.nan,
'd', 'c', np.nan]},
index=rows)
with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
result = df.loc[rows]
tm.assert_frame_equal(result, expected)
# List containing only missing label
dfnu = DataFrame(np.random.randn(5, 3), index=list('AABCD'))
with pytest.raises(KeyError):
dfnu.loc[['E']]
# ToDo: check_index_type can be True after GH 11497
# GH 4619; duplicate indexer with missing label
df = DataFrame({"A": [0, 1, 2]})
with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
result = df.loc[[0, 8, 0]]
expected = DataFrame({"A": [0, np.nan, 0]}, index=[0, 8, 0])
tm.assert_frame_equal(result, expected, check_index_type=False)
df = DataFrame({"A": list('abc')})
with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
result = df.loc[[0, 8, 0]]
expected = DataFrame({"A": ['a', np.nan, 'a']}, index=[0, 8, 0])
tm.assert_frame_equal(result, expected, check_index_type=False)
# non unique with non unique selector
df = DataFrame({'test': [5, 7, 9, 11]}, index=['A', 'A', 'B', 'C'])
expected = DataFrame(
{'test': [5, 7, 5, 7, np.nan]}, index=['A', 'A', 'A', 'A', 'E'])
with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
result = df.loc[['A', 'A', 'E']]
tm.assert_frame_equal(result, expected)
@pytest.mark.skipif(PY2,
reason="GH-20770. Py2 unreliable warnings catching.")
def test_dups_fancy_indexing2(self):
# GH 5835
# dups on index and missing values
df = DataFrame(
np.random.randn(5, 5), columns=['A', 'B', 'B', 'B', 'A'])
expected = pd.concat(
[df.loc[:, ['A', 'B']], DataFrame(np.nan, columns=['C'],
index=df.index)], axis=1)
with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
result = df.loc[:, ['A', 'B', 'C']]
tm.assert_frame_equal(result, expected)
# GH 6504, multi-axis indexing
df = DataFrame(np.random.randn(9, 2),
index=[1, 1, 1, 2, 2, 2, 3, 3, 3], columns=['a', 'b'])
expected = df.iloc[0:6]
result = df.loc[[1, 2]]
tm.assert_frame_equal(result, expected)
expected = df
result = df.loc[:, ['a', 'b']]
tm.assert_frame_equal(result, expected)
expected = df.iloc[0:6, :]
result = df.loc[[1, 2], ['a', 'b']]
tm.assert_frame_equal(result, expected)
def test_indexing_mixed_frame_bug(self):
# GH3492
df = DataFrame({'a': {1: 'aaa', 2: 'bbb', 3: 'ccc'},
'b': {1: 111, 2: 222, 3: 333}})
# this works, new column is created correctly
df['test'] = df['a'].apply(lambda x: '_' if x == 'aaa' else x)
# this does not work, ie column test is not changed
idx = df['test'] == '_'
temp = df.loc[idx, 'a'].apply(lambda x: '-----' if x == 'aaa' else x)
df.loc[idx, 'test'] = temp
assert df.iloc[0, 2] == '-----'
# if I look at df, then element [0,2] equals '_'. If instead I type
# df.ix[idx,'test'], I get '-----', finally by typing df.iloc[0,2] I
# get '_'.
def test_multitype_list_index_access(self):
# GH 10610
df = DataFrame(np.random.random((10, 5)),
columns=["a"] + [20, 21, 22, 23])
with pytest.raises(KeyError):
df[[22, 26, -8]]
assert df[21].shape[0] == df.shape[0]
def test_set_index_nan(self):
# GH 3586
df = DataFrame({'PRuid': {17: 'nonQC',
18: 'nonQC',
19: 'nonQC',
20: '10',
21: '11',
22: '12',
23: '13',
24: '24',
25: '35',
26: '46',
27: '47',
28: '48',
29: '59',
30: '10'},
'QC': {17: 0.0,
18: 0.0,
19: 0.0,
20: np.nan,
21: np.nan,
22: np.nan,
23: np.nan,
24: 1.0,
25: np.nan,
26: np.nan,
27: np.nan,
28: np.nan,
29: np.nan,
30: np.nan},
'data': {17: 7.9544899999999998,
18: 8.0142609999999994,
19: 7.8591520000000008,
20: 0.86140349999999999,
21: 0.87853110000000001,
22: 0.8427041999999999,
23: 0.78587700000000005,
24: 0.73062459999999996,
25: 0.81668560000000001,
26: 0.81927080000000008,
27: 0.80705009999999999,
28: 0.81440240000000008,
29: 0.80140849999999997,
30: 0.81307740000000006},
'year': {17: 2006,
18: 2007,
19: 2008,
20: 1985,
21: 1985,
22: 1985,
23: 1985,
24: 1985,
25: 1985,
26: 1985,
27: 1985,
28: 1985,
29: 1985,
30: 1986}}).reset_index()
result = df.set_index(['year', 'PRuid', 'QC']).reset_index().reindex(
columns=df.columns)
tm.assert_frame_equal(result, df)
def test_multi_nan_indexing(self):
# GH 3588
df = DataFrame({"a": ['R1', 'R2', np.nan, 'R4'],
'b': ["C1", "C2", "C3", "C4"],
"c": [10, 15, np.nan, 20]})
result = df.set_index(['a', 'b'], drop=False)
expected = DataFrame({"a": ['R1', 'R2', np.nan, 'R4'],
'b': ["C1", "C2", "C3", "C4"],
"c": [10, 15, np.nan, 20]},
index=[Index(['R1', 'R2', np.nan, 'R4'],
name='a'),
Index(['C1', 'C2', 'C3', 'C4'], name='b')])
tm.assert_frame_equal(result, expected)
def test_multi_assign(self):
# GH 3626, an assignment of a sub-df to a df
df = DataFrame({'FC': ['a', 'b', 'a', 'b', 'a', 'b'],
'PF': [0, 0, 0, 0, 1, 1],
'col1': lrange(6),
'col2': lrange(6, 12)})
df.iloc[1, 0] = np.nan
df2 = df.copy()
mask = ~df2.FC.isna()
cols = ['col1', 'col2']
dft = df2 * 2
dft.iloc[3, 3] = np.nan
expected = DataFrame({'FC': ['a', np.nan, 'a', 'b', 'a', 'b'],
'PF': [0, 0, 0, 0, 1, 1],
'col1': Series([0, 1, 4, 6, 8, 10]),
'col2': [12, 7, 16, np.nan, 20, 22]})
# frame on rhs
df2.loc[mask, cols] = dft.loc[mask, cols]
tm.assert_frame_equal(df2, expected)
df2.loc[mask, cols] = dft.loc[mask, cols]
tm.assert_frame_equal(df2, expected)
# with an ndarray on rhs
# coerces to float64 because values has float64 dtype
# GH 14001
expected = DataFrame({'FC': ['a', np.nan, 'a', 'b', 'a', 'b'],
'PF': [0, 0, 0, 0, 1, 1],
'col1': [0., 1., 4., 6., 8., 10.],
'col2': [12, 7, 16, np.nan, 20, 22]})
df2 = df.copy()
df2.loc[mask, cols] = dft.loc[mask, cols].values
tm.assert_frame_equal(df2, expected)
df2.loc[mask, cols] = dft.loc[mask, cols].values
tm.assert_frame_equal(df2, expected)
# broadcasting on the rhs is required
df = DataFrame(dict(A=[1, 2, 0, 0, 0], B=[0, 0, 0, 10, 11], C=[
0, 0, 0, 10, 11], D=[3, 4, 5, 6, 7]))
expected = df.copy()
mask = expected['A'] == 0
for col in ['A', 'B']:
expected.loc[mask, col] = df['D']
df.loc[df['A'] == 0, ['A', 'B']] = df['D']
tm.assert_frame_equal(df, expected)
def test_setitem_list(self):
# GH 6043
# ix with a list
df = DataFrame(index=[0, 1], columns=[0])
with catch_warnings(record=True):
df.ix[1, 0] = [1, 2, 3]
df.ix[1, 0] = [1, 2]
result = DataFrame(index=[0, 1], columns=[0])
with catch_warnings(record=True):
result.ix[1, 0] = [1, 2]
tm.assert_frame_equal(result, df)
# ix with an object
class TO(object):
def __init__(self, value):
self.value = value
def __str__(self):
return "[{0}]".format(self.value)
__repr__ = __str__
def __eq__(self, other):
return self.value == other.value
def view(self):
return self
df = DataFrame(index=[0, 1], columns=[0])
with catch_warnings(record=True):
df.ix[1, 0] = TO(1)
df.ix[1, 0] = TO(2)
result = DataFrame(index=[0, 1], columns=[0])
with catch_warnings(record=True):
result.ix[1, 0] = TO(2)
tm.assert_frame_equal(result, df)
# remains object dtype even after setting it back
df = DataFrame(index=[0, 1], columns=[0])
with catch_warnings(record=True):
df.ix[1, 0] = TO(1)
df.ix[1, 0] = np.nan
result = DataFrame(index=[0, 1], columns=[0])
tm.assert_frame_equal(result, df)
def test_string_slice(self):
# GH 14424
# string indexing against datetimelike with object
# dtype should properly raises KeyError
df = DataFrame([1], Index([pd.Timestamp('2011-01-01')], dtype=object))
assert df.index.is_all_dates
with pytest.raises(KeyError):
df['2011']
with pytest.raises(KeyError):
df.loc['2011', 0]
df = DataFrame()
assert not df.index.is_all_dates
with pytest.raises(KeyError):
df['2011']
with pytest.raises(KeyError):
df.loc['2011', 0]
def test_mi_access(self):
# GH 4145
data = """h1 main h3 sub h5
0 a A 1 A1 1
1 b B 2 B1 2
2 c B 3 A1 3
3 d A 4 B2 4
4 e A 5 B2 5
5 f B 6 A2 6
"""
df = pd.read_csv(StringIO(data), sep=r'\s+', index_col=0)
df2 = df.set_index(['main', 'sub']).T.sort_index(1)
index = Index(['h1', 'h3', 'h5'])
columns = MultiIndex.from_tuples([('A', 'A1')], names=['main', 'sub'])
expected = DataFrame([['a', 1, 1]], index=columns, columns=index).T
result = df2.loc[:, ('A', 'A1')]
tm.assert_frame_equal(result, expected)
result = df2[('A', 'A1')]
tm.assert_frame_equal(result, expected)
# GH 4146, not returning a block manager when selecting a unique index
# from a duplicate index
# as of 4879, this returns a Series (which is similar to what happens
# with a non-unique)
expected = Series(['a', 1, 1], index=['h1', 'h3', 'h5'], name='A1')
result = df2['A']['A1']
tm.assert_series_equal(result, expected)
# selecting a non_unique from the 2nd level
expected = DataFrame([['d', 4, 4], ['e', 5, 5]],
index=Index(['B2', 'B2'], name='sub'),
columns=['h1', 'h3', 'h5'], ).T
result = df2['A']['B2']
tm.assert_frame_equal(result, expected)
def test_astype_assignment(self):
# GH4312 (iloc)
df_orig = DataFrame([['1', '2', '3', '.4', 5, 6., 'foo']],
columns=list('ABCDEFG'))
df = df_orig.copy()
df.iloc[:, 0:2] = df.iloc[:, 0:2].astype(np.int64)
expected = DataFrame([[1, 2, '3', '.4', 5, 6., 'foo']],
columns=list('ABCDEFG'))
tm.assert_frame_equal(df, expected)
df = df_orig.copy()
df.iloc[:, 0:2] = df.iloc[:, 0:2]._convert(datetime=True, numeric=True)
expected = DataFrame([[1, 2, '3', '.4', 5, 6., 'foo']],
columns=list('ABCDEFG'))
tm.assert_frame_equal(df, expected)
# GH5702 (loc)
df = df_orig.copy()
df.loc[:, 'A'] = df.loc[:, 'A'].astype(np.int64)
expected = DataFrame([[1, '2', '3', '.4', 5, 6., 'foo']],
columns=list('ABCDEFG'))
tm.assert_frame_equal(df, expected)
df = df_orig.copy()
df.loc[:, ['B', 'C']] = df.loc[:, ['B', 'C']].astype(np.int64)
expected = DataFrame([['1', 2, 3, '.4', 5, 6., 'foo']],
columns=list('ABCDEFG'))
tm.assert_frame_equal(df, expected)
# full replacements / no nans
df = DataFrame({'A': [1., 2., 3., 4.]})
df.iloc[:, 0] = df['A'].astype(np.int64)
expected = DataFrame({'A': [1, 2, 3, 4]})
tm.assert_frame_equal(df, expected)
df = DataFrame({'A': [1., 2., 3., 4.]})
df.loc[:, 'A'] = df['A'].astype(np.int64)
expected = DataFrame({'A': [1, 2, 3, 4]})
tm.assert_frame_equal(df, expected)
def test_astype_assignment_with_dups(self):
# GH 4686
# assignment with dups that has a dtype change
cols = MultiIndex.from_tuples([('A', '1'), ('B', '1'), ('A', '2')])
df = DataFrame(np.arange(3).reshape((1, 3)),
columns=cols, dtype=object)
index = df.index.copy()
df['A'] = df['A'].astype(np.float64)
tm.assert_index_equal(df.index, index)
# TODO(wesm): unused variables
# result = df.get_dtype_counts().sort_index()
# expected = Series({'float64': 2, 'object': 1}).sort_index()
@pytest.mark.parametrize("index,val", [
(Index([0, 1, 2]), 2),
(Index([0, 1, '2']), '2'),
(Index([0, 1, 2, np.inf, 4]), 4),
(Index([0, 1, 2, np.nan, 4]), 4),
(Index([0, 1, 2, np.inf]), np.inf),
(Index([0, 1, 2, np.nan]), np.nan),
])
def test_index_contains(self, index, val):
assert val in index
@pytest.mark.parametrize("index,val", [
(Index([0, 1, 2]), '2'),
(Index([0, 1, '2']), 2),
(Index([0, 1, 2, np.inf]), 4),
(Index([0, 1, 2, np.nan]), 4),
(Index([0, 1, 2, np.inf]), np.nan),
(Index([0, 1, 2, np.nan]), np.inf),
# Checking if np.inf in Int64Index should not cause an OverflowError
# Related to GH 16957
(pd.Int64Index([0, 1, 2]), np.inf),
(pd.Int64Index([0, 1, 2]), np.nan),
(pd.UInt64Index([0, 1, 2]), np.inf),
(pd.UInt64Index([0, 1, 2]), np.nan),
])
def test_index_not_contains(self, index, val):
assert val not in index
def test_index_type_coercion(self):
with catch_warnings(record=True):
# GH 11836
# if we have an index type and set it with something that looks
# to numpy like the same, but actually is not
# (e.g. setting with a float or string '0')
# then we need to coerce to object
# integer indexes
for s in [Series(range(5)),
Series(range(5), index=range(1, 6))]:
assert s.index.is_integer()
for indexer in [lambda x: x.ix,
lambda x: x.loc,
lambda x: x]:
s2 = s.copy()
indexer(s2)[0.1] = 0
assert s2.index.is_floating()
assert indexer(s2)[0.1] == 0
s2 = s.copy()
indexer(s2)[0.0] = 0
exp = s.index
if 0 not in s:
exp = Index(s.index.tolist() + [0])
tm.assert_index_equal(s2.index, exp)
s2 = s.copy()
indexer(s2)['0'] = 0
assert s2.index.is_object()
for s in [Series(range(5), index=np.arange(5.))]:
assert s.index.is_floating()
for idxr in [lambda x: x.ix,
lambda x: x.loc,
lambda x: x]:
s2 = s.copy()
idxr(s2)[0.1] = 0
assert s2.index.is_floating()
assert idxr(s2)[0.1] == 0
s2 = s.copy()
idxr(s2)[0.0] = 0
tm.assert_index_equal(s2.index, s.index)
s2 = s.copy()
idxr(s2)['0'] = 0
assert s2.index.is_object()
class TestMisc(Base):
def test_indexer_caching(self):
# GH5727
# make sure that indexers are in the _internal_names_set
n = 1000001
arrays = [lrange(n), lrange(n)]
index = MultiIndex.from_tuples(lzip(*arrays))
s = Series(np.zeros(n), index=index)
str(s)
# setitem
expected = Series(np.ones(n), index=index)
s = Series(np.zeros(n), index=index)
s[s == 0] = 1
tm.assert_series_equal(s, expected)
def test_float_index_to_mixed(self):
df = DataFrame({0.0: np.random.rand(10), 1.0: np.random.rand(10)})
df['a'] = 10
tm.assert_frame_equal(DataFrame({0.0: df[0.0],
1.0: df[1.0],
'a': [10] * 10}),
df)
def test_float_index_non_scalar_assignment(self):
df = DataFrame({'a': [1, 2, 3], 'b': [3, 4, 5]}, index=[1., 2., 3.])
df.loc[df.index[:2]] = 1
expected = DataFrame({'a': [1, 1, 3], 'b': [1, 1, 5]}, index=df.index)
tm.assert_frame_equal(expected, df)
df = DataFrame({'a': [1, 2, 3], 'b': [3, 4, 5]}, index=[1., 2., 3.])
df2 = df.copy()
df.loc[df.index] = df.loc[df.index]
tm.assert_frame_equal(df, df2)
def test_float_index_at_iat(self):
s = Series([1, 2, 3], index=[0.1, 0.2, 0.3])
for el, item in s.iteritems():
assert s.at[el] == item
for i in range(len(s)):
assert s.iat[i] == i + 1
def test_rhs_alignment(self):
# GH8258, tests that both rows & columns are aligned to what is
# assigned to. covers both uniform data-type & multi-type cases
def run_tests(df, rhs, right):
# label, index, slice
r, i, s = list('bcd'), [1, 2, 3], slice(1, 4)
c, j, l = ['joe', 'jolie'], [1, 2], slice(1, 3)
left = df.copy()
left.loc[r, c] = rhs
tm.assert_frame_equal(left, right)
left = df.copy()
left.iloc[i, j] = rhs
tm.assert_frame_equal(left, right)
left = df.copy()
with catch_warnings(record=True):
left.ix[s, l] = rhs
tm.assert_frame_equal(left, right)
left = df.copy()
with catch_warnings(record=True):
left.ix[i, j] = rhs
tm.assert_frame_equal(left, right)
left = df.copy()
with catch_warnings(record=True):
left.ix[r, c] = rhs
tm.assert_frame_equal(left, right)
xs = np.arange(20).reshape(5, 4)
cols = ['jim', 'joe', 'jolie', 'joline']
df = DataFrame(xs, columns=cols, index=list('abcde'))
# right hand side; permute the indices and multiply by -2
rhs = -2 * df.iloc[3:0:-1, 2:0:-1]
# expected `right` result; just multiply by -2
right = df.copy()
right.iloc[1:4, 1:3] *= -2
# run tests with uniform dtypes
run_tests(df, rhs, right)
# make frames multi-type & re-run tests
for frame in [df, rhs, right]:
frame['joe'] = frame['joe'].astype('float64')
frame['jolie'] = frame['jolie'].map('@{0}'.format)
run_tests(df, rhs, right)
def test_str_label_slicing_with_negative_step(self):
SLC = pd.IndexSlice
def assert_slices_equivalent(l_slc, i_slc):
tm.assert_series_equal(s.loc[l_slc], s.iloc[i_slc])
if not idx.is_integer():
# For integer indices, ix and plain getitem are position-based.
tm.assert_series_equal(s[l_slc], s.iloc[i_slc])
tm.assert_series_equal(s.loc[l_slc], s.iloc[i_slc])
for idx in [_mklbl('A', 20), np.arange(20) + 100,
np.linspace(100, 150, 20)]:
idx = Index(idx)
s = Series(np.arange(20), index=idx)
assert_slices_equivalent(SLC[idx[9]::-1], SLC[9::-1])
assert_slices_equivalent(SLC[:idx[9]:-1], SLC[:8:-1])
assert_slices_equivalent(SLC[idx[13]:idx[9]:-1], SLC[13:8:-1])
assert_slices_equivalent(SLC[idx[9]:idx[13]:-1], SLC[:0])
def test_slice_with_zero_step_raises(self):
s = Series(np.arange(20), index=_mklbl('A', 20))
tm.assert_raises_regex(ValueError, 'slice step cannot be zero',
lambda: s[::0])
tm.assert_raises_regex(ValueError, 'slice step cannot be zero',
lambda: s.loc[::0])
with catch_warnings(record=True):
tm.assert_raises_regex(ValueError,
'slice step cannot be zero',
lambda: s.ix[::0])
def test_indexing_assignment_dict_already_exists(self):
df = DataFrame({'x': [1, 2, 6],
'y': [2, 2, 8],
'z': [-5, 0, 5]}).set_index('z')
expected = df.copy()
rhs = dict(x=9, y=99)
df.loc[5] = rhs
expected.loc[5] = [9, 99]
tm.assert_frame_equal(df, expected)
def test_indexing_dtypes_on_empty(self):
# Check that .iloc and .ix return correct dtypes GH9983
df = DataFrame({'a': [1, 2, 3], 'b': ['b', 'b2', 'b3']})
with catch_warnings(record=True):
df2 = df.ix[[], :]
assert df2.loc[:, 'a'].dtype == np.int64
tm.assert_series_equal(df2.loc[:, 'a'], df2.iloc[:, 0])
with catch_warnings(record=True):
tm.assert_series_equal(df2.loc[:, 'a'], df2.ix[:, 0])
def test_range_in_series_indexing(self):
# range can cause an indexing error
# GH 11652
for x in [5, 999999, 1000000]:
s = Series(index=range(x))
s.loc[range(1)] = 42
tm.assert_series_equal(s.loc[range(1)], Series(42.0, index=[0]))
s.loc[range(2)] = 43
tm.assert_series_equal(s.loc[range(2)], Series(43.0, index=[0, 1]))
def test_non_reducing_slice(self):
df = DataFrame([[0, 1], [2, 3]])
slices = [
# pd.IndexSlice[:, :],
pd.IndexSlice[:, 1],
pd.IndexSlice[1, :],
pd.IndexSlice[[1], [1]],
pd.IndexSlice[1, [1]],
pd.IndexSlice[[1], 1],
pd.IndexSlice[1],
pd.IndexSlice[1, 1],
slice(None, None, None),
[0, 1],
np.array([0, 1]),
Series([0, 1])
]
for slice_ in slices:
tslice_ = _non_reducing_slice(slice_)
assert isinstance(df.loc[tslice_], DataFrame)
def test_list_slice(self):
# like dataframe getitem
slices = [['A'], Series(['A']), np.array(['A'])]
df = DataFrame({'A': [1, 2], 'B': [3, 4]}, index=['A', 'B'])
expected = pd.IndexSlice[:, ['A']]
for subset in slices:
result = _non_reducing_slice(subset)
tm.assert_frame_equal(df.loc[result], df.loc[expected])
def test_maybe_numeric_slice(self):
df = DataFrame({'A': [1, 2], 'B': ['c', 'd'], 'C': [True, False]})
result = _maybe_numeric_slice(df, slice_=None)
expected = pd.IndexSlice[:, ['A']]
assert result == expected
result = _maybe_numeric_slice(df, None, include_bool=True)
expected = pd.IndexSlice[:, ['A', 'C']]
assert result == expected
result = _maybe_numeric_slice(df, [1])
expected = [1]
assert result == expected
def test_partial_boolean_frame_indexing(self):
# GH 17170
df = DataFrame(np.arange(9.).reshape(3, 3),
index=list('abc'), columns=list('ABC'))
index_df = DataFrame(1, index=list('ab'), columns=list('AB'))
result = df[index_df.notnull()]
expected = DataFrame(np.array([[0., 1., np.nan],
[3., 4., np.nan],
[np.nan] * 3]),
index=list('abc'),
columns=list('ABC'))
tm.assert_frame_equal(result, expected)
def test_no_reference_cycle(self):
df = DataFrame({'a': [0, 1], 'b': [2, 3]})
for name in ('loc', 'iloc', 'at', 'iat'):
getattr(df, name)
with catch_warnings(record=True):
getattr(df, 'ix')
wr = weakref.ref(df)
del df
assert wr() is None
class TestSeriesNoneCoercion(object):
EXPECTED_RESULTS = [
# For numeric series, we should coerce to NaN.
([1, 2, 3], [np.nan, 2, 3]),
([1.0, 2.0, 3.0], [np.nan, 2.0, 3.0]),
# For datetime series, we should coerce to NaT.
([datetime(2000, 1, 1), datetime(2000, 1, 2), datetime(2000, 1, 3)],
[NaT, datetime(2000, 1, 2), datetime(2000, 1, 3)]),
# For objects, we should preserve the None value.
(["foo", "bar", "baz"], [None, "bar", "baz"]),
]
def test_coercion_with_setitem(self):
for start_data, expected_result in self.EXPECTED_RESULTS:
start_series = Series(start_data)
start_series[0] = None
expected_series = Series(expected_result)
tm.assert_series_equal(start_series, expected_series)
def test_coercion_with_loc_setitem(self):
for start_data, expected_result in self.EXPECTED_RESULTS:
start_series = Series(start_data)
start_series.loc[0] = None
expected_series = Series(expected_result)
tm.assert_series_equal(start_series, expected_series)
def test_coercion_with_setitem_and_series(self):
for start_data, expected_result in self.EXPECTED_RESULTS:
start_series = Series(start_data)
start_series[start_series == start_series[0]] = None
expected_series = Series(expected_result)
tm.assert_series_equal(start_series, expected_series)
def test_coercion_with_loc_and_series(self):
for start_data, expected_result in self.EXPECTED_RESULTS:
start_series = Series(start_data)
start_series.loc[start_series == start_series[0]] = None
expected_series = Series(expected_result)
tm.assert_series_equal(start_series, expected_series)
class TestDataframeNoneCoercion(object):
EXPECTED_SINGLE_ROW_RESULTS = [
# For numeric series, we should coerce to NaN.
([1, 2, 3], [np.nan, 2, 3]),
([1.0, 2.0, 3.0], [np.nan, 2.0, 3.0]),
# For datetime series, we should coerce to NaT.
([datetime(2000, 1, 1), datetime(2000, 1, 2), datetime(2000, 1, 3)],
[NaT, datetime(2000, 1, 2), datetime(2000, 1, 3)]),
# For objects, we should preserve the None value.
(["foo", "bar", "baz"], [None, "bar", "baz"]),
]
def test_coercion_with_loc(self):
for start_data, expected_result in self.EXPECTED_SINGLE_ROW_RESULTS:
start_dataframe = DataFrame({'foo': start_data})
start_dataframe.loc[0, ['foo']] = None
expected_dataframe = DataFrame({'foo': expected_result})
tm.assert_frame_equal(start_dataframe, expected_dataframe)
def test_coercion_with_setitem_and_dataframe(self):
for start_data, expected_result in self.EXPECTED_SINGLE_ROW_RESULTS:
start_dataframe = DataFrame({'foo': start_data})
start_dataframe[start_dataframe['foo'] == start_dataframe['foo'][
0]] = None
expected_dataframe = DataFrame({'foo': expected_result})
tm.assert_frame_equal(start_dataframe, expected_dataframe)
def test_none_coercion_loc_and_dataframe(self):
for start_data, expected_result in self.EXPECTED_SINGLE_ROW_RESULTS:
start_dataframe = DataFrame({'foo': start_data})
start_dataframe.loc[start_dataframe['foo'] == start_dataframe[
'foo'][0]] = None
expected_dataframe = DataFrame({'foo': expected_result})
tm.assert_frame_equal(start_dataframe, expected_dataframe)
def test_none_coercion_mixed_dtypes(self):
start_dataframe = DataFrame({
'a': [1, 2, 3],
'b': [1.0, 2.0, 3.0],
'c': [datetime(2000, 1, 1), datetime(2000, 1, 2), datetime(2000, 1,
3)],
'd': ['a', 'b', 'c']
})
start_dataframe.iloc[0] = None
exp = DataFrame({'a': [np.nan, 2, 3],
'b': [np.nan, 2.0, 3.0],
'c': [NaT, datetime(2000, 1, 2),
datetime(2000, 1, 3)],
'd': [None, 'b', 'c']})
tm.assert_frame_equal(start_dataframe, exp)
def test_validate_indices_ok():
indices = np.asarray([0, 1])
validate_indices(indices, 2)
validate_indices(indices[:0], 0)
validate_indices(np.array([-1, -1]), 0)
def test_validate_indices_low():
indices = np.asarray([0, -2])
with tm.assert_raises_regex(ValueError, "'indices' contains"):
validate_indices(indices, 2)
def test_validate_indices_high():
indices = np.asarray([0, 1, 2])
with tm.assert_raises_regex(IndexError, "indices are out"):
validate_indices(indices, 2)
def test_validate_indices_empty():
with tm.assert_raises_regex(IndexError, "indices are out"):
validate_indices(np.array([0, 1]), 0)
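The None-coercion rules pinned down by the two test classes above mirror the underlying NumPy behaviour; a minimal sketch (NumPy only, no pandas required):

```python
import math

import numpy as np

# float dtype: assigning None coerces the element to NaN
a = np.array([1.0, 2.0, 3.0])
a[0] = None
assert math.isnan(a[0])

# object dtype: None is preserved as-is
b = np.array(["foo", "bar", "baz"], dtype=object)
b[0] = None
assert b[0] is None
```

Datetime columns follow the same pattern, with `NaT` standing in for the missing value.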
| bsd-3-clause |
q1ang/scikit-learn | examples/plot_isotonic_regression.py | 303 | 1767 | """
===================
Isotonic Regression
===================
An illustration of the isotonic regression on generated data. The
isotonic regression finds a non-decreasing approximation of a function
while minimizing the mean squared error on the training data. The benefit
of such a model is that it does not assume any form for the target
function such as linearity. For comparison a linear regression is also
presented.
"""
print(__doc__)
# Author: Nelle Varoquaux <nelle.varoquaux@gmail.com>
# Alexandre Gramfort <alexandre.gramfort@inria.fr>
# Licence: BSD
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.collections import LineCollection
from sklearn.linear_model import LinearRegression
from sklearn.isotonic import IsotonicRegression
from sklearn.utils import check_random_state
n = 100
x = np.arange(n)
rs = check_random_state(0)
y = rs.randint(-50, 50, size=(n,)) + 50. * np.log(1 + np.arange(n))
###############################################################################
# Fit IsotonicRegression and LinearRegression models
ir = IsotonicRegression()
y_ = ir.fit_transform(x, y)
lr = LinearRegression()
lr.fit(x[:, np.newaxis], y) # x needs to be 2d for LinearRegression
###############################################################################
# plot result
segments = [[[i, y[i]], [i, y_[i]]] for i in range(n)]
lc = LineCollection(segments, zorder=0)
lc.set_array(np.ones(len(y)))
lc.set_linewidths(0.5 * np.ones(n))
fig = plt.figure()
plt.plot(x, y, 'r.', markersize=12)
plt.plot(x, y_, 'g.-', markersize=12)
plt.plot(x, lr.predict(x[:, np.newaxis]), 'b-')
plt.gca().add_collection(lc)
plt.legend(('Data', 'Isotonic Fit', 'Linear Fit'), loc='lower right')
plt.title('Isotonic regression')
plt.show()
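A minimal pure-Python sketch of the pool-adjacent-violators idea behind the isotonic fit above (the name `pav` is ours, not a scikit-learn API; equal weights assumed):

```python
# Pool-adjacent-violators for a non-decreasing least-squares fit with
# equal weights: each block stores [sum, count]; adjacent blocks are
# merged while their means decrease, then each block mean is expanded.
def pav(y):
    blocks = []
    for v in y:
        blocks.append([float(v), 1])
        while len(blocks) > 1 and (blocks[-2][0] / blocks[-2][1]
                                   > blocks[-1][0] / blocks[-1][1]):
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c
    fitted = []
    for s, c in blocks:
        fitted.extend([s / c] * c)
    return fitted

print(pav([1, 3, 2]))     # -> [1.0, 2.5, 2.5]
print(pav([4, 3, 2, 1]))  # -> [2.5, 2.5, 2.5, 2.5]
```

`IsotonicRegression` adds weights, bounds, and increasing/decreasing options on top of this core idea.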
| bsd-3-clause |
WaveBlocks/WaveBlocks | src/scripts_spawn_na/PlotWavefunctionSpawn.py | 1 | 4334 | """The WaveBlocks Project
Plot the wavefunctions probability densities of the
spawned wavepackets.
@author: R. Bourquin
@copyright: Copyright (C) 2011 R. Bourquin
@license: Modified BSD License
"""
import sys
from numpy import angle, conj, real, imag
from matplotlib.pyplot import *
from WaveBlocks import PotentialFactory
from WaveBlocks import IOManager
from WaveBlocks.Plot import plotcf
import GraphicsDefaults as GD
def plot_frames(iom, gid, view=None, plotphase=False, plotcomponents=False, plotabssqr=True, imgsize=(12,9)):
"""Plot the wave function for a series of timesteps.
:param iom: An ``IOManager`` instance providing the spawning simulation data.
:param gid: The group ID of the data group we plot the frames.
:param view: The aspect ratio.
:param plotphase: Whether to plot the complex phase. (slow)
:param plotcomponents: Whether to plot the real/imaginary parts.
:param plotabssqr: Whether to plot the absolute value squared.
"""
parameters_s = iom.load_parameters()
grid = iom.load_grid(blockid="global")
# For each mother-child spawn try pair
bidm, bidc = iom.get_block_ids(groupid=gid)
timegrid = iom.load_wavefunction_timegrid(blockid=bidm)
for step in timegrid:
print(" Timestep # " + str(step))
# Retrieve spawn data for both packets
values = []
try:
# Load data of original packet
wave = iom.load_wavefunction(timestep=step, blockid=bidm)
values.append([ wave[j,...] for j in xrange(parameters_s["ncomponents"]) ])
# Load data of spawned packet
wave = iom.load_wavefunction(timestep=step, blockid=bidc)
values.append([ wave[j,...] for j in xrange(parameters_s["ncomponents"]) ])
have_spawn_data = True
except ValueError:
have_spawn_data = False
# Plot the probability densities projected to the eigenbasis
fig = figure(figsize=imgsize)
# Create a bunch of subplots
axes = []
for index in xrange(parameters_s["ncomponents"]):
ax = fig.add_subplot(parameters_s["ncomponents"],1,index+1)
ax.ticklabel_format(style="sci", scilimits=(0,0), axis="y")
axes.append(ax)
# Plot spawned Wavefunctions
if have_spawn_data is True:
# For all data blocks
for colind, values in enumerate(values):
# For all components of a packet
for index, component in enumerate(values):
# Plot the packet
if plotcomponents is True:
axes[index].plot(grid, real(component))
axes[index].plot(grid, imag(component))
axes[index].set_ylabel(r"$\Re \varphi_"+str(index)+r", \Im \varphi_"+str(index)+r"$")
if plotabssqr is True:
axes[index].plot(grid, component*conj(component), color=colors_mc[colind])
axes[index].set_ylabel(r"$\langle \varphi_"+str(index)+r"| \varphi_"+str(index)+r"\rangle$")
if plotphase is True:
plotcf(grid, angle(component), component*conj(component))
axes[index].set_ylabel(r"$\langle \varphi_"+str(index)+r"| \varphi_"+str(index)+r"\rangle$")
# Set the axis properties
for ax in axes:
ax.set_xlabel(r"$x$")
# Set the aspect window
if view is not None:
ax.set_xlim(view[:2])
ax.set_ylim(view[2:])
fig.suptitle(r"$\Psi$ at time $"+str(step*parameters_s["dt"])+r"$")
fig.savefig("wavefunction_spawned_group"+str(gid)+"_"+ (5-len(str(step)))*"0"+str(step) +GD.output_format)
close(fig)
print(" Plotting frames finished")
if __name__ == "__main__":
iom = IOManager()
# Read file with new simulation data
try:
iom.open_file(filename=sys.argv[1])
except IndexError:
iom.open_file()
# The axes rectangle that is plotted
view = [-8.5, 8.5, -0.01, 0.6]
# Colors for both mother and child packets
colors_mc = ["red", "orange"]
gids = iom.get_group_ids(exclude=["global"])
for gid in gids:
plot_frames(iom, gid, view=view)
iom.finalize()
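The quantity plotted above for each component is `component*conj(component)`; as a standalone NumPy sketch (the helper name is illustrative, not part of WaveBlocks):

```python
import numpy as np

def probability_density(psi):
    # |psi|^2 for a complex wavefunction sampled on a grid; real()
    # drops the zero imaginary part left over from psi * conj(psi)
    return np.real(psi * np.conj(psi))

psi = np.array([1 + 1j, 2 + 0j, 0 + 3j])
print(probability_density(psi))  # -> [2. 4. 9.]
```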
| bsd-3-clause |
nvoron23/scikit-learn | examples/ensemble/plot_gradient_boosting_regression.py | 227 | 2520 | """
============================
Gradient Boosting regression
============================
Demonstrate Gradient Boosting on the Boston housing dataset.
This example fits a Gradient Boosting model with least squares loss and
500 regression trees of depth 4.
"""
print(__doc__)
# Author: Peter Prettenhofer <peter.prettenhofer@gmail.com>
#
# License: BSD 3 clause
import numpy as np
import matplotlib.pyplot as plt
from sklearn import ensemble
from sklearn import datasets
from sklearn.utils import shuffle
from sklearn.metrics import mean_squared_error
###############################################################################
# Load data
boston = datasets.load_boston()
X, y = shuffle(boston.data, boston.target, random_state=13)
X = X.astype(np.float32)
offset = int(X.shape[0] * 0.9)
X_train, y_train = X[:offset], y[:offset]
X_test, y_test = X[offset:], y[offset:]
###############################################################################
# Fit regression model
params = {'n_estimators': 500, 'max_depth': 4, 'min_samples_split': 1,
'learning_rate': 0.01, 'loss': 'ls'}
clf = ensemble.GradientBoostingRegressor(**params)
clf.fit(X_train, y_train)
mse = mean_squared_error(y_test, clf.predict(X_test))
print("MSE: %.4f" % mse)
###############################################################################
# Plot training deviance
# compute test set deviance
test_score = np.zeros((params['n_estimators'],), dtype=np.float64)
for i, y_pred in enumerate(clf.staged_decision_function(X_test)):
test_score[i] = clf.loss_(y_test, y_pred)
plt.figure(figsize=(12, 6))
plt.subplot(1, 2, 1)
plt.title('Deviance')
plt.plot(np.arange(params['n_estimators']) + 1, clf.train_score_, 'b-',
label='Training Set Deviance')
plt.plot(np.arange(params['n_estimators']) + 1, test_score, 'r-',
label='Test Set Deviance')
plt.legend(loc='upper right')
plt.xlabel('Boosting Iterations')
plt.ylabel('Deviance')
###############################################################################
# Plot feature importance
feature_importance = clf.feature_importances_
# make importances relative to max importance
feature_importance = 100.0 * (feature_importance / feature_importance.max())
sorted_idx = np.argsort(feature_importance)
pos = np.arange(sorted_idx.shape[0]) + .5
plt.subplot(1, 2, 2)
plt.barh(pos, feature_importance[sorted_idx], align='center')
plt.yticks(pos, boston.feature_names[sorted_idx])
plt.xlabel('Relative Importance')
plt.title('Variable Importance')
plt.show()
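The residual-fitting loop behind the model above can be sketched with depth-1 stumps in pure Python (`stump_fit`/`gbr_fit` are our names for a toy version, not the scikit-learn implementation):

```python
def stump_fit(x, y):
    # depth-1 regression tree: best threshold split by SSE, leaf = mean
    best = None
    for t in sorted(set(x)):
        left = [yi for xi, yi in zip(x, y) if xi <= t]
        right = [yi for xi, yi in zip(x, y) if xi > t]
        if not right:
            continue
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((yi - ml) ** 2 for yi in left)
               + sum((yi - mr) ** 2 for yi in right))
        if best is None or sse < best[0]:
            best = (sse, t, ml, mr)
    _, t, ml, mr = best
    return lambda xi: ml if xi <= t else mr

def gbr_fit(x, y, n_estimators=50, learning_rate=0.1):
    # start from the mean, then repeatedly fit a stump to the residuals
    f0 = sum(y) / len(y)
    stumps = []
    pred = [f0] * len(y)
    for _ in range(n_estimators):
        resid = [yi - pi for yi, pi in zip(y, pred)]
        stump = stump_fit(x, resid)
        stumps.append(stump)
        pred = [pi + learning_rate * stump(xi) for pi, xi in zip(pred, x)]
    return lambda xi: f0 + learning_rate * sum(s(xi) for s in stumps)
```

On a separable toy problem the residual shrinks by roughly `(1 - learning_rate)` per round, which is why the deviance curves in the example decay geometrically at first.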
| bsd-3-clause |
pereirapysensing/GeoPython_2017_3D | GeoPython_2017.py | 1 | 9004 | # -*- coding: utf-8 -*-
'''
Working with 3D point clouds with Python: GeoPython 2017, Basel - Switzerland
@author: Joao Paulo Pereira
University of Freiburg
Chair of Remote Sensing and Landscape Information Systems - FeLis
---------------------------------------
This material was developed specifically for the workshop "Working with
3D point clouds with Python" presented at the GeoPython 2017 conference.
The goal here is to present the basics of 3D point cloud processing using
the Python programming language. During this workshop, participants will
cover the following topics:
- where can one download free 3D point cloud data
- where 3D point clouds come from
- how to import LAS files and use it as a numpy array
- how to manipulate LAS files in python
- how to visualize 3D point clouds
- how to produce images from 3D point clouds
In case you wish to contact me, please send me an e-mail:
joao.pereira@felis.uni-freiburg.de
pereira.jpa00@gmail.com
###############################################################################
#################### ####################
#################### INSTRUCTIONS ####################
#################### ####################
###############################################################################
This script must be accompanied by the Auxiliary3DFunctions.py. If that is not
the case, this script will not work. In addition, lines starting with '#!!!'
were deactivated on purpose. If you wish to visualize the 3D point clouds,
please reactivate the lines by removing the '#!!!'.
Have fun =) !!!!
'''
# Import libraries
import os
os.chdir(r"F:\Joao\MEGA\Publications\GeoPython 2017\Workshop\GeoPython_2017_3D".replace("\\", "/"))
try:
import Auxiliary3DFunctions as af
except ImportError:
print """Auxiliary3DFunctions was not found. Please contact
João Paulo Pereira at joao.pereira@felis.uni-freiburg.de"""
import math
import numpy as np
import matplotlib.pyplot as plt
##############################################################################
### Create a laspy object
# This object is what you will use for the rest of the time. You can access
# las file attributes (e.g. Classification, return number, intensity)
las, x, y, z = af.import_las("data/points.las")
print 'Data type: {}'.format(type(las))
print 'Point cloud size: {} points'.format(len(x))
##############################################################################
### Now that we have the laspy object, we can access several attributes
# Let's take a look at the elevation distribution.
# Plotting the histogram for the elevation
af.plot_las(z,
num_bins=100,
color='red',
xlabel='Elevation (meters)',
ylabel='Frequency',
title='Elevation distribution in the LAS file')
###############################################################################
#### Let's see the point cloud to identify what is wrong
#
af.view_las(x, y, z, z, title='Elevation')
#
###############################################################################
#### Let's remove some of this noise first and then plot the histogram again.
#
## Remove all points with elevation equal to or above 100 meters
mask = np.where(z>=100)
xc, yc, zc = (np.delete(x, mask),
np.delete(y, mask),
np.delete(z, mask))
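The mask-and-delete pattern used throughout this script is worth isolating; a self-contained miniature:

```python
import numpy as np

z = np.array([5.0, 210.0, 7.0, 150.0])
mask = np.where(z >= 100)     # indices of outlier points
z_clean = np.delete(z, mask)  # keep only the points below the cutoff
print(z_clean)                # -> [5. 7.]
```

The same `mask` must be applied to x, y, and z together so the three coordinate arrays stay aligned.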
#
## Plotting the histogram for the elevation, now without the noise
af.plot_las(zc,
num_bins=100,
color='orange',
xlabel='Elevation (meters)',
ylabel='Frequency',
title='Elevation distribution without outlier in the LAS file')
#
## ##############################################################################
## ### Now, the point cloud should look fine
##
af.view_las(xc,yc,zc,zc, title='Elevation')
##
## ##############################################################################
## ### Now, we should see if our LiDAR data is classified...
##
## # Create a new variable with the classification information
las_class = np.array(las.classification, dtype='int8')
las_class = np.delete(las_class, mask) #Remove the noise points
print 'Smaller class (Unclassified):', min(las_class)
print 'Bigger class (Noise):', max(las_class)
##
## ##############################################################################
## ### Now, let's view the point cloud with the classification information
##
af.view_las(xc,yc, zc, las_class, title='Classification')
##
## ##############################################################################
## ### Prepare the data to generate images
## # Point density from the metadata: 24.41 points/m^2
##
## Get first returns for the Digital Surface Model (DSM)
first_mask = np.where(las.return_num!=1)
x_first, y_first, z_first = (np.delete(x, first_mask),
np.delete(y, first_mask),
np.delete(z, first_mask))
print(len(z_first))
#
z_first_mask = np.where(z_first>=100)
x_first, y_first, z_first = (np.delete(x_first, z_first_mask),
np.delete(y_first, z_first_mask),
np.delete(z_first, z_first_mask))
print(len(z_first))
##
## ##############################################################################
## # Plotting the histogram for the elevation, now without the noise
af.plot_las(z_first,
num_bins=100,
color='brown',
xlabel='Elevation (meters)',
ylabel='Frequency',
title='Elevation distribution from first returns')
##
##############################################################################
# Now, to calculate the DSM, we first need to find the correct spatial resolution
# using the point density information from the LAS file metadata (24.41 points/m^2)
ideal_resolution = round(math.sqrt(1./24.41) + 0.1,2)
# Then we apply the las2grid function to calculate the DSM
image = af.las2grid(x_first,
y_first,
z_first,
cell=ideal_resolution,
NODATA=0.0,
target=r'F:/geopython/dsm_python.asc')
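The resolution heuristic above can be checked in isolation. This standalone sketch reuses the 24.41 points/m² density quoted in the comments; the 0.1 m padding and 2-decimal rounding mirror the line above, and the function name is made up for the example:

```python
import math

def ideal_cell_size(point_density, margin=0.1):
    # Side length (metres) of a square cell expected to hold ~1 point,
    # padded by `margin` and rounded to 2 decimals.
    return round(math.sqrt(1.0 / point_density) + margin, 2)

print(ideal_cell_size(24.41))  # -> 0.3
```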
#
###############################################################################
# Let's work now with the digital terrain model (DTM).
#First we need to filter the point cloud and select only points with
# classification 2 (ground)
# creates mask for classification 2
ground_mask = np.where(las.classification==2)
#apply mask to data
x_ground, y_ground, z_ground = (x[ground_mask],
y[ground_mask],
z[ground_mask])
af.plot_las(z_ground,
num_bins=100,
color='grey',
xlabel='Elevation (meters)',
ylabel='Frequency',
title='Elevation distribution from ground points')
# we need the dsm to get the geoinformation
source='F:/geopython/dsm_python.asc'
# calculate the dtm using the function lasinterpolated
dtm = af.lasinterpolated(source,
"F:/geopython/dtm_python.tif",
x_ground,
y_ground,
z_ground,
method='nearest',
fill_value=0.0,
EPSG=26910,
contour=False,
plot=True)
###############################################################################
# We can also calculate other products like slope, aspect and shaded images.
products = af.las2raster(source,
slopegrid = "F:/geopython/slope.asc",
aspectgrid = "F:/geopython/aspect.asc",
shadegrid = "F:/geopython/relief.asc")
# from the dsm we can calculate contour lines
af.las2contour(source, "F:/geopython/contour", 1, 10)
#
#################################################################################
# Also create RGB images using different products combinations
af.raster2color(source,
"F:/geopython/relief.asc",
"F:/geopython/slope.asc",
target="F:/geopython/colored.tif",
EPSG=26910)
"""
###############################################################################
##################### ######################
##################### THANK YOU ######################
##################### ######################
###############################################################################
"""
| mit |
MechCoder/scikit-garden | skgarden/mondrian/ensemble/forest.py | 1 | 14687 | import numpy as np
from scipy import sparse
from sklearn.base import ClassifierMixin
from sklearn.ensemble.forest import ForestClassifier
from sklearn.ensemble.forest import ForestRegressor
from sklearn.exceptions import DataConversionWarning
from sklearn.exceptions import NotFittedError
from warnings import warn
from sklearn.externals.joblib import delayed, Parallel
from sklearn.preprocessing import LabelEncoder
from sklearn.utils import check_random_state
from sklearn.utils.validation import check_array
from sklearn.utils.validation import check_X_y
from ..tree import MondrianTreeClassifier
from ..tree import MondrianTreeRegressor
def _single_tree_pfit(tree, X, y, classes=None):
if classes is not None:
tree.partial_fit(X, y, classes)
else:
tree.partial_fit(X, y)
return tree
class BaseMondrian(object):
def weighted_decision_path(self, X):
"""
Returns the weighted decision path in the forest.
Each non-zero value in the decision path determines the
weight of that particular node while making predictions.
Parameters
----------
X : array-like, shape = (n_samples, n_features)
Input.
Returns
-------
decision_path : sparse csr matrix, shape = (n_samples, n_total_nodes)
Return a node indicator matrix where non zero elements
indicate the weight of that particular node in making predictions.
est_inds : array-like, shape = (n_estimators + 1,)
weighted_decision_path[:, est_inds[i]: est_inds[i + 1]]
provides the weighted_decision_path of estimator i
"""
X = self._validate_X_predict(X)
est_inds = np.cumsum(
[0] + [est.tree_.node_count for est in self.estimators_])
paths = sparse.hstack(
[est.weighted_decision_path(X) for est in self.estimators_]).tocsr()
return paths, est_inds
# XXX: This is mainly a stripped version of BaseForest.fit
# from sklearn.forest
def partial_fit(self, X, y, classes=None):
"""
Incremental building of Mondrian Forests.
Parameters
----------
X : array_like, shape = [n_samples, n_features]
The input samples. Internally, it will be converted to
``dtype=np.float32``
y: array_like, shape = [n_samples]
Input targets.
classes: array_like, shape = [n_classes]
Ignored for a regression problem. For a classification
problem, if not provided this is inferred from y.
This is taken into account for only the first call to
partial_fit and ignored for subsequent calls.
Returns
-------
self: instance of MondrianForest
"""
X, y = check_X_y(X, y, dtype=np.float32, multi_output=False)
random_state = check_random_state(self.random_state)
# Detect the first call to partial_fit; estimators are created lazily below.
first_call = not hasattr(self, "first_")
if first_call:
self.first_ = True
if isinstance(self, ClassifierMixin):
if first_call:
if classes is None:
classes = LabelEncoder().fit(y).classes_
self.classes_ = classes
self.n_classes_ = len(self.classes_)
# Remap output
n_samples, self.n_features_ = X.shape
y = np.atleast_1d(y)
if y.ndim == 2 and y.shape[1] == 1:
warn("A column-vector y was passed when a 1d array was"
" expected. Please change the shape of y to "
"(n_samples,), for example using ravel().",
DataConversionWarning, stacklevel=2)
self.n_outputs_ = 1
# Initialize estimators at first call to partial_fit.
if first_call:
# Check estimators
self._validate_estimator()
self.estimators_ = []
for _ in range(self.n_estimators):
tree = self._make_estimator(append=False, random_state=random_state)
self.estimators_.append(tree)
# XXX: Switch to threading backend when GIL is released.
if isinstance(self, ClassifierMixin):
self.estimators_ = Parallel(n_jobs=self.n_jobs, verbose=self.verbose)(
delayed(_single_tree_pfit)(t, X, y, classes) for t in self.estimators_)
else:
self.estimators_ = Parallel(n_jobs=self.n_jobs, verbose=self.verbose)(
delayed(_single_tree_pfit)(t, X, y) for t in self.estimators_)
return self
class MondrianForestRegressor(ForestRegressor, BaseMondrian):
"""
A MondrianForestRegressor is an ensemble of MondrianTreeRegressors.
The variance in predictions is reduced by averaging the predictions
from all trees.
Parameters
----------
n_estimators : integer, optional (default=10)
The number of trees in the forest.
max_depth : integer, optional (default=None)
The depth to which each tree is grown. If None, the tree is either
grown to full depth or is constrained by `min_samples_split`.
min_samples_split : integer, optional (default=2)
Stop growing the tree once a node has fewer than
`min_samples_split` samples.
bootstrap : boolean, optional (default=False)
If bootstrap is set to False, then all trees are trained on the
entire training dataset. Else, each tree is fit on n_samples
drawn with replacement from the training dataset.
random_state : int, RandomState instance or None, optional (default=None)
If int, random_state is the seed used by the random number generator;
If RandomState instance, random_state is the random number generator;
If None, the random number generator is the RandomState instance used
by `np.random`.
"""
def __init__(self,
n_estimators=10,
max_depth=None,
min_samples_split=2,
bootstrap=False,
n_jobs=1,
random_state=None,
verbose=0):
super(MondrianForestRegressor, self).__init__(
base_estimator=MondrianTreeRegressor(),
n_estimators=n_estimators,
estimator_params=("max_depth", "min_samples_split",
"random_state"),
bootstrap=bootstrap,
n_jobs=n_jobs,
random_state=random_state,
verbose=verbose)
self.max_depth = max_depth
self.min_samples_split = min_samples_split
def fit(self, X, y):
"""Builds a forest of trees from the training set (X, y).
Parameters
----------
X : array-like or sparse matrix of shape = [n_samples, n_features]
The training input samples. Internally, its dtype will be converted
to ``dtype=np.float32``. If a sparse matrix is provided, it will be
converted into a sparse ``csc_matrix``.
y : array-like, shape = [n_samples] or [n_samples, n_outputs]
The target values (class labels in classification, real numbers in
regression).
Returns
-------
self : object
Returns self.
"""
X, y = check_X_y(X, y, dtype=np.float32, multi_output=False)
return super(MondrianForestRegressor, self).fit(X, y)
def predict(self, X, return_std=False):
"""
Returns the predicted mean and std.
The prediction is a GMM drawn from
\(\sum_{i=1}^T w_i N(m_i, \sigma_i)\) where \(w_i = {1 \over T}\).
The mean \(E[Y | X]\) reduces to \({\sum_{i=1}^T m_i \over T}\)
The variance \(Var[Y | X]\) is given by $$Var[Y | X] = E[Y^2 | X] - E[Y | X]^2$$
$$=\\frac{\sum_{i=1}^T E[Y^2_i| X]}{T} - E[Y | X]^2$$
$$= \\frac{\sum_{i=1}^T (Var[Y_i | X] + E[Y_i | X]^2)}{T} - E[Y| X]^2$$
Parameters
----------
X : array-like, shape = (n_samples, n_features)
Input samples.
return_std : boolean, default (False)
Whether or not to return the standard deviation.
Returns
-------
y : array-like, shape = (n_samples,)
Predictions at X.
std : array-like, shape = (n_samples,)
Standard deviation at X.
"""
X = check_array(X)
if not hasattr(self, "estimators_"):
raise NotFittedError("The model has to be fit before prediction.")
ensemble_mean = np.zeros(X.shape[0])
exp_y_sq = np.zeros_like(ensemble_mean)
for est in self.estimators_:
if return_std:
mean, std = est.predict(X, return_std=True)
exp_y_sq += (std**2 + mean**2)
else:
mean = est.predict(X, return_std=False)
ensemble_mean += mean
ensemble_mean /= len(self.estimators_)
exp_y_sq /= len(self.estimators_)
if not return_std:
return ensemble_mean
std = exp_y_sq - ensemble_mean**2
std[std <= 0.0] = 0.0
std **= 0.5
return ensemble_mean, std
def partial_fit(self, X, y):
"""
Incremental building of Mondrian Forest Regressors.
Parameters
----------
X : array_like, shape = [n_samples, n_features]
The input samples. Internally, it will be converted to
``dtype=np.float32``
y: array_like, shape = [n_samples]
Input targets.
Returns
-------
self : instance of MondrianForestRegressor
"""
return super(MondrianForestRegressor, self).partial_fit(X, y)
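The mean/variance reduction used in `predict` above can be checked numerically on a toy mixture. The per-tree means and standard deviations below are made-up values, not the output of any real forest:

```python
import numpy as np

# Hypothetical per-tree predictions for one test point.
means = np.array([1.0, 2.0, 3.0])
stds = np.array([0.5, 0.5, 0.5])

# Mixture moments, matching the docstring's derivation:
#   E[Y|X]   = average of the per-tree means
#   Var[Y|X] = E[Y^2|X] - E[Y|X]^2, with E[Y^2] averaged over trees.
mix_mean = means.mean()
mix_var = (stds ** 2 + means ** 2).mean() - mix_mean ** 2
```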
class MondrianForestClassifier(ForestClassifier, BaseMondrian):
"""
A MondrianForestClassifier is an ensemble of MondrianTreeClassifiers.
The probability \(p_{j}\) of class \(j\) is given
$$\sum_{i}^{N_{est}} \\frac{p_{j}^i}{N_{est}}$$
Parameters
----------
n_estimators : integer, optional (default=10)
The number of trees in the forest.
max_depth : integer, optional (default=None)
The depth to which each tree is grown. If None, the tree is either
grown to full depth or is constrained by `min_samples_split`.
min_samples_split : integer, optional (default=2)
Stop growing the tree once a node has fewer than
`min_samples_split` samples.
bootstrap : boolean, optional (default=False)
If bootstrap is set to False, then all trees are trained on the
entire training dataset. Else, each tree is fit on n_samples
drawn with replacement from the training dataset.
random_state : int, RandomState instance or None, optional (default=None)
If int, random_state is the seed used by the random number generator;
If RandomState instance, random_state is the random number generator;
If None, the random number generator is the RandomState instance used
by `np.random`.
"""
def __init__(self,
n_estimators=10,
max_depth=None,
min_samples_split=2,
bootstrap=False,
n_jobs=1,
random_state=None,
verbose=0):
super(MondrianForestClassifier, self).__init__(
base_estimator=MondrianTreeClassifier(),
n_estimators=n_estimators,
estimator_params=("max_depth", "min_samples_split",
"random_state"),
bootstrap=bootstrap,
n_jobs=n_jobs,
random_state=random_state,
verbose=verbose)
self.max_depth = max_depth
self.min_samples_split = min_samples_split
def fit(self, X, y):
"""Builds a forest of trees from the training set (X, y).
Parameters
----------
X : array-like or sparse matrix of shape = [n_samples, n_features]
The training input samples. Internally, its dtype will be converted
to ``dtype=np.float32``. If a sparse matrix is provided, it will be
converted into a sparse ``csc_matrix``.
y : array-like, shape = [n_samples] or [n_samples, n_outputs]
The target values (class labels in classification, real numbers in
regression).
Returns
-------
self : object
Returns self.
"""
X, y = check_X_y(X, y, dtype=np.float32, multi_output=False)
return super(MondrianForestClassifier, self).fit(X, y)
def partial_fit(self, X, y, classes=None):
"""
Incremental building of Mondrian Forest Classifiers.
Parameters
----------
X : array_like, shape = [n_samples, n_features]
The input samples. Internally, it will be converted to
``dtype=np.float32``
y: array_like, shape = [n_samples]
Input targets.
classes: array_like, shape = [n_classes]
Ignored for a regression problem. For a classification
problem, if not provided this is inferred from y.
This is taken into account for only the first call to
partial_fit and ignored for subsequent calls.
Returns
-------
self: instance of MondrianForestClassifier
"""
return super(MondrianForestClassifier, self).partial_fit(
X, y, classes=classes)
| bsd-3-clause |
WarrenWeckesser/scikits-image | doc/examples/plot_ransac.py | 24 | 1589 | """
=========================================
Robust line model estimation using RANSAC
=========================================
In this example we see how to robustly fit a line model to faulty data using
the RANSAC algorithm.
"""
import numpy as np
from matplotlib import pyplot as plt
from skimage.measure import LineModel, ransac
np.random.seed(seed=1)
# generate coordinates of line
x = np.arange(-200, 200)
y = 0.2 * x + 20
data = np.column_stack([x, y])
# add faulty data
faulty = np.array(30 * [(180., -100)])
faulty += 5 * np.random.normal(size=faulty.shape)
data[:faulty.shape[0]] = faulty
# add gaussian noise to coordinates
noise = np.random.normal(size=data.shape)
data += 0.5 * noise
data[::2] += 5 * noise[::2]
data[::4] += 20 * noise[::4]
# fit line using all data
model = LineModel()
model.estimate(data)
# robustly fit line only using inlier data with RANSAC algorithm
model_robust, inliers = ransac(data, LineModel, min_samples=2,
residual_threshold=1, max_trials=1000)
outliers = ~inliers
# generate coordinates of estimated models
line_x = np.arange(-250, 250)
line_y = model.predict_y(line_x)
line_y_robust = model_robust.predict_y(line_x)
fig, ax = plt.subplots()
ax.plot(data[inliers, 0], data[inliers, 1], '.b', alpha=0.6,
label='Inlier data')
ax.plot(data[outliers, 0], data[outliers, 1], '.r', alpha=0.6,
label='Outlier data')
ax.plot(line_x, line_y, '-k', label='Line model from all data')
ax.plot(line_x, line_y_robust, '-b', label='Robust line model')
ax.legend(loc='lower left')
plt.show()
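The core RANSAC loop used above — sample a minimal subset, fit a candidate model, keep the one with the most inliers — can be written in a few lines. This is a sketch of the algorithm, not skimage's implementation, and all names are illustrative:

```python
import numpy as np

def ransac_line(data, min_samples=2, residual_threshold=1.0,
                max_trials=100, seed=0):
    """Fit y = a*x + b robustly; return the (a, b) with most inliers."""
    rng = np.random.RandomState(seed)
    best_count, best_model = -1, None
    for _ in range(max_trials):
        i, j = rng.choice(len(data), size=min_samples, replace=False)
        (x0, y0), (x1, y1) = data[i], data[j]
        if x1 == x0:  # degenerate sample, skip
            continue
        a = (y1 - y0) / (x1 - x0)
        b = y0 - a * x0
        residuals = np.abs(data[:, 1] - (a * data[:, 0] + b))
        count = int((residuals < residual_threshold).sum())
        if count > best_count:
            best_count, best_model = count, (a, b)
    return best_model

# On noiseless data every minimal sample recovers the true line.
x = np.arange(-200, 200, dtype=float)
clean = np.column_stack([x, 0.2 * x + 20.0])
a, b = ransac_line(clean)
```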
| bsd-3-clause |
zhenv5/scikit-learn | sklearn/manifold/setup.py | 99 | 1243 | import os
from os.path import join
import numpy
from numpy.distutils.misc_util import Configuration
from sklearn._build_utils import get_blas_info
def configuration(parent_package="", top_path=None):
config = Configuration("manifold", parent_package, top_path)
libraries = []
if os.name == 'posix':
libraries.append('m')
config.add_extension("_utils",
sources=["_utils.c"],
include_dirs=[numpy.get_include()],
libraries=libraries,
extra_compile_args=["-O3"])
cblas_libs, blas_info = get_blas_info()
eca = blas_info.pop('extra_compile_args', [])
eca.append("-O4")
config.add_extension("_barnes_hut_tsne",
libraries=cblas_libs,
sources=["_barnes_hut_tsne.c"],
include_dirs=[join('..', 'src', 'cblas'),
numpy.get_include(),
blas_info.pop('include_dirs', [])],
extra_compile_args=eca, **blas_info)
return config
if __name__ == "__main__":
from numpy.distutils.core import setup
setup(**configuration().todict())
| bsd-3-clause |
mhdella/scikit-learn | examples/ensemble/plot_partial_dependence.py | 249 | 4456 | """
========================
Partial Dependence Plots
========================
Partial dependence plots show the dependence between the target function [1]_
and a set of 'target' features, marginalizing over the
values of all other features (the complement features). Due to the limits
of human perception the size of the target feature set must be small (usually,
one or two) thus the target features are usually chosen among the most
important features
(see :attr:`~sklearn.ensemble.GradientBoostingRegressor.feature_importances_`).
This example shows how to obtain partial dependence plots from a
:class:`~sklearn.ensemble.GradientBoostingRegressor` trained on the California
housing dataset. The example is taken from [HTF2009]_.
The plot shows four one-way and one two-way partial dependence plots.
The target variables for the one-way PDP are:
median income (`MedInc`), avg. occupants per household (`AvgOccup`),
median house age (`HouseAge`), and avg. rooms per household (`AveRooms`).
We can clearly see that the median house price shows a linear relationship
with the median income (top left) and that the house price drops when the
avg. occupants per household increases (top middle).
The top right plot shows that the house age in a district does not have
a strong influence on the (median) house price; nor does the average number
of rooms per household.
The tick marks on the x-axis represent the deciles of the feature values
in the training data.
Partial dependence plots with two target features enable us to visualize
interactions among them. The two-way partial dependence plot shows the
dependence of median house price on joint values of house age and avg.
occupants per household. We can clearly see an interaction between the
two features:
For an avg. occupancy greater than two, the house price is nearly independent
of the house age, whereas for values less than two there is a strong dependence
on age.
.. [HTF2009] T. Hastie, R. Tibshirani and J. Friedman,
"Elements of Statistical Learning Ed. 2", Springer, 2009.
.. [1] For classification you can think of it as the regression score before
the link function.
"""
print(__doc__)
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from sklearn.cross_validation import train_test_split
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.ensemble.partial_dependence import plot_partial_dependence
from sklearn.ensemble.partial_dependence import partial_dependence
from sklearn.datasets.california_housing import fetch_california_housing
# fetch California housing dataset
cal_housing = fetch_california_housing()
# split 80/20 train-test
X_train, X_test, y_train, y_test = train_test_split(cal_housing.data,
cal_housing.target,
test_size=0.2,
random_state=1)
names = cal_housing.feature_names
print('_' * 80)
print("Training GBRT...")
clf = GradientBoostingRegressor(n_estimators=100, max_depth=4,
learning_rate=0.1, loss='huber',
random_state=1)
clf.fit(X_train, y_train)
print("done.")
print('_' * 80)
print('Convenience plot with ``partial_dependence_plots``')
print()
features = [0, 5, 1, 2, (5, 1)]
fig, axs = plot_partial_dependence(clf, X_train, features, feature_names=names,
n_jobs=3, grid_resolution=50)
fig.suptitle('Partial dependence of house value on nonlocation features\n'
'for the California housing dataset')
plt.subplots_adjust(top=0.9) # tight_layout causes overlap with suptitle
print('_' * 80)
print('Custom 3d plot via ``partial_dependence``')
print()
fig = plt.figure()
target_feature = (1, 5)
pdp, (x_axis, y_axis) = partial_dependence(clf, target_feature,
X=X_train, grid_resolution=50)
XX, YY = np.meshgrid(x_axis, y_axis)
Z = pdp.T.reshape(XX.shape).T
ax = Axes3D(fig)
surf = ax.plot_surface(XX, YY, Z, rstride=1, cstride=1, cmap=plt.cm.BuPu)
ax.set_xlabel(names[target_feature[0]])
ax.set_ylabel(names[target_feature[1]])
ax.set_zlabel('Partial dependence')
# pretty init view
ax.view_init(elev=22, azim=122)
plt.colorbar(surf)
plt.suptitle('Partial dependence of house value on median age and '
'average occupancy')
plt.subplots_adjust(top=0.9)
plt.show()
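The marginalization described in the docstring — fix the target feature on a grid and average predictions over the training data — can be reproduced by brute force. The toy model below is a hypothetical linear function, not the GBRT trained above:

```python
import numpy as np

def partial_dependence_1d(predict, X, feature, grid):
    # For each grid value v: clamp column `feature` to v and
    # average the model's predictions over all rows of X.
    Xc = X.copy()
    averaged = []
    for v in grid:
        Xc[:, feature] = v
        averaged.append(predict(Xc).mean())
    return np.array(averaged)

rng = np.random.RandomState(0)
X = rng.normal(size=(200, 2))
f = lambda A: 2.0 * A[:, 0] + A[:, 1]  # toy model: PD over x0 is 2*v + mean(x1)
grid = np.array([-1.0, 0.0, 1.0])
pd_vals = partial_dependence_1d(f, X, 0, grid)
```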
| bsd-3-clause |
ratnania/pigasus | python/plugin/parabolic_monge_ampere.py | 1 | 6136 | # -*- coding: UTF-8 -*-
#! /usr/bin/python
import sys
import numpy as np
from pigasus.gallery.basicPDE import *
import matplotlib.pyplot as plt
import numpy as np
from caid.cad_geometry import cad_nurbs
from __main__ import __file__ as filename
# ...
abs = np.abs; sin = np.sin ; cos = np.cos ; exp = np.exp ; log = np.log ; sqrt = np.sqrt
pi = np.pi; atan = np.arctan2 ; cosh = np.cosh
sech = lambda x: 1./cosh(x)
# ...
#-----------------------------------
n = [15,15]
p = [ 3, 3]
#-----------------------------------
#-----------------------------------
# ...
C0 = 1.0
rho0 = lambda x,y : 1.
# C1 = 0.616805883732
# t = 0.5
# rho1 = lambda x,y : (1. + 5*exp(-50*abs((x-0.5-t)**2+(y-0.5)**2-0.09)))
C1 = 1.75484181939
rho1 = lambda x,y : ( 1. / (2. + cos(8*pi*sqrt((x-0.5)**2+(y-0.5)**2))))
# ... test7
#xc = 0.7 ; yc = 0.5
#C1 = 0.281648379406
#
#r = lambda s,t : sqrt( (s-xc)**2 + (t-yc)**2 )
#theta = lambda s,t : atan(t-yc,s-xc)
#def rho1(s,t):
# r_ = r(s,t) ; t_ = theta(s,t)
# val = C1 * (1. + 9./(1. + (10*r_*cos(t_-20*r_**2))**2) )
# return val
# ...
c_rho = C0/C1
# ...
#-----------------------------------
#-----------------------------------
# ...
from caid.cad_geometry import square as domain
geo = domain(n=n,p=p)
gamma = 11.
eps = 1.
dt = 1.
rtol = 1.e-3
maxiter = 10
verbose = True
# ...
# values of gradu.n at the boundary
# ...
def func_g(x,y):
return [x,y]
# ...
# ...
# values of u at the boundary
# ...
bc_neumann={}
bc_neumann [0,0] = func_g
bc_neumann [0,1] = func_g
bc_neumann [0,2] = func_g
bc_neumann [0,3] = func_g
# ...
# ...
tc = {}
tc['A'] = lambda x,y : [-eps*gamma, 0., 0., -eps*gamma]
tc['b'] = lambda x,y : [eps]
tc['u'] = lambda x,y : [0.]
tc['f'] = lambda x,y : [0.5*(x**2+y**2)]
tc['bc_neumann'] = bc_neumann
# ...
# ...
PDE = basicPDE(geometry=geo, testcase=tc)
PDE.meanConstraint = False
# ...
# ...
V = PDE.space
V.nderiv_pts = 2
# ...
# ...
U = PDE.unknown
rhs = PDE.rhs
# ...
# ...
PDE.assembly()
# ...
#-----------------------------------
def F(U,x,y):
# ...
P = V.get_points()
x = P[0,0,:]
xdu = P[0,1,:]
xdv = P[0,2,:]
xduu = P[0,3,:]
xduv = P[0,4,:]
xdvv = P[0,5,:]
y = P[1,0,:]
ydu = P[1,1,:]
ydv = P[1,2,:]
yduu = P[1,3,:]
yduv = P[1,4,:]
ydvv = P[1,5,:]
jac = xdu * ydv - xdv * ydu
# ...
# ...
D = U.evaluate(patch_id=0, nderiv=2)
_U = D[0,0,:]
Udu = D[0,1,:]
Udv = D[0,2,:]
Uduu = D[0,3,:]
Uduv = D[0,4,:]
Udvv = D[0,5,:]
Udx = ydv * Udu - ydu * Udv
Udx /= jac
Udy = - xdv * Udu + xdu * Udv
Udy /= jac
C1 = Uduu - xduu * Udx - yduu * Udy
C2 = Uduv - xduv * Udx - yduv * Udy
C3 = Udvv - xdvv * Udx - ydvv * Udy
Udxx = C1 * ydv**2 - 2 * C2 * ydu * ydv + C3 * ydu**2
Udxx /= jac**2
Udxy = - C1 * xdv * ydv + C2 *(xdu * ydv + xdv * ydu) - C3 * xdu * ydu
Udxy /= jac**2
Udyy = C1 * xdv**2 - 2 * C2 * xdu * xdv + C3 * xdu**2
Udyy /= jac**2
# ...
Hessian = Udxx * Udyy - Udxy**2
# _F = sqrt ( abs(Hessian * rho1 (Udx,Udy) / (c_rho * rho0(x,y))) )
_F = log ( abs(Hessian * rho1 (Udx,Udy) / (c_rho * rho0(x,y))) )
# f_values = c_rho * rho0(x,y) / rho1 (Udx,Udy)
# _F = - np.sqrt ( Udxx**2 + Udyy**2 + 2 * Udxy**2 + 2 * f_values )
return [_F]
#-----------------------------------
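The first-derivative chain-rule inversion inside `F` — recovering `Udx`, `Udy` from `Udu`, `Udv` via the Jacobian of the mapping — can be sanity-checked on a linear map, where the derivatives are constant. All numbers below are made up for the check:

```python
# Mapping derivatives of a linear map (u, v) -> (x, y).
xdu, xdv, ydu, ydv = 2.0, 1.0, 0.5, 3.0
jac = xdu * ydv - xdv * ydu

# For U with known gradient (Udx, Udy) = (1, 2), the chain rule gives
# Udu = Udx*xdu + Udy*ydu and Udv = Udx*xdv + Udy*ydv.
Udx_true, Udy_true = 1.0, 2.0
Udu = Udx_true * xdu + Udy_true * ydu
Udv = Udx_true * xdv + Udy_true * ydv

# Inversion used in F() recovers the gradient exactly.
Udx = (ydv * Udu - ydu * Udv) / jac
Udy = (-xdv * Udu + xdu * Udv) / jac
print(Udx, Udy)  # -> 1.0 2.0
```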
#-----------------------------------
def plotMesh(PDE, ntx=60, nty=60):
from matplotlib import pylab as plt
geo = PDE.geometry
patch_id = 0
nrb = geo[patch_id]
C = np.zeros_like(nrb.points)
_C = U.tomatrix(patch_id)
shape = list(nrb.shape)
C = np.zeros(shape+[3])
C[...,0] = _C
srf = cad_nurbs(nrb.knots, C, weights= nrb.weights)
ub = srf.knots[0][0]
ue = srf.knots[0][-1]
vb = srf.knots[1][0]
ve = srf.knots[1][-1]
tx = np.linspace(ub, ue, ntx)
ty = np.linspace(vb, ve, nty)
nderiv = 1
nderiv = 2
# ...
P = nrb.evaluate_deriv(tx,ty,nderiv=nderiv)
x = P[0,:,:,0]
xdu = P[1,:,:,0]
xdv = P[2,:,:,0]
xduu = P[3,:,:,0]
xduv = P[4,:,:,0]
xdvv = P[5,:,:,0]
y = P[0,:,:,1]
ydu = P[1,:,:,1]
ydv = P[2,:,:,1]
yduu = P[3,:,:,1]
yduv = P[4,:,:,1]
ydvv = P[5,:,:,1]
jac = xdu * ydv - xdv * ydu
# ...
# ...
D = srf.evaluate_deriv(tx,ty,nderiv=nderiv)
Udu = D[1,...,0]
Udv = D[2,...,0]
Uduu = D[3,...,0]
Uduv = D[4,...,0]
Udvv = D[5,...,0]
Udx = ydv * Udu - ydu * Udv
Udx /= jac
Udy = - xdv * Udu + xdu * Udv
Udy /= jac
C1 = Uduu - xduu * Udx - yduu * Udy
C2 = Uduv - xduv * Udx - yduv * Udy
C3 = Udvv - xdvv * Udx - ydvv * Udy
Udxx = C1 * ydv**2 - 2 * C2 * ydu * ydv + C3 * ydu**2
Udxy = - C1 * xdv * ydv + C2 *(xdu * ydv + xdv * ydu) - C3 * xdu * ydu
Udyy = C1 * xdv**2 - 2 * C2 * xdu * xdv + C3 * xdu**2
# ...
# ...
fig = plt.figure()
# Udx[:,0] = 0.
# Udx[:,-1] = 1.
# Udy[0,:] = 0.
# Udy[-1,:] = 1.
for i,v in enumerate(ty):
# phidx = Udu[:,i]
# phidy = Udv[:,i]
phidx = Udx[:,i]
phidy = Udy[:,i]
plt.plot(phidx, phidy, '-b')
for i,u in enumerate(tx):
# phidx = Udu[i,:]
# phidy = Udv[i,:]
phidx = Udx[i,:]
phidy = Udy[i,:]
plt.plot(phidx, phidy, '-b')
plt.show()
#-----------------------------------
# ...
def rhs_func(x,y):
return F(U,x,y)
rhs.set_func(rhs_func)
# ...
# ...
u0 = lambda x,y : 0.5*(x**2+y**2)
PDE.interpolate(u0)
#PDE.plot(); plt.colorbar() ; plt.show()
# ...
# ...
list_Err = [1.e6]
t = 0.
i = 0
while (list_Err[-1] > rtol) and (i < maxiter):
t += dt
un = U.get()
PDE.update()
PDE.solve(rhs)
dn = U.get()
uh = un + dt * dn
U.set(uh)
err = np.linalg.norm(dn) / np.linalg.norm(un)
list_Err.append(err)
if verbose:
print(i, ": "," |F(x)| = ", list_Err[-1])
i += 1
# ...
# ...
list_Err = np.asarray(list_Err[1:])
# ...
# ...
plotMesh(PDE, ntx=60, nty=60)
# ...
| mit |
jiajunshen/partsNet | scripts/popLargeMatchUpdateVaryParts.py | 1 | 12328 | from __future__ import division, print_function,absolute_import
import pylab as plt
import amitgroup.plot as gr
import numpy as np
import amitgroup as ag
import os
import pnet
import matplotlib.pylab as plot
from pnet.cyfuncs import index_map_pooling
try:
    from queue import Queue  # Python 3
except ImportError:
    from Queue import Queue  # Python 2
def extract(ims,allLayers):
#print(allLayers)
curX = ims
for layer in allLayers:
#print('-------------')
#print(layer)
curX = layer.extract(curX)
#print(np.array(curX).shape)
#print('------------------')
return curX
def partsPool(originalPartsRegion, numParts):
partsGrid = np.zeros((1,1,numParts))
for i in range(originalPartsRegion.shape[0]):
for j in range(originalPartsRegion.shape[1]):
if(originalPartsRegion[i,j]!=-1):
partsGrid[0,0,originalPartsRegion[i,j]] = 1
return partsGrid
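`partsPool` builds a 1×1×P binary indicator of which parts occur anywhere in the region, with −1 marking uncoded cells. A standalone copy (renamed to avoid clashing with the function above) shows the behaviour on a tiny region:

```python
import numpy as np

def parts_pool(region, num_parts):
    # 1x1xP indicator: channel p is 1 iff part p occurs in `region`;
    # -1 entries are background and are ignored.
    grid = np.zeros((1, 1, num_parts))
    for i in range(region.shape[0]):
        for j in range(region.shape[1]):
            if region[i, j] != -1:
                grid[0, 0, region[i, j]] = 1
    return grid

region = np.array([[0, -1], [2, 0]])
print(parts_pool(region, 4)[0, 0])  # parts 0 and 2 are present
```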
def test(ims,labels,totalNumPartsCoded,net):
yhat = net.classify((ims,totalNumPartsCoded))
return yhat == labels
#def trainPOP():
if pnet.parallel.main(__name__):
#X = np.load("testMay151.npy")
#X = np.load("_3_100*6*6_1000*1*1_Jun_16_danny.npy")
X = np.load("sequential6*6.npy")
model = X.item()
# get num of Parts
numParts = model['layers'][1]['num_parts']
net = pnet.PartsNet.load_from_dict(model)
allLayer = net.layers
ims,labels = ag.io.load_mnist('training')
trainingDataNum = 1000
firstLayerShape = 6
extractedFeature = extract(ims[0:trainingDataNum],allLayer[0:2])[0]
print(extractedFeature.shape)
extractedFeature = extractedFeature.reshape(extractedFeature.shape[0:3])
partsPlot = np.zeros((numParts,firstLayerShape,firstLayerShape))
partsCodedNumber = np.zeros(numParts)
imgRegion= [[] for x in range(numParts)]
partsRegion = [[] for x in range(numParts)]
for i in range(trainingDataNum):
codeParts = extractedFeature[i]
for m in range(29 - firstLayerShape):
for n in range(29 - firstLayerShape):
if(codeParts[m,n]!=-1):
partsPlot[codeParts[m,n]]+=ims[i,m:m+firstLayerShape,n:n+firstLayerShape]
partsCodedNumber[codeParts[m,n]]+=1
for j in range(numParts):
partsPlot[j] = partsPlot[j]/partsCodedNumber[j]
secondLayerCodedNumber = 0
secondLayerShape = 12
frame = (secondLayerShape - firstLayerShape)/2
frame = int(frame)
totalRange = 29 - firstLayerShape
if 1:
for i in range(trainingDataNum):
codeParts = extractedFeature[i]
for m in range(totalRange)[frame:totalRange - frame]:
for n in range(totalRange)[frame:totalRange - frame]:
if(codeParts[m,n]!=-1):
imgRegion[codeParts[m,n]].append(ims[i, m - frame:m + secondLayerShape - frame,n - frame:n + secondLayerShape - frame])
secondLayerCodedNumber+=1
partsGrid = partsPool(codeParts[m-frame:m+frame + 1,n-frame:n+frame + 1],numParts)
partsRegion[codeParts[m,n]].append(partsGrid)
##second-layer parts
numSecondLayerParts = 20
numSecondLayerPartsList = np.zeros(numParts)
for i in range(numParts):
numSecondLayerPartsList[i] = np.asarray(partsRegion[i]).shape[0]
patchPPart = np.floor(np.sum(numSecondLayerPartsList)/2000)
for i in range(numParts):
numSecondLayerPartsList[i] = np.minimum(np.floor(numSecondLayerPartsList[i]/patchPPart),numSecondLayerParts)
numSecondLayerPartsList = np.asarray(numSecondLayerPartsList,dtype = np.uint8)
totalNumSecondParts = np.sum(numSecondLayerPartsList)
print("00000000000000000000000000000000000000000000")
print(numSecondLayerPartsList)
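The budget logic above — scale each group's sample count by `patchPPart` (chosen so roughly 2000 second-layer samples survive in total) and cap at `numSecondLayerParts` — can be exercised on made-up counts:

```python
import numpy as np

counts = np.array([5000., 30., 12., 8.])  # hypothetical samples per part
cap = 20                                  # stands in for numSecondLayerParts
patch_p_part = np.floor(counts.sum() / 2000)            # floor(5050/2000) = 2.0
alloc = np.minimum(np.floor(counts / patch_p_part), cap)  # [20, 15, 6, 4]
```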
allPartsLayer = [[pnet.PartsLayer(numSecondLayerPartsList[i],(1,1),
settings=dict(outer_frame = 0,
threshold = 5,
sample_per_image = 1,
max_samples=10000,
min_prob = 0.005))]
for i in range(numParts)]
allPartsLayerImg = np.zeros((numParts,numSecondLayerParts,secondLayerShape,secondLayerShape))
allPartsLayerImgNumber = np.zeros((numParts,numSecondLayerParts))
zeroParts = 0
imgRegionPool = [[] for i in range(numParts * numSecondLayerParts)]
for i in range(numParts):
if(not partsRegion[i]):
continue
allPartsLayer[i][0].train_from_samples(np.array(partsRegion[i]),None)
extractedFeaturePart = extract(np.array(partsRegion[i],dtype = np.uint8),allPartsLayer[i])[0]
print(extractedFeaturePart.shape)
for j in range(len(partsRegion[i])):
if(extractedFeaturePart[j,0,0,0]!=-1):
partIndex = extractedFeaturePart[j,0,0,0]
allPartsLayerImg[i,partIndex]+=imgRegion[i][j]
imgRegionPool[i * numSecondLayerParts + partIndex].append(imgRegion[i][j])
allPartsLayerImgNumber[i,partIndex]+=1
else:
zeroParts+=1
for i in range(numParts):
for j in range(numSecondLayerParts):
if(allPartsLayerImgNumber[i,j]):
allPartsLayerImg[i,j] = allPartsLayerImg[i,j]/allPartsLayerImgNumber[i,j]
"""
Visualize the SuperParts
"""
settings = {'interpolation':'nearest','cmap':plot.cm.gray,}
settings['vmin'] = 0
settings['vmax'] = 1
plotData = np.ones(((2 + secondLayerShape)*100+2,(2+secondLayerShape)*(numSecondLayerParts + 1)+2))*0.8
visualShiftParts = 0
if 0:
allPartsPlot = np.zeros((20,numSecondLayerParts + 1,12,12))
gr.images(partsPlot.reshape(numParts,6,6),zero_to_one=False,vmin = 0, vmax = 1)
allPartsPlot[:,0] = 0.5
allPartsPlot[:,0,3:9,3:9] = partsPlot[20:40]
allPartsPlot[:,1:,:,:] = allPartsLayerImg[20:40]
gr.images(allPartsPlot.reshape(20 * (numSecondLayerParts + 1),12,12),zero_to_one=False, vmin = 0, vmax =1)
elif 1:
for i in range(numSecondLayerParts + 1):
for j in range(100):
if i == 0:
plotData[5 + j * (2 + secondLayerShape):5+firstLayerShape + j * (2 + secondLayerShape), 5 + i * (2 + secondLayerShape): 5+firstLayerShape + i * (2 + secondLayerShape)] = partsPlot[j+visualShiftParts]
else:
plotData[2 + j * (2 + secondLayerShape):2 + secondLayerShape+ j * (2 + secondLayerShape),2 + i * (2 + secondLayerShape): 2+ secondLayerShape + i * (2 + secondLayerShape)] = allPartsLayerImg[j+visualShiftParts,i-1]
plot.figure(figsize=(10,40))
plot.axis('off')
plot.imshow(plotData, **settings)
plot.savefig('test3.pdf',format='pdf',dpi=900)
else:
pass
"""
Train A Class-Model Layer
Building a parts-layer consisting of all the components from all groups
"""
secondLayerPartsLayer = [pnet.PartsLayer(totalNumSecondParts,(1,1),settings = dict(outer_frame = 0, threshold = 5, sample_per_image = 1, max_samples=10000, min_prob = 0.005))]
secondLayerPartsLayer[0]._parts = np.zeros((totalNumSecondParts,)+allPartsLayer[0][0]._parts.shape[1:])
secondLayerPartsList = []
indexNumber = 0
for i in range(numParts):
for j in range(numSecondLayerPartsList[i]):
secondLayerPartsLayer[0]._parts[indexNumber] = allPartsLayer[i][0]._parts[j]
indexNumber+=1
print("secondLayerPartsLayer parts shape:")
print(secondLayerPartsLayer[0]._parts.shape)
digits = range(10)
sup_ims = []
sup_labels = []
classificationTrainingNum = 100
for d in digits:
ims0 = ag.io.load_mnist('training', [d], selection = slice(classificationTrainingNum), return_labels = False)
sup_ims.append(ims0)
sup_labels.append(d * np.ones(len(ims0),dtype = np.int64))
sup_ims = np.concatenate(sup_ims, axis = 0)
sup_labels = np.concatenate(sup_labels,axis = 0)
curX = extract(sup_ims,allLayer[0:2])[0]
#print(curX.shape)
curX = curX.reshape(curX.shape[0:3])
secondLevelCurx = np.zeros((10 * classificationTrainingNum,29 - secondLayerShape,29 - secondLayerShape,1,1,numParts))
secondLevelCurxCenter = np.zeros((10 * classificationTrainingNum,29- secondLayerShape,29 - secondLayerShape))
#for i in range(10 * classificationTrainingNum):
# codeParts = curX[i]
for m in range(frame, totalRange - frame):
for n in range(frame, totalRange - frame):
secondLevelCurx[:,m-frame,n-frame] = index_map_pooling(curX[:,m-frame:m+frame+1,n-frame:n+frame+1],numParts,(2 * frame + 1,2 * frame + 1),(2 * frame + 1,2 * frame + 1))
secondLevelCurxCenter[:,m-frame,n-frame] = curX[:,m,n]
secondLevelCurx = np.asarray(secondLevelCurx.reshape(secondLevelCurx.shape[0],29- secondLayerShape,29-secondLayerShape,numParts),dtype = np.uint8)
thirdLevelCurx = np.zeros((10 * classificationTrainingNum, 29 - secondLayerShape,29 - secondLayerShape))
thirdLevelCurx = extract(secondLevelCurx,secondLayerPartsLayer)[0]
print(thirdLevelCurx.shape)
print("+++++++++++++++++++++++++++++++++++++++++++++")
if 1:
classificationLayers = [
pnet.PoolingLayer(shape = (4,4),strides = (4,4)),
#pnet.MixtureClassificationLayer(n_components = 5, min_prob = 1e-7, block_size = 20)
pnet.SVMClassificationLayer(C=1.0)
]
classificationNet = pnet.PartsNet(classificationLayers)
classificationNet.train((np.array(thirdLevelCurx,dtype = np.int64),totalNumSecondParts),sup_labels[:])
print("Training Success!!")
if 1:
testImg,testLabels = ag.io.load_mnist('testing')
testingNum = testLabels.shape[0]
print("training extract Begin")
curTestX = extract(testImg, allLayer[0:2])[0]
print("training extract End")
curTestX = curTestX.reshape(curTestX.shape[0:3])
secondLevelCurTestX = np.zeros((testingNum, 29 - secondLayerShape,29 - secondLayerShape,1,1,numParts))
secondLevelCurTestXCenter = np.zeros((testingNum, 29 - secondLayerShape,29 - secondLayerShape))
import time
start = time.time()
#for i in range(testingNum):
# codeParts = curTestX[i]
for m in range(frame, totalRange - frame):
for n in range(frame, totalRange - frame):
secondLevelCurTestX[:,m-frame,n-frame] = index_map_pooling(curTestX[:,m-frame:m+frame + 1,n-frame:n+frame + 1],numParts,(2 * frame + 1,2 * frame + 1),(2 * frame + 1,2 * frame + 1))
secondLevelCurTestXCenter[:,m-frame,n-frame] = curTestX[:,m,n]
afterPool = time.time()
print(afterPool - start)
secondLevelCurTestX = np.asarray(secondLevelCurTestX.reshape(secondLevelCurTestX.shape[0],29 - secondLayerShape, 29 - secondLayerShape, numParts),dtype = np.uint8)
thirdLevelCurTestX = np.zeros((testingNum, 29 - secondLayerShape, 29 - secondLayerShape))
featureMap = [[] for i in range(numParts)]
thirdLevelCurTestX = extract(secondLevelCurTestX,secondLayerPartsLayer)[0]
end = time.time()
print(end-afterPool)
print(thirdLevelCurTestX.shape)
testImg_Input = np.array(thirdLevelCurTestX,dtype = np.int64)
testImg_batches = np.array_split(testImg_Input,200)
testLabels_batches = np.array_split(testLabels, 200)
args = [tup + (totalNumSecondParts,) + (classificationNet,) for tup in zip(testImg_batches,testLabels_batches)]
corrects = 0
total = 0
def format_error_rate(pr):
return "{:.2f}%".format(100 * (1-pr))
print("Testing Starting...")
for i, res in enumerate(pnet.parallel.starmap_unordered(test,args)):
if i !=0 and i % 20 ==0:
print("{0:05}/{1:05} Error rate: {2}".format(total, testingNum, format_error_rate(pr)))
corrects += res.sum()
total += res.size
pr = corrects / total
print("Final error rate:", format_error_rate(pr))
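Independent of pnet, the batched accuracy bookkeeping above reduces to summing per-batch correctness counts; a small sketch with hypothetical predictions:

```python
import numpy as np

def format_error_rate(pr):
    return "{:.2f}%".format(100 * (1 - pr))

predictions = np.array([1, 0, 1, 1, 0, 1, 1, 1])    # hypothetical model output
labels      = np.array([1, 0, 0, 1, 0, 1, 1, 0])
batches = np.array_split(predictions == labels, 3)  # mimic the batched split

corrects = sum(int(b.sum()) for b in batches)
total = sum(b.size for b in batches)
pr = corrects / total
```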
for i in range(numParts):
print(np.asarray(partsRegion[i]).shape)
| bsd-3-clause |
vermouthmjl/scikit-learn | examples/ensemble/plot_adaboost_twoclass.py | 347 | 3268 | """
==================
Two-class AdaBoost
==================
This example fits an AdaBoosted decision stump on a non-linearly separable
classification dataset composed of two "Gaussian quantiles" clusters
(see :func:`sklearn.datasets.make_gaussian_quantiles`) and plots the decision
boundary and decision scores. The distributions of decision scores are shown
separately for samples of class A and B. The predicted class label for each
sample is determined by the sign of the decision score. Samples with decision
scores greater than zero are classified as B, and are otherwise classified
as A. The magnitude of a decision score determines the degree of likeness with
the predicted class label. Additionally, a new dataset could be constructed
containing a desired purity of class B, for example, by only selecting samples
with a decision score above some value.
"""
print(__doc__)
# Author: Noel Dawe <noel.dawe@gmail.com>
#
# License: BSD 3 clause
import numpy as np
import matplotlib.pyplot as plt
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.datasets import make_gaussian_quantiles
# Construct dataset
X1, y1 = make_gaussian_quantiles(cov=2.,
n_samples=200, n_features=2,
n_classes=2, random_state=1)
X2, y2 = make_gaussian_quantiles(mean=(3, 3), cov=1.5,
n_samples=300, n_features=2,
n_classes=2, random_state=1)
X = np.concatenate((X1, X2))
y = np.concatenate((y1, - y2 + 1))
# Create and fit an AdaBoosted decision tree
bdt = AdaBoostClassifier(DecisionTreeClassifier(max_depth=1),
algorithm="SAMME",
n_estimators=200)
bdt.fit(X, y)
plot_colors = "br"
plot_step = 0.02
class_names = "AB"
plt.figure(figsize=(10, 5))
# Plot the decision boundaries
plt.subplot(121)
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, plot_step),
np.arange(y_min, y_max, plot_step))
Z = bdt.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
cs = plt.contourf(xx, yy, Z, cmap=plt.cm.Paired)
plt.axis("tight")
# Plot the training points
for i, n, c in zip(range(2), class_names, plot_colors):
idx = np.where(y == i)
plt.scatter(X[idx, 0], X[idx, 1],
c=c, cmap=plt.cm.Paired,
label="Class %s" % n)
plt.xlim(x_min, x_max)
plt.ylim(y_min, y_max)
plt.legend(loc='upper right')
plt.xlabel('x')
plt.ylabel('y')
plt.title('Decision Boundary')
# Plot the two-class decision scores
twoclass_output = bdt.decision_function(X)
plot_range = (twoclass_output.min(), twoclass_output.max())
plt.subplot(122)
for i, n, c in zip(range(2), class_names, plot_colors):
plt.hist(twoclass_output[y == i],
bins=10,
range=plot_range,
facecolor=c,
label='Class %s' % n,
alpha=.5)
x1, x2, y1, y2 = plt.axis()
plt.axis((x1, x2, y1, y2 * 1.2))
plt.legend(loc='upper right')
plt.ylabel('Samples')
plt.xlabel('Score')
plt.title('Decision Scores')
plt.tight_layout()
plt.subplots_adjust(wspace=0.35)
plt.show()
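As the description notes, a higher-purity class-B subset can be obtained by keeping only samples whose decision score exceeds some threshold. A minimal sketch (the threshold of 0 and the 50-estimator model are illustrative choices, not part of the example above):

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.datasets import make_gaussian_quantiles

Xq, yq = make_gaussian_quantiles(cov=2., n_samples=200, n_features=2,
                                 n_classes=2, random_state=1)
clf = AdaBoostClassifier(n_estimators=50).fit(Xq, yq)  # default base: depth-1 tree
scores = clf.decision_function(Xq)

mask = scores > 0                       # raise the threshold for higher purity
subset = Xq[mask]
purity = float((yq[mask] == 1).mean())  # fraction of true class-1 samples kept
```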
| bsd-3-clause |
revzin/uav-firmware | pc-navsys/graph_data.py | 1 | 3349 | from __future__ import unicode_literals
import json, pprint, time
from matplotlib import rc
font = {'family': 'Times New Roman',
'weight': 'normal',
'size': '15'}
rc('font', **font)
import matplotlib.pyplot as plt
def tosec(jtime):
return jtime["h"] * 3600 + jtime["m"] * 60 + jtime["s"]  # hours weigh 3600 s, not 24
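For reference, a time-of-day to seconds conversion over an h/m/s dict (as stored in this JSON) weights hours by 3600 and minutes by 60:

```python
def time_of_day_to_seconds(jtime):
    # hours contribute 3600 s each, minutes 60 s each
    return jtime["h"] * 3600 + jtime["m"] * 60 + jtime["s"]

one_two_three = time_of_day_to_seconds({"h": 1, "m": 2, "s": 3})
```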
# Data arrays for plotting
mplTimeData = []
mplNumsatData = []
mplHdopData = []
mplHeightData = []
mplLatData = []
mplLonData = []
# JSON file to plot
file = r"E:\Dropbox\Dropbox\ee_logs\out_antenna_outside_window"
with open(file) as hugeJSON:
# load the JSON
d = json.load(hugeJSON)
startsec = 0
# for each navigation record
for member in d["navdatas"]:
navdata = member['navdata']
jtime = navdata['time']
# compute the receiver's seconds (since the start of the day)
totsec = tosec(jtime)
if (totsec == 0):
continue
else:
if (startsec == 0):
# Moment of the first satellite fix; record data starting from it
startsec = totsec
# when to stop
if (totsec - startsec == 250):
break
# save the data from the JSON into the arrays
mplTimeData.append(totsec - startsec)
print(navdata['numsat'], totsec - startsec)
mplNumsatData.append(navdata["numsat"])
mplHdopData.append(navdata["hdop"])
mplHeightData.append(navdata["height"])
mplLatData.append(navdata["lat"])
mplLonData.append(navdata["lon"])
#pprint.pprint(mplTimeData)
mplDLatData = [0.0]
# compute the "noise" in the low-order digits of the degrees
for i in range(1, len(mplLatData)):  # start at 1 to avoid wrapping to the last element
delta = abs(mplLatData[i] - mplLatData[i - 1])
if (delta < 1.0):
mplDLatData.append(delta)
else:
mplDLatData.append(0.0)
mplDLonData = [0.0]
for i in range(1, len(mplLonData)):
delta = abs(mplLonData[i] - mplLonData[i - 1])
if (delta < 0.0001):
mplDLonData.append(delta)
else:
mplDLonData.append(0.0)
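The same jitter arrays can be computed more compactly with numpy.diff (hypothetical latitude values; the leading 0.0 keeps the result aligned with the time axis):

```python
import numpy as np

lat = np.array([55.70001, 55.70003, 55.70002, 55.75000])  # hypothetical fixes
deltas = np.abs(np.diff(lat))                   # length len(lat) - 1
deltas = np.where(deltas < 1.0, deltas, 0.0)    # zero out implausible jumps
dlat = np.concatenate(([0.0], deltas))          # align with mplTimeData
```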
# draw the plots
#pprint.pprint(mplNumsatData)
plt.subplot(511)
#plt.xlabel(u"Time, s")
#plt.ylabel(u"Number of satellites")
plt.plot(mplTimeData, mplNumsatData, '-')
plt.subplot(512)
#plt.xlabel(u"Time, s")
#plt.ylabel(u"Horizontal error, m")
plt.plot(mplTimeData, mplHdopData, '-')
plt.subplot(513)
#plt.xlabel(u"Time, s")
#plt.ylabel(u"Antenna altitude, m")
plt.plot(mplTimeData, mplHeightData, '-')
plt.ylim([125,155]) # Separate y-limits for the altitude plot
plt.subplot(514)
#plt.xlabel(u"Time, s")
#plt.ylabel(u"Latitude instability of the stationary antenna, deg.")
plt.plot(mplTimeData, mplDLatData, '-')
plt.subplot(515)
#plt.xlabel(u"Time, s")
#plt.ylabel(u"Longitude instability of the stationary antenna, deg.")
plt.plot(mplTimeData, mplDLonData, '-')
plt.show()
| gpl-2.0 |
adamhajari/spyre | spyre/example_show_all_the_inputs.py | 1 | 7551 | import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from numpy import pi
import requests
import json
from bokeh.resources import INLINE
try:
from . import server
except Exception:
import server
server.include_df_index = True
class TestApp1(server.App):
colors = [
{"label": "Green", "value": 'g'},
{"label": "Red", "value": 'r', "checked": True},
{"label": "Blue", "value": 'b'},
{"label": "Yellow", "value": 'y'}
]
states = {
"Alabama": "AL",
"Arkansas": "AR",
"Alaska": "AK",
"Nevada": "NV",
"New York": "NY",
"New Jersey": "NJ"
}
title = "Test App 1"
inputs = [
{
"type": 'text',
"label": 'Title',
"value": 'Simple Sine Wave',
"key": 'title',
"action_id": "refresh",
}, {
"type": 'searchbox',
"label": 'Frontend Search',
"options": list(states.keys()),
"key": 'state',
"action_id": "refresh",
}, {
"type": 'searchbox',
"label": 'Backend Search',
"value": 'Foo Fighters',
"output_id": "backend_search",
"key": 'results',
"action_id": "refresh",
}, {
"type": 'radiobuttons',
"label": 'Function',
"options": [
{"label": "Sine", "value": "sin", "checked": True},
{"label": "Cosine", "value": "cos"}
],
"key": 'func_type',
"action_id": "refresh",
}, {
"type": 'checkboxgroup',
"label": 'Axis Labels',
"options": [
{"label": "x-axis", "value": "x", "checked": True},
{"label": "y-axis", "value": "y"}
],
"key": 'axis_label',
"action_id": "refresh",
}, {
"type": 'dropdown',
"label": 'Line Color',
"options": colors,
"key": 'color',
"action_id": "refresh",
"linked_key": 'title',
"linked_type": 'text',
"linked_value": "hey"
}, {
"type": 'slider',
"label": 'frequency',
"key": 'freq',
"value": 2,
"min": 1,
"max": 30,
"action_id": "refresh",
"linked_key": 'title',
"linked_type": 'text',
}
]
tabs = ["Tab1", "Tab2"]
controls = [
{
"type": "upload",
"id": "button3",
"label": "upload"
}, {
"type": "button",
"id": "refresh",
"label": "refresh",
}, {
"type": "button",
"id": "button2",
"label": "download",
},
]
outputs = [
{
"type": "html",
"id": "html1",
"control_id": "refresh",
"tab": "Tab1"
}, {
"type": "plot",
"id": "plot1",
"control_id": "refresh",
"tab": "Tab1"
}, {
"type": "table",
"id": "table1",
"control_id": "refresh",
"tab": "Tab1"
}, {
"type": "plot",
"id": "plot2",
"control_id": "refresh",
"tab": "Tab1"
}, {
"type": "download",
"id": "download_id",
"control_id": "button2",
"on_page_load": False,
}, {
"type": "html",
"id": "html_out",
"control_id": "refresh",
"tab": "Tab1"
}, {
"type": "plot",
"id": "plot3",
"control_id": "refresh",
"tab": "Tab2"
}, {
"type": "table",
"id": "table2",
"control_id": "refresh",
"sortable": True,
"tab": "Tab2"
}, {
"type": "json",
"id": "backend_search",
"control_id": "refresh",
}
]
def __init__(self):
self.upload_data = None
def html1(self, params):
text = ""
if self.upload_data is not None:
text += self.upload_data
return text
def backend_search(self, params):
# the searchbox input will automatically add the query to 'params' as q
q = params.get('q', params['results'])
url = (
"https://api.nextbigsound.com/search/v1/artists/"
"?fields=id,name,category&limit=15&query=%s" % q
)
resp = requests.get(url)
data = json.loads(resp.text)
artists = []
for artist in data['artists']:
artists.append({'label': artist['name'], 'value': artist['id']})
return artists
def storeUpload(self, file):
self.upload_file = file
self.upload_data = file.read()
file.close()
def getTable1Data(self, params):
count = [1, 4, 3]
name = ['<a href="http://adamhajari.com">A</a>', 'B', 'C']
return {'name': name, 'count': count}
def table1(self, params):
data = self.getTable1Data(params)
df = pd.DataFrame(data)
return df
def table2(self, params):
f = float(params['freq'])
x = np.arange(0, 6 * pi, pi / 50)
y1 = np.cos(f * x)
y2 = np.sin(f * x)
df = pd.DataFrame({"cos": y1, "sin": y2}, index=x)
df.index.name = "t"
return df
def plot3(self, params):
df = self.table2(params)
ax = df.plot(title=params['title'])
ax.set_ylabel('y axis')
ax.set_xlabel('x axis')
return ax
def plot1(self, params):
fig = plt.figure() # make figure object
splt = fig.add_subplot(1, 1, 1)
f = float(params['freq'])
title = "%s: %s" % (params['title'], params['state'])
axis_label = params['axis_label']
color = params['color']
func_type = params['func_type']
x = np.arange(0, 6 * pi, pi / 50)
splt.set_title(title)
for axis in axis_label:
if axis == "x":
splt.set_xlabel('x axis')
if axis == "y":
splt.set_ylabel('y axis')
if func_type == 'cos':
y = np.cos(f * x)
else:
y = np.sin(f * x)
splt.plot(x, y, color=color) # sine wave
return fig
def plot2(self, params):
title = params['results']
data = self.table1(params)
fig = plt.figure() # make figure object
splt = fig.add_subplot(1, 1, 1)
splt.set_title(title)
ind = np.arange(len(data['name']))
width = 0.85
splt.bar(ind, data['count'], width)
splt.set_xticks(ind + width / 2)
splt.set_xticklabels(["A", "B", "C"])
return fig
def html_out(self, params):
func_type = params['func_type']
axis_label = params['axis_label']
color = params['color']
freq = params['freq']
html = (
"function type: {} <br>axis label: {}<br>color: {}<br>frequency: {}"
.format(func_type, axis_label, color, freq)
)
return html
def download_id(self, params):
return self.table2(params)
def noOutput(self, input_params):
return 0
def getCustomCSS(self):
return INLINE.css_raw[0]
if __name__ == '__main__':
app = TestApp1()
app.launch(port=9096)
| mit |
marcharper/python-ternary | setup.py | 1 | 1160 | import setuptools
from distutils.core import setup
version = "1.0.8"
with open('README.txt') as file:
long_description = file.read()
classifiers = [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Natural Language :: English",
"Programming Language :: Python",
"Programming Language :: Python :: 3.6",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Topic :: Scientific/Engineering :: Visualization"
]
setup(
name="python-ternary",
version=version,
packages=['ternary'],
install_requires=["matplotlib>=2"],
author="Marc Harper and contributors",
author_email="marc.harper@gmail.com",
classifiers=classifiers,
description="Make ternary plots in python with matplotlib",
long_description=long_description,
keywords="matplotlib ternary plotting",
license="MIT",
url="https://github.com/marcharper/python-ternary",
download_url="https://github.com/marcharper/python-ternary/tarball/{}".format(version),
)
| mit |
OshynSong/scikit-learn | sklearn/utils/metaestimators.py | 283 | 2353 | """Utilities for meta-estimators"""
# Author: Joel Nothman
# Andreas Mueller
# Licence: BSD
from operator import attrgetter
from functools import update_wrapper
__all__ = ['if_delegate_has_method']
class _IffHasAttrDescriptor(object):
"""Implements a conditional property using the descriptor protocol.
Using this class to create a decorator will raise an ``AttributeError``
if the ``attribute_name`` is not present on the base object.
This allows ducktyping of the decorated method based on ``attribute_name``.
See https://docs.python.org/3/howto/descriptor.html for an explanation of
descriptors.
"""
def __init__(self, fn, attribute_name):
self.fn = fn
self.get_attribute = attrgetter(attribute_name)
# update the docstring of the descriptor
update_wrapper(self, fn)
def __get__(self, obj, type=None):
# raise an AttributeError if the attribute is not present on the object
if obj is not None:
# delegate only on instances, not the classes.
# this is to allow access to the docstrings.
self.get_attribute(obj)
# lambda, but not partial, allows help() to work with update_wrapper
out = lambda *args, **kwargs: self.fn(obj, *args, **kwargs)
# update the docstring of the returned function
update_wrapper(out, self.fn)
return out
def if_delegate_has_method(delegate):
"""Create a decorator for methods that are delegated to a sub-estimator
This enables ducktyping by hasattr returning True according to the
sub-estimator.
>>> from sklearn.utils.metaestimators import if_delegate_has_method
>>>
>>>
>>> class MetaEst(object):
... def __init__(self, sub_est):
... self.sub_est = sub_est
...
... @if_delegate_has_method(delegate='sub_est')
... def predict(self, X):
... return self.sub_est.predict(X)
...
>>> class HasPredict(object):
... def predict(self, X):
... return X.sum(axis=1)
...
>>> class HasNoPredict(object):
... pass
...
>>> hasattr(MetaEst(HasPredict()), 'predict')
True
>>> hasattr(MetaEst(HasNoPredict()), 'predict')
False
"""
return lambda fn: _IffHasAttrDescriptor(fn, '%s.%s' % (delegate, fn.__name__))
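A stripped-down version of the descriptor, wired up by hand, shows the hasattr-based ducktyping in action (class names here are hypothetical, not part of scikit-learn):

```python
from functools import update_wrapper
from operator import attrgetter

class IffHasAttr:
    """Minimal re-implementation of the conditional-property descriptor."""
    def __init__(self, fn, attribute_name):
        self.fn = fn
        self.get_attribute = attrgetter(attribute_name)
        update_wrapper(self, fn)

    def __get__(self, obj, type=None):
        if obj is not None:
            self.get_attribute(obj)   # AttributeError if the delegate lacks it
        out = lambda *args, **kwargs: self.fn(obj, *args, **kwargs)
        update_wrapper(out, self.fn)
        return out

class HasPredict:
    def predict(self, X):
        return X

class Meta:
    def __init__(self, sub_est):
        self.sub_est = sub_est

def _predict(self, X):
    return self.sub_est.predict(X)

Meta.predict = IffHasAttr(_predict, 'sub_est.predict')

can_predict = hasattr(Meta(HasPredict()), 'predict')   # True
cannot = hasattr(Meta(object()), 'predict')            # False
```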
| bsd-3-clause |
jarthurgross/bloch_distribution | scripts/plot_parallelogram_area_q12.py | 1 | 1437 | #!/usr/bin/python3
import matplotlib.pyplot as plt
from matplotlib import cm, colors
import numpy as np
from bloch_distribution.invert_angles import parallelogram_area_q12
from my_cms import husl_hot
# Parameters
epsilon = 0.575
q1_min = -4
q1_max = 4
q1_samples = 512
q2_min = -4
q2_max = 4
q2_samples = 512
Q1 = np.linspace(q1_min, q1_max, q1_samples)
Q2 = np.linspace(q2_min, q2_max, q2_samples)
Q1, Q2 = np.meshgrid(Q1, Q2)
Area = parallelogram_area_q12(Q1, Q2, epsilon)
Area_symmetry = Area - parallelogram_area_q12(Q1, -Q2, epsilon)
norm0 = colors.Normalize()
norm1 = colors.Normalize()
cmap0 = husl_hot
cmap1 = cm.coolwarm
fontsize = 20
fig = plt.figure(figsize=(16, 8))
fig.suptitle(r'$\epsilon=' + str(epsilon) + '$', fontsize=fontsize)
ax0 = plt.subplot(1, 2, 1)
ax0.pcolormesh(Q1, Q2, Area, cmap=cmap0, norm=norm0, shading='gouraud')
ax0.plot([-4, 4], [0, 0], color='w')
ax0.set_xlabel(r'$q_1$', fontsize=fontsize)
ax0.set_ylabel(r'$q_2$', fontsize=fontsize)
m0 = cm.ScalarMappable(cmap=cmap0, norm=norm0)
m0.set_array(Area)
plt.colorbar(m0)
ax1 = plt.subplot(1, 2, 2)
ax1.pcolormesh(Q1, Q2, Area_symmetry, cmap=cmap1, norm=norm1,
shading='gouraud')
ax1.set_xlabel(r'$q_1$', fontsize=fontsize)
ax1.set_ylabel(r'$q_2$', fontsize=fontsize)
m1 = cm.ScalarMappable(cmap=cmap1, norm=norm1)
m1.set_array(Area_symmetry)
plt.colorbar(m1)
plt.savefig('plots/parallelogram_area_q12_e' + str(epsilon) + '.png')
| mit |
oemof/examples | oemof_examples/oemof.solph/v0.3.x/generic_chp/mchp.py | 2 | 3042 | # -*- coding: utf-8 -*-
"""
General description
-------------------
Example that illustrates how to use custom component `GenericCHP` can be used.
In this case it is used to model a motoric chp.
Installation requirements
-------------------------
This example requires oemof v0.3.x. Install it with:
pip install 'oemof>=0.3,<0.4'
"""
__copyright__ = "oemof developer group"
__license__ = "GPLv3"
import os
import pandas as pd
import oemof.solph as solph
from oemof.network import Node
from oemof.outputlib import processing, views
try:
import matplotlib.pyplot as plt
except ImportError:
plt = None
# read sequence data
full_filename = os.path.join(os.path.dirname(__file__),
'generic_chp.csv')
data = pd.read_csv(full_filename, sep=",")
# select periods
periods = len(data)-1
# create an energy system
idx = pd.date_range('1/1/2017', periods=periods, freq='H')
es = solph.EnergySystem(timeindex=idx)
Node.registry = es
# resources
bgas = solph.Bus(label='bgas')
rgas = solph.Source(label='rgas', outputs={bgas: solph.Flow()})
# heat
bth = solph.Bus(label='bth')
# dummy source at high costs that serves the residual load
source_th = solph.Source(label='source_th',
outputs={bth: solph.Flow(variable_costs=1000)})
demand_th = solph.Sink(label='demand_th', inputs={bth: solph.Flow(fixed=True,
actual_value=data['demand_th'], nominal_value=200)})
# power
bel = solph.Bus(label='bel')
demand_el = solph.Sink(label='demand_el', inputs={bel: solph.Flow(
variable_costs=data['price_el'])})
# motoric chp
mchp = solph.components.GenericCHP(
label='motoric_chp',
fuel_input={bgas: solph.Flow(
H_L_FG_share_max=[0.18 for p in range(0, periods)],
H_L_FG_share_min=[0.41 for p in range(0, periods)])},
electrical_output={bel: solph.Flow(
P_max_woDH=[200 for p in range(0, periods)],
P_min_woDH=[100 for p in range(0, periods)],
Eta_el_max_woDH=[0.44 for p in range(0, periods)],
Eta_el_min_woDH=[0.40 for p in range(0, periods)])},
heat_output={bth: solph.Flow(
Q_CW_min=[0 for p in range(0, periods)])},
Beta=[0 for p in range(0, periods)],
fixed_costs=0, back_pressure=False)
# create an optimization problem and solve it
om = solph.Model(es)
# debugging
# om.write('generic_chp.lp', io_options={'symbolic_solver_labels': True})
# solve model
om.solve(solver='cbc', solve_kwargs={'tee': True})
# create result object
results = processing.results(om)
# plot data
if plt is not None:
# plot PQ diagram from component results
data = results[(mchp, None)]['sequences']
ax = data.plot(kind='scatter', x='Q', y='P', grid=True)
ax.set_xlabel('Q (MW)')
ax.set_ylabel('P (MW)')
plt.show()
# plot thermal bus
data = views.node(results, 'bth')['sequences']
ax = data.plot(kind='line', drawstyle='steps-post', grid=True)
ax.set_xlabel('Time (h)')
ax.set_ylabel('Q (MW)')
plt.show()
| gpl-3.0 |
felixcheung/vagrant-projects | Spark-IPython-32bit/ipython-pyspark.py | 4 | 3462 | #!/usr/bin/env python
# https://github.com/felixcheung/vagrant-projects
import getpass
import glob
import inspect
import os
import platform
import re
import subprocess
import sys
import time
#-----------------------
# PySpark
#
master = 'local[*]'
num_executors = 12 #24
executor_cores = 2
executor_memory = '1g' #10g
pyspark_submit_args = os.getenv('PYSPARK_SUBMIT_ARGS', None)
if not pyspark_submit_args:
pyspark_submit_args = '--num-executors %d --executor-cores %d --executor-memory %s' % (num_executors, executor_cores, executor_memory)
pyspark_submit_args = '--master %s %s' % (master, pyspark_submit_args)
if not os.getenv('PYSPARK_PYTHON', None):
os.environ['PYSPARK_PYTHON'] = sys.executable
os.environ['PYSPARK_DRIVER_PYTHON']='ipython' # PySpark Driver (ie. IPython)
profile_name = 'pyspark'
os.environ['PYSPARK_DRIVER_PYTHON_OPTS'] = 'notebook --profile=%s' % profile_name
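The submit-args assembly above only interpolates the resource settings into a flag string; for the hard-coded defaults it evaluates to the string checked below:

```python
master = 'local[*]'
num_executors = 12
executor_cores = 2
executor_memory = '1g'

args = '--num-executors %d --executor-cores %d --executor-memory %s' % (
    num_executors, executor_cores, executor_memory)
args = '--master %s %s' % (master, args)
```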
#-----------------------
# IPython Notebook
#
ipython_notebook_config_template = '''c = get_config()
c.NotebookApp.ip = '{ip}'
c.NotebookApp.port = {port}
c.NotebookApp.open_browser = False
'''
pyspark_setup_template = '''import os
if not os.getenv('PYSPARK_SUBMIT_ARGS', None):
raise ValueError('PYSPARK_SUBMIT_ARGS environment variable is not set')
spark_home = os.getenv('SPARK_HOME', None)
if not spark_home:
raise ValueError('SPARK_HOME environment variable is not set')
'''
ip = '*' # Warning: this is potentially insecure
port = 1088
#-----------------------
# Create profile and start
#
try:
ipython_profile_path = os.popen('ipython locate').read().rstrip('\n') + '/profile_%s' % profile_name
setup_py_path = ipython_profile_path + '/startup/00-pyspark-setup.py'
ipython_notebook_config_path = ipython_profile_path + '/ipython_notebook_config.py'
ipython_kernel_config_path = ipython_profile_path + '/ipython_kernel_config.py'
if not os.path.exists(ipython_profile_path):
print 'Creating IPython Notebook profile\n'
cmd = 'ipython profile create %s' % profile_name
os.system(cmd)
print '\n'
if not os.path.exists(setup_py_path):
print 'Writing PySpark setup\n'
setup_file = open(setup_py_path, 'w')
setup_file.write(pyspark_setup_template)
setup_file.close()
os.chmod(setup_py_path, 0600)
# matplotlib inline
kernel_config = open(ipython_kernel_config_path).read()
if "c.IPKernelApp.matplotlib = 'inline'" not in kernel_config:
print 'Writing IPython kernel config\n'
new_kernel_config = kernel_config.replace('# c.IPKernelApp.matplotlib = None', "c.IPKernelApp.matplotlib = 'inline'")
kernel_file = open(ipython_kernel_config_path, 'w')
kernel_file.write(new_kernel_config)
kernel_file.close()
os.chmod(ipython_kernel_config_path, 0600)
if not os.path.exists(ipython_notebook_config_path) or 'open_browser = False' not in open(ipython_notebook_config_path).read():
print 'Writing IPython Notebook config\n'
config_file = open(ipython_notebook_config_path, 'w')
config_file.write(ipython_notebook_config_template.format(ip = ip, port = port))
config_file.close()
os.chmod(ipython_notebook_config_path, 0600)
print 'Launching PySpark with IPython Notebook\n'
cmd = 'pyspark %s' % pyspark_submit_args
os.system(cmd)
sys.exit(0)
except KeyboardInterrupt:
print 'Aborted\n'
sys.exit(1)
| apache-2.0 |
RachitKansal/scikit-learn | sklearn/cluster/tests/test_birch.py | 342 | 5603 | """
Tests for the birch clustering algorithm.
"""
from scipy import sparse
import numpy as np
from sklearn.cluster.tests.common import generate_clustered_data
from sklearn.cluster.birch import Birch
from sklearn.cluster.hierarchical import AgglomerativeClustering
from sklearn.datasets import make_blobs
from sklearn.linear_model import ElasticNet
from sklearn.metrics import pairwise_distances_argmin, v_measure_score
from sklearn.utils.testing import assert_greater_equal
from sklearn.utils.testing import assert_equal
from sklearn.utils.testing import assert_greater
from sklearn.utils.testing import assert_almost_equal
from sklearn.utils.testing import assert_array_equal
from sklearn.utils.testing import assert_raises
from sklearn.utils.testing import assert_warns
def test_n_samples_leaves_roots():
# Sanity check for the number of samples in leaves and roots
X, y = make_blobs(n_samples=10)
brc = Birch()
brc.fit(X)
n_samples_root = sum([sc.n_samples_ for sc in brc.root_.subclusters_])
n_samples_leaves = sum([sc.n_samples_ for leaf in brc._get_leaves()
for sc in leaf.subclusters_])
assert_equal(n_samples_leaves, X.shape[0])
assert_equal(n_samples_root, X.shape[0])
def test_partial_fit():
# Test that fit is equivalent to calling partial_fit multiple times
X, y = make_blobs(n_samples=100)
brc = Birch(n_clusters=3)
brc.fit(X)
brc_partial = Birch(n_clusters=None)
brc_partial.partial_fit(X[:50])
brc_partial.partial_fit(X[50:])
assert_array_equal(brc_partial.subcluster_centers_,
brc.subcluster_centers_)
# Test that same global labels are obtained after calling partial_fit
# with None
brc_partial.set_params(n_clusters=3)
brc_partial.partial_fit(None)
assert_array_equal(brc_partial.subcluster_labels_, brc.subcluster_labels_)
def test_birch_predict():
# Test the predict method predicts the nearest centroid.
rng = np.random.RandomState(0)
X = generate_clustered_data(n_clusters=3, n_features=3,
n_samples_per_cluster=10)
# n_samples * n_samples_per_cluster
shuffle_indices = np.arange(30)
rng.shuffle(shuffle_indices)
X_shuffle = X[shuffle_indices, :]
brc = Birch(n_clusters=4, threshold=1.)
brc.fit(X_shuffle)
centroids = brc.subcluster_centers_
assert_array_equal(brc.labels_, brc.predict(X_shuffle))
nearest_centroid = pairwise_distances_argmin(X_shuffle, centroids)
assert_almost_equal(v_measure_score(nearest_centroid, brc.labels_), 1.0)
def test_n_clusters():
# Test that n_clusters param works properly
X, y = make_blobs(n_samples=100, centers=10)
brc1 = Birch(n_clusters=10)
brc1.fit(X)
assert_greater(len(brc1.subcluster_centers_), 10)
assert_equal(len(np.unique(brc1.labels_)), 10)
# Test that n_clusters = Agglomerative Clustering gives
# the same results.
gc = AgglomerativeClustering(n_clusters=10)
brc2 = Birch(n_clusters=gc)
brc2.fit(X)
assert_array_equal(brc1.subcluster_labels_, brc2.subcluster_labels_)
assert_array_equal(brc1.labels_, brc2.labels_)
# Test that the wrong global clustering step raises an Error.
clf = ElasticNet()
brc3 = Birch(n_clusters=clf)
assert_raises(ValueError, brc3.fit, X)
# Test that a small number of clusters raises a warning.
brc4 = Birch(threshold=10000.)
assert_warns(UserWarning, brc4.fit, X)
def test_sparse_X():
# Test that sparse and dense data give same results
X, y = make_blobs(n_samples=100, centers=10)
brc = Birch(n_clusters=10)
brc.fit(X)
csr = sparse.csr_matrix(X)
brc_sparse = Birch(n_clusters=10)
brc_sparse.fit(csr)
assert_array_equal(brc.labels_, brc_sparse.labels_)
assert_array_equal(brc.subcluster_centers_,
brc_sparse.subcluster_centers_)
def check_branching_factor(node, branching_factor):
subclusters = node.subclusters_
assert_greater_equal(branching_factor, len(subclusters))
for cluster in subclusters:
if cluster.child_:
check_branching_factor(cluster.child_, branching_factor)
def test_branching_factor():
# Test that nodes have at max branching_factor number of subclusters
X, y = make_blobs()
branching_factor = 9
# Purposefully set a low threshold to maximize the subclusters.
brc = Birch(n_clusters=None, branching_factor=branching_factor,
threshold=0.01)
brc.fit(X)
check_branching_factor(brc.root_, branching_factor)
brc = Birch(n_clusters=3, branching_factor=branching_factor,
threshold=0.01)
brc.fit(X)
check_branching_factor(brc.root_, branching_factor)
# Raises error when branching_factor is set to one.
brc = Birch(n_clusters=None, branching_factor=1, threshold=0.01)
assert_raises(ValueError, brc.fit, X)
def check_threshold(birch_instance, threshold):
"""Use the leaf linked list for traversal"""
current_leaf = birch_instance.dummy_leaf_.next_leaf_
while current_leaf:
subclusters = current_leaf.subclusters_
for sc in subclusters:
assert_greater_equal(threshold, sc.radius)
current_leaf = current_leaf.next_leaf_
def test_threshold():
# Test that the leaf subclusters have a threshold lesser than radius
X, y = make_blobs(n_samples=80, centers=4)
brc = Birch(threshold=0.5, n_clusters=None)
brc.fit(X)
check_threshold(brc, 0.5)
brc = Birch(threshold=5.0, n_clusters=None)
brc.fit(X)
check_threshold(brc, 5.)
| bsd-3-clause |
chugunovyar/factoryForBuild | env/lib/python2.7/site-packages/matplotlib/backends/backend_macosx.py | 10 | 7837 |
from __future__ import (absolute_import, division, print_function,
unicode_literals)
import six
import os
from matplotlib._pylab_helpers import Gcf
from matplotlib.backend_bases import FigureManagerBase, FigureCanvasBase, \
NavigationToolbar2, TimerBase
from matplotlib.backend_bases import ShowBase
from matplotlib.figure import Figure
from matplotlib import rcParams
from matplotlib.widgets import SubplotTool
import matplotlib
from matplotlib.backends import _macosx
from .backend_agg import RendererAgg, FigureCanvasAgg
class Show(ShowBase):
def mainloop(self):
_macosx.show()
show = Show()
########################################################################
#
# The following functions and classes are for pylab and implement
# window/figure managers, etc...
#
########################################################################
def draw_if_interactive():
"""
For performance reasons, we don't want to redraw the figure after
each draw command. Instead, we mark the figure as invalid, so that
it will be redrawn as soon as the event loop resumes via PyOS_InputHook.
This function should be called after each draw event, even if
matplotlib is not running interactively.
"""
if matplotlib.is_interactive():
figManager = Gcf.get_active()
if figManager is not None:
figManager.canvas.invalidate()
def new_figure_manager(num, *args, **kwargs):
"""
Create a new figure manager instance
"""
FigureClass = kwargs.pop('FigureClass', Figure)
figure = FigureClass(*args, **kwargs)
return new_figure_manager_given_figure(num, figure)
def new_figure_manager_given_figure(num, figure):
"""
Create a new figure manager instance for the given figure.
"""
canvas = FigureCanvasMac(figure)
manager = FigureManagerMac(canvas, num)
return manager
class TimerMac(_macosx.Timer, TimerBase):
'''
Subclass of :class:`backend_bases.TimerBase` that uses CoreFoundation
run loops for timer events.
Attributes:
* interval: The time between timer events in milliseconds. Default
is 1000 ms.
* single_shot: Boolean flag indicating whether this timer should
operate as single shot (run once and then stop). Defaults to False.
* callbacks: Stores list of (func, args) tuples that will be called
upon timer events. This list can be manipulated directly, or the
functions add_callback and remove_callback can be used.
'''
# completely implemented at the C-level (in _macosx.Timer)
class FigureCanvasMac(_macosx.FigureCanvas, FigureCanvasAgg):
"""
The canvas the figure renders into. Calls the draw and print fig
methods, creates the renderers, etc...
Public attribute
figure - A Figure instance
Events such as button presses, mouse movements, and key presses
are handled in the C code and the base class methods
button_press_event, button_release_event, motion_notify_event,
key_press_event, and key_release_event are called from there.
"""
def __init__(self, figure):
FigureCanvasBase.__init__(self, figure)
width, height = self.get_width_height()
_macosx.FigureCanvas.__init__(self, width, height)
self._device_scale = 1.0
def _set_device_scale(self, value):
if self._device_scale != value:
self.figure.dpi = self.figure.dpi / self._device_scale * value
self._device_scale = value
def get_renderer(self, cleared=False):
l, b, w, h = self.figure.bbox.bounds
key = w, h, self.figure.dpi
try:
self._lastKey, self._renderer
except AttributeError:
need_new_renderer = True
else:
need_new_renderer = (self._lastKey != key)
if need_new_renderer:
self._renderer = RendererAgg(w, h, self.figure.dpi)
self._lastKey = key
elif cleared:
self._renderer.clear()
return self._renderer
def _draw(self):
renderer = self.get_renderer()
if not self.figure.stale:
return renderer
self.figure.draw(renderer)
return renderer
def draw(self):
self.invalidate()
def draw_idle(self, *args, **kwargs):
self.invalidate()
def blit(self, bbox):
self.invalidate()
def resize(self, width, height):
dpi = self.figure.dpi
width /= dpi
height /= dpi
self.figure.set_size_inches(width * self._device_scale,
height * self._device_scale,
forward=False)
FigureCanvasBase.resize_event(self)
self.draw_idle()
def new_timer(self, *args, **kwargs):
"""
Creates a new backend-specific subclass of :class:`backend_bases.Timer`.
This is useful for getting periodic events through the backend's native
event loop. Implemented only for backends with GUIs.
optional arguments:
*interval*
Timer interval in milliseconds
*callbacks*
Sequence of (func, args, kwargs) where func(*args, **kwargs) will
be executed by the timer every *interval*.
"""
return TimerMac(*args, **kwargs)
class FigureManagerMac(_macosx.FigureManager, FigureManagerBase):
"""
Wrap everything up into a window for the pylab interface
"""
def __init__(self, canvas, num):
FigureManagerBase.__init__(self, canvas, num)
title = "Figure %d" % num
_macosx.FigureManager.__init__(self, canvas, title)
if rcParams['toolbar']=='toolbar2':
self.toolbar = NavigationToolbar2Mac(canvas)
else:
self.toolbar = None
if self.toolbar is not None:
self.toolbar.update()
def notify_axes_change(fig):
'this will be called whenever the current axes is changed'
if self.toolbar is not None: self.toolbar.update()
self.canvas.figure.add_axobserver(notify_axes_change)
if matplotlib.is_interactive():
self.show()
self.canvas.draw_idle()
def close(self):
Gcf.destroy(self.num)
class NavigationToolbar2Mac(_macosx.NavigationToolbar2, NavigationToolbar2):
def __init__(self, canvas):
NavigationToolbar2.__init__(self, canvas)
def _init_toolbar(self):
basedir = os.path.join(rcParams['datapath'], "images")
_macosx.NavigationToolbar2.__init__(self, basedir)
def draw_rubberband(self, event, x0, y0, x1, y1):
self.canvas.set_rubberband(int(x0), int(y0), int(x1), int(y1))
def release(self, event):
self.canvas.remove_rubberband()
def set_cursor(self, cursor):
_macosx.set_cursor(cursor)
def save_figure(self, *args):
filename = _macosx.choose_save_file('Save the figure',
self.canvas.get_default_filename())
if filename is None: # Cancel
return
self.canvas.print_figure(filename)
def prepare_configure_subplots(self):
toolfig = Figure(figsize=(6,3))
canvas = FigureCanvasMac(toolfig)
toolfig.subplots_adjust(top=0.9)
tool = SubplotTool(self.canvas.figure, toolfig)
return canvas
def set_message(self, message):
_macosx.NavigationToolbar2.set_message(self, message.encode('utf-8'))
def dynamic_update(self):
self.canvas.draw_idle()
########################################################################
#
# Now just provide the standard names that backend.__init__ is expecting
#
########################################################################
FigureCanvas = FigureCanvasMac
FigureManager = FigureManagerMac
| gpl-3.0 |
ClimbsRocks/scikit-learn | sklearn/tests/test_discriminant_analysis.py | 15 | 13124 |
import sys
import numpy as np
from nose import SkipTest
from sklearn.utils.testing import assert_array_equal
from sklearn.utils.testing import assert_array_almost_equal
from sklearn.utils.testing import assert_equal
from sklearn.utils.testing import assert_almost_equal
from sklearn.utils.testing import assert_true
from sklearn.utils.testing import assert_raises
from sklearn.utils.testing import assert_raise_message
from sklearn.utils.testing import assert_warns
from sklearn.utils.testing import assert_greater
from sklearn.utils.testing import ignore_warnings
from sklearn.datasets import make_blobs
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.discriminant_analysis import _cov
# import reload
version = sys.version_info
if version[0] == 3:
# Python 3+ import for reload. Builtin in Python2
if version[1] == 3:
reload = None
else:
from importlib import reload
# Data is just 6 separable points in the plane
X = np.array([[-2, -1], [-1, -1], [-1, -2], [1, 1], [1, 2], [2, 1]], dtype='f')
y = np.array([1, 1, 1, 2, 2, 2])
y3 = np.array([1, 1, 2, 2, 3, 3])
# Degenerate data with only one feature (still should be separable)
X1 = np.array([[-2, ], [-1, ], [-1, ], [1, ], [1, ], [2, ]], dtype='f')
# Data is just 9 separable points in the plane
X6 = np.array([[0, 0], [-2, -2], [-2, -1], [-1, -1], [-1, -2],
[1, 3], [1, 2], [2, 1], [2, 2]])
y6 = np.array([1, 1, 1, 1, 1, 2, 2, 2, 2])
y7 = np.array([1, 2, 3, 2, 3, 1, 2, 3, 1])
# Degenerate data with 1 feature (still should be separable)
X7 = np.array([[-3, ], [-2, ], [-1, ], [-1, ], [0, ], [1, ], [1, ],
[2, ], [3, ]])
# Data that has zero variance in one dimension and needs regularization
X2 = np.array([[-3, 0], [-2, 0], [-1, 0], [-1, 0], [0, 0], [1, 0], [1, 0],
[2, 0], [3, 0]])
# One element class
y4 = np.array([1, 1, 1, 1, 1, 1, 1, 1, 2])
# Data with less samples in a class than n_features
X5 = np.c_[np.arange(8), np.zeros((8, 3))]
y5 = np.array([0, 0, 0, 0, 0, 1, 1, 1])
solver_shrinkage = [('svd', None), ('lsqr', None), ('eigen', None),
('lsqr', 'auto'), ('lsqr', 0), ('lsqr', 0.43),
('eigen', 'auto'), ('eigen', 0), ('eigen', 0.43)]
def test_lda_predict():
# Test LDA classification.
# This checks that LDA implements fit and predict and returns correct
# values for simple toy data.
for test_case in solver_shrinkage:
solver, shrinkage = test_case
clf = LinearDiscriminantAnalysis(solver=solver, shrinkage=shrinkage)
y_pred = clf.fit(X, y).predict(X)
assert_array_equal(y_pred, y, 'solver %s' % solver)
# Assert that it works with 1D data
y_pred1 = clf.fit(X1, y).predict(X1)
assert_array_equal(y_pred1, y, 'solver %s' % solver)
# Test probability estimates
y_proba_pred1 = clf.predict_proba(X1)
assert_array_equal((y_proba_pred1[:, 1] > 0.5) + 1, y,
'solver %s' % solver)
y_log_proba_pred1 = clf.predict_log_proba(X1)
assert_array_almost_equal(np.exp(y_log_proba_pred1), y_proba_pred1,
8, 'solver %s' % solver)
# Primarily test for commit 2f34950 -- "reuse" of priors
y_pred3 = clf.fit(X, y3).predict(X)
# LDA shouldn't be able to separate those
assert_true(np.any(y_pred3 != y3), 'solver %s' % solver)
# Test invalid shrinkages
clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage=-0.2231)
assert_raises(ValueError, clf.fit, X, y)
clf = LinearDiscriminantAnalysis(solver="eigen", shrinkage="dummy")
assert_raises(ValueError, clf.fit, X, y)
clf = LinearDiscriminantAnalysis(solver="svd", shrinkage="auto")
assert_raises(NotImplementedError, clf.fit, X, y)
# Test unknown solver
clf = LinearDiscriminantAnalysis(solver="dummy")
assert_raises(ValueError, clf.fit, X, y)
def test_lda_priors():
# Test priors (negative priors)
priors = np.array([0.5, -0.5])
clf = LinearDiscriminantAnalysis(priors=priors)
msg = "priors must be non-negative"
assert_raise_message(ValueError, msg, clf.fit, X, y)
# Test that priors passed as a list are correctly handled (run to see if
# failure)
clf = LinearDiscriminantAnalysis(priors=[0.5, 0.5])
clf.fit(X, y)
# Test that priors always sum to 1
priors = np.array([0.5, 0.6])
prior_norm = np.array([0.45, 0.55])
clf = LinearDiscriminantAnalysis(priors=priors)
assert_warns(UserWarning, clf.fit, X, y)
assert_array_almost_equal(clf.priors_, prior_norm, 2)
def test_lda_coefs():
# Test if the coefficients of the solvers are approximately the same.
n_features = 2
n_classes = 2
n_samples = 1000
X, y = make_blobs(n_samples=n_samples, n_features=n_features,
centers=n_classes, random_state=11)
clf_lda_svd = LinearDiscriminantAnalysis(solver="svd")
clf_lda_lsqr = LinearDiscriminantAnalysis(solver="lsqr")
clf_lda_eigen = LinearDiscriminantAnalysis(solver="eigen")
clf_lda_svd.fit(X, y)
clf_lda_lsqr.fit(X, y)
clf_lda_eigen.fit(X, y)
assert_array_almost_equal(clf_lda_svd.coef_, clf_lda_lsqr.coef_, 1)
assert_array_almost_equal(clf_lda_svd.coef_, clf_lda_eigen.coef_, 1)
assert_array_almost_equal(clf_lda_eigen.coef_, clf_lda_lsqr.coef_, 1)
def test_lda_transform():
# Test LDA transform.
clf = LinearDiscriminantAnalysis(solver="svd", n_components=1)
X_transformed = clf.fit(X, y).transform(X)
assert_equal(X_transformed.shape[1], 1)
clf = LinearDiscriminantAnalysis(solver="eigen", n_components=1)
X_transformed = clf.fit(X, y).transform(X)
assert_equal(X_transformed.shape[1], 1)
clf = LinearDiscriminantAnalysis(solver="lsqr", n_components=1)
clf.fit(X, y)
msg = "transform not implemented for 'lsqr'"
assert_raise_message(NotImplementedError, msg, clf.transform, X)
def test_lda_explained_variance_ratio():
# Test if the sum of the normalized eigen vectors values equals 1,
# Also tests whether the explained_variance_ratio_ formed by the
# eigen solver is the same as the explained_variance_ratio_ formed
# by the svd solver
state = np.random.RandomState(0)
X = state.normal(loc=0, scale=100, size=(40, 20))
y = state.randint(0, 3, size=(40,))
clf_lda_eigen = LinearDiscriminantAnalysis(solver="eigen")
clf_lda_eigen.fit(X, y)
assert_almost_equal(clf_lda_eigen.explained_variance_ratio_.sum(), 1.0, 3)
clf_lda_svd = LinearDiscriminantAnalysis(solver="svd")
clf_lda_svd.fit(X, y)
assert_almost_equal(clf_lda_svd.explained_variance_ratio_.sum(), 1.0, 3)
tested_length = min(clf_lda_svd.explained_variance_ratio_.shape[0],
clf_lda_eigen.explained_variance_ratio_.shape[0])
# NOTE: clf_lda_eigen.explained_variance_ratio_ is not of n_components
# length. Make it the same length as clf_lda_svd.explained_variance_ratio_
# before comparison.
assert_array_almost_equal(clf_lda_svd.explained_variance_ratio_,
clf_lda_eigen.explained_variance_ratio_[:tested_length])
def test_lda_orthogonality():
# arrange four classes with their means in a kite-shaped pattern
# the longer distance should be transformed to the first component, and
# the shorter distance to the second component.
means = np.array([[0, 0, -1], [0, 2, 0], [0, -2, 0], [0, 0, 5]])
# We construct perfectly symmetric distributions, so the LDA can estimate
# precise means.
scatter = np.array([[0.1, 0, 0], [-0.1, 0, 0], [0, 0.1, 0], [0, -0.1, 0],
[0, 0, 0.1], [0, 0, -0.1]])
X = (means[:, np.newaxis, :] + scatter[np.newaxis, :, :]).reshape((-1, 3))
y = np.repeat(np.arange(means.shape[0]), scatter.shape[0])
# Fit LDA and transform the means
clf = LinearDiscriminantAnalysis(solver="svd").fit(X, y)
means_transformed = clf.transform(means)
d1 = means_transformed[3] - means_transformed[0]
d2 = means_transformed[2] - means_transformed[1]
d1 /= np.sqrt(np.sum(d1 ** 2))
d2 /= np.sqrt(np.sum(d2 ** 2))
# the transformed within-class covariance should be the identity matrix
assert_almost_equal(np.cov(clf.transform(scatter).T), np.eye(2))
# the means of classes 0 and 3 should lie on the first component
assert_almost_equal(np.abs(np.dot(d1[:2], [1, 0])), 1.0)
# the means of classes 1 and 2 should lie on the second component
assert_almost_equal(np.abs(np.dot(d2[:2], [0, 1])), 1.0)
def test_lda_scaling():
# Test if classification works correctly with differently scaled features.
n = 100
rng = np.random.RandomState(1234)
# use uniform distribution of features to make sure there is absolutely no
# overlap between classes.
x1 = rng.uniform(-1, 1, (n, 3)) + [-10, 0, 0]
x2 = rng.uniform(-1, 1, (n, 3)) + [10, 0, 0]
x = np.vstack((x1, x2)) * [1, 100, 10000]
y = [-1] * n + [1] * n
for solver in ('svd', 'lsqr', 'eigen'):
clf = LinearDiscriminantAnalysis(solver=solver)
# should be able to separate the data perfectly
assert_equal(clf.fit(x, y).score(x, y), 1.0,
'using covariance: %s' % solver)
def test_qda():
# QDA classification.
# This checks that QDA implements fit and predict and returns
# correct values for a simple toy dataset.
clf = QuadraticDiscriminantAnalysis()
y_pred = clf.fit(X6, y6).predict(X6)
assert_array_equal(y_pred, y6)
# Assure that it works with 1D data
y_pred1 = clf.fit(X7, y6).predict(X7)
assert_array_equal(y_pred1, y6)
# Test probas estimates
y_proba_pred1 = clf.predict_proba(X7)
assert_array_equal((y_proba_pred1[:, 1] > 0.5) + 1, y6)
y_log_proba_pred1 = clf.predict_log_proba(X7)
assert_array_almost_equal(np.exp(y_log_proba_pred1), y_proba_pred1, 8)
y_pred3 = clf.fit(X6, y7).predict(X6)
# QDA shouldn't be able to separate those
assert_true(np.any(y_pred3 != y7))
# Classes should have at least 2 elements
assert_raises(ValueError, clf.fit, X6, y4)
def test_qda_priors():
clf = QuadraticDiscriminantAnalysis()
y_pred = clf.fit(X6, y6).predict(X6)
n_pos = np.sum(y_pred == 2)
neg = 1e-10
clf = QuadraticDiscriminantAnalysis(priors=np.array([neg, 1 - neg]))
y_pred = clf.fit(X6, y6).predict(X6)
n_pos2 = np.sum(y_pred == 2)
assert_greater(n_pos2, n_pos)
def test_qda_store_covariances():
# The default is to not set the covariances_ attribute
clf = QuadraticDiscriminantAnalysis().fit(X6, y6)
assert_true(not hasattr(clf, 'covariances_'))
# Test the actual attribute:
clf = QuadraticDiscriminantAnalysis(store_covariances=True).fit(X6, y6)
assert_true(hasattr(clf, 'covariances_'))
assert_array_almost_equal(
clf.covariances_[0],
np.array([[0.7, 0.45], [0.45, 0.7]])
)
assert_array_almost_equal(
clf.covariances_[1],
np.array([[0.33333333, -0.33333333], [-0.33333333, 0.66666667]])
)
def test_qda_regularization():
# the default is reg_param=0. and will cause issues
# when there is a constant variable
clf = QuadraticDiscriminantAnalysis()
with ignore_warnings():
y_pred = clf.fit(X2, y6).predict(X2)
assert_true(np.any(y_pred != y6))
# adding a little regularization fixes the problem
clf = QuadraticDiscriminantAnalysis(reg_param=0.01)
with ignore_warnings():
clf.fit(X2, y6)
y_pred = clf.predict(X2)
assert_array_equal(y_pred, y6)
# Case n_samples_in_a_class < n_features
clf = QuadraticDiscriminantAnalysis(reg_param=0.1)
with ignore_warnings():
clf.fit(X5, y5)
y_pred5 = clf.predict(X5)
assert_array_equal(y_pred5, y5)
def test_deprecated_lda_qda_deprecation():
if reload is None:
raise SkipTest("Can't reload module on Python3.3")
def import_lda_module():
import sklearn.lda
# ensure that we trigger DeprecationWarning even if the sklearn.lda
# was loaded previously by another test.
reload(sklearn.lda)
return sklearn.lda
lda = assert_warns(DeprecationWarning, import_lda_module)
assert lda.LDA is LinearDiscriminantAnalysis
def import_qda_module():
import sklearn.qda
# ensure that we trigger DeprecationWarning even if the sklearn.qda
# was loaded previously by another test.
reload(sklearn.qda)
return sklearn.qda
qda = assert_warns(DeprecationWarning, import_qda_module)
assert qda.QDA is QuadraticDiscriminantAnalysis
def test_covariance():
x, y = make_blobs(n_samples=100, n_features=5,
centers=1, random_state=42)
# make features correlated
x = np.dot(x, np.arange(x.shape[1] ** 2).reshape(x.shape[1], x.shape[1]))
c_e = _cov(x, 'empirical')
assert_almost_equal(c_e, c_e.T)
c_s = _cov(x, 'auto')
assert_almost_equal(c_s, c_s.T)
| bsd-3-clause |
JackKelly/neuralnilm_prototype | neuralnilm/metrics.py | 2 | 3184 |
from __future__ import print_function, division
import numpy as np
import sklearn.metrics as metrics
METRICS = {
'classification': [
'accuracy_score',
'f1_score',
'precision_score',
'recall_score'
],
'regression': [
'mean_absolute_error'
]
}
def run_metrics(y_true, y_pred, mains, on_power_threshold=4):
"""
Parameters
----------
on_power_threshold : int
"""
# Truncate
n = min(len(y_true), len(y_pred))
y_true = y_true[:n]
y_pred = y_pred[:n]
y_true[y_true <= on_power_threshold] = 0
y_true_class = y_true > on_power_threshold
y_pred_class = y_pred > on_power_threshold
ARGS = {
'classification': '(y_true_class, y_pred_class)',
'regression': '(y_true, y_pred)'
}
scores = {}
for metric_type, metric_list in METRICS.items():
args = ARGS[metric_type]
for metric in metric_list:
score = eval('metrics.' + metric + args)
scores[metric] = float(score)
sum_y_true = np.sum(y_true)
sum_y_pred = np.sum(y_pred)
# negative means underestimates
relative_error_in_total_energy = float(
(sum_y_pred - sum_y_true) / max(sum_y_true, sum_y_pred))
# For total energy correctly assigned
denominator = 2 * np.sum(mains)
abs_diff = np.fabs(y_pred - y_true)
sum_abs_diff = np.sum(abs_diff)
total_energy_correctly_assigned = 1 - (sum_abs_diff / denominator)
total_energy_correctly_assigned = float(total_energy_correctly_assigned)
scores.update({
'relative_error_in_total_energy': relative_error_in_total_energy,
'total_energy_correctly_assigned': total_energy_correctly_assigned,
'sum_abs_diff': float(sum_abs_diff)
})
return scores
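The two energy scores computed above reduce to short closed-form expressions. A minimal pure-Python sketch of both, with made-up toy readings (the helper names here are illustrative, not part of the module):

```python
def relative_error_in_total_energy(y_true, y_pred):
    # mirrors run_metrics above: negative means the model underestimates
    s_true, s_pred = float(sum(y_true)), float(sum(y_pred))
    return (s_pred - s_true) / max(s_true, s_pred)

def total_energy_correctly_assigned(y_true, y_pred, mains):
    # Eq (1), p5 of Kolter & Johnson 2011, as used in run_metrics
    sum_abs_diff = sum(abs(p - t) for p, t in zip(y_pred, y_true))
    return 1 - sum_abs_diff / (2.0 * sum(mains))

print(relative_error_in_total_energy([10, 10], [5, 5]))            # -0.5
print(total_energy_correctly_assigned([10, 10], [5, 5], [20, 20]))  # 0.875
```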
def across_all_appliances(scores, mains, aggregate_predictions):
total_sum_abs_diff = 0.0
for appliance_scores in scores.values():
total_sum_abs_diff += appliance_scores['sum_abs_diff']
# Total energy correctly assigned
# See Eq(1) on p5 of Kolter & Johnson 2011
denominator = 2 * np.sum(mains)
total_energy_correctly_assigned = 1 - (total_sum_abs_diff / denominator)
total_energy_correctly_assigned = float(total_energy_correctly_assigned)
# explained variance
n = min(len(mains), len(aggregate_predictions))
mains = mains[:n]
aggregate_predictions = aggregate_predictions[:n]
scores['across all appliances'] = {
'total_energy_correctly_assigned': total_energy_correctly_assigned,
'explained_variance_score': float(
metrics.explained_variance_score(mains, aggregate_predictions)),
'mean_absolute_error': float(
np.mean(
[scores[app]['mean_absolute_error']
for app in scores])),
'relative_error_in_total_energy': float(
np.mean(
[scores[app]['relative_error_in_total_energy']
for app in scores])),
}
scores['across all appliances'].update({
metric: float(np.mean([scores[app][metric] for app in scores]))
for metric in METRICS['classification']
})
return scores
| mit |
botswana-harvard/edc-rdb | bcpp_rdb/mixins/dataframe_mixin.py | 1 | 1841 |
import pytz
import pandas as pd
from django.conf import settings
from sqlalchemy.engine import create_engine
from ..private_settings import Rdb, Edc
tz = pytz.timezone(settings.TIME_ZONE)
class DataframeMixin:
conn_settings = Rdb, Edc
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
string = None
self.engine = {}
self.db_connections = {}
for settings in self.conn_settings:
if settings.engine == 'pg':
string = 'postgresql://{user}:{password}@{host}:{port}/{dbname}'
elif settings.engine == 'mysql':
string = 'mysql+mysqldb://{user}:{password}@{host}:{port}/{dbname}'
string = string.format(
user=settings.user,
password=settings.password,
host=settings.host,
dbname=settings.dbname,
port=settings.port)
self.engine[settings.connection_name] = self.open_engine(string)
self.db_connections[settings.connection_name] = dict(
name=settings.connection_name,
host=settings.host,
dbname=settings.dbname,
port=settings.port)
def open_engine(self, string):
connect_args = {}
try:
if settings.timeout:
connect_args = {'connect_timeout': settings.timeout}
else:
connect_args = {}
except AttributeError:
pass
return create_engine(string, connect_args=connect_args)
def get_dataframe(self, sql=None, connection_name=None):
"""Return a dataframe for the sql query."""
with self.engine[connection_name].connect() as conn, conn.begin():
dataframe = pd.read_sql_query(sql, conn)
return dataframe
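The `__init__` above builds an SQLAlchemy database URL by filling a `str.format` template per connection. A quick standalone check of that templating step — the credentials below are made up for illustration:

```python
# Same template shape as the 'pg' branch in DataframeMixin.__init__;
# user/password/host/dbname here are hypothetical.
template = 'postgresql://{user}:{password}@{host}:{port}/{dbname}'
url = template.format(user='edc', password='secret',
                      host='localhost', port=5432, dbname='bcpp')
print(url)  # postgresql://edc:secret@localhost:5432/bcpp
```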
| gpl-2.0 |
bstadie/cgt | examples/demo_variational_autoencoder.py | 18 | 10799 |
import cgt
from cgt import core
from cgt import nn
import numpy as np
import cPickle as pickle
from scipy.stats import norm
import matplotlib.pyplot as plt
from example_utils import fetch_dataset
'''
MNIST manifold demo (with 2-dimensional latent z) using variational autoencoder
'''
rng = np.random.RandomState(1234)
def kld_unit_mvn(mu, var):
# KL divergence from N(0, I)
return (mu.shape[1] + cgt.sum(cgt.log(var), axis=1) - cgt.sum(cgt.square(mu), axis=1) - cgt.sum(var, axis=1)) / 2.0
def log_diag_mvn(mu, var):
# log probability of x under N(mu, diag(var))
def f(x):
# expects batches
k = mu.shape[1]
logp = (-k / 2.0) * np.log(2 * np.pi) - 0.5 * cgt.sum(cgt.log(var), axis=1) - cgt.sum(0.5 * (1.0 / var) * (x - mu) * (x - mu), axis=1)
return logp
return f
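Both symbolic expressions above have simple scalar analogues that are easy to sanity-check. A pure-Python sketch (the helper names are mine, not part of the demo): at `mu=0`, `var=1` the diagonal-Gaussian log-density at the origin is `-(k/2)·log(2π)` and the KL term vanishes.

```python
import math

def log_diag_mvn_scalar(x, mu, var):
    # term-by-term mirror of the cgt expression in log_diag_mvn
    k = len(mu)
    return (-k / 2.0) * math.log(2 * math.pi) \
        - 0.5 * sum(math.log(v) for v in var) \
        - sum(0.5 * (1.0 / v) * (xi - mi) ** 2
              for xi, mi, v in zip(x, mu, var))

def neg_kld_unit_mvn_scalar(mu, var):
    # mirror of kld_unit_mvn: note it returns -KL(N(mu,diag(var)) || N(0,I)),
    # which is why the VAE cost below subtracts its sum
    k = len(mu)
    return (k + sum(math.log(v) for v in var)
            - sum(m * m for m in mu) - sum(var)) / 2.0

# standard normal in 2-D: log p(0) = -log(2*pi), KL from N(0, I) = 0
print(log_diag_mvn_scalar([0.0, 0.0], [0.0, 0.0], [1.0, 1.0]) + math.log(2 * math.pi))
print(neg_kld_unit_mvn_scalar([0.0, 0.0], [1.0, 1.0]))
```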
class HiddenLayer(object):
# adapted from http://deeplearning.net/tutorial/mlp.html
def __init__(self, input, n_in, n_out, W=None, b=None,
activation=cgt.tanh, prefix=""):
self.n_in = n_in
self.n_out = n_out
if W is None:
# XXX replace with nn init
W_values = np.asarray(
rng.uniform(
low=-np.sqrt(6. / (n_in + n_out)),
high=np.sqrt(6. / (n_in + n_out)),
size=(n_in, n_out)
),
dtype=cgt.floatX
)
if activation == cgt.sigmoid:
W_values *= 4
W = cgt.shared(W_values, name=prefix+"_W")
if b is None:
b_values = np.zeros((n_out,), dtype=cgt.floatX)
b = cgt.shared(b_values, name=prefix+"_b")
self.W = W
self.b = b
# XXX broadcast api may change
lin_output = cgt.broadcast("+", cgt.dot(input, self.W),
cgt.dimshuffle(self.b, ["x", 0]), "xx,1x")
self.output = (
lin_output if activation is None
else activation(lin_output)
)
# parameters of the model
self.params = [self.W, self.b]
class _MLP(object):
# building block for MLP instantiations defined below
def __init__(self, x, n_in, n_hid, nlayers=1, prefix=""):
self.nlayers = nlayers
self.hidden_layers = list()
inp = x
for k in xrange(self.nlayers):
hlayer = HiddenLayer(
input=inp,
n_in=n_in,
n_out=n_hid,
activation=cgt.tanh,
prefix=prefix + ("_%d" % (k + 1))
)
n_in = n_hid
inp = hlayer.output
self.hidden_layers.append(hlayer)
self.params = [param for l in self.hidden_layers for param in l.params]
self.input = x
# NOTE output layer computed by instantations
class GaussianMLP(_MLP):
def __init__(self, x, n_in, n_hid, n_out, nlayers=1, y=None, eps=None):
super(GaussianMLP, self).__init__(x, n_in, n_hid, nlayers=nlayers, prefix="GaussianMLP_hidden")
self.mu_layer = HiddenLayer(
input=self.hidden_layers[-1].output,
n_in=self.hidden_layers[-1].n_out,
n_out=n_out,
activation=None,
prefix="GaussianMLP_mu"
)
# log(sigma^2)
self.logvar_layer = HiddenLayer(
input=self.hidden_layers[-1].output,
n_in=self.hidden_layers[-1].n_out,
n_out=n_out,
activation=None,
prefix="GaussianMLP_logvar"
)
self.mu = self.mu_layer.output
self.var = cgt.exp(self.logvar_layer.output)
self.sigma = cgt.sqrt(self.var)
self.params = self.params + self.mu_layer.params +\
self.logvar_layer.params
# for use as encoder
if eps is not None:
assert(y is None)
self.out = self.mu + self.sigma * eps
# for use as decoder
if y:
assert(eps is None)
self.out = cgt.sigmoid(self.mu)
self.cost = -cgt.sum(log_diag_mvn(self.out, self.var)(y))
class BernoulliMLP(_MLP):
def __init__(self, x, n_in, n_hid, n_out, nlayers=1, y=None):
super(BernoulliMLP, self).__init__(x, n_in, n_hid, nlayers=nlayers, prefix="BernoulliMLP_hidden")
self.out_layer = HiddenLayer(
input=self.hidden_layers[-1].output,
n_in=self.hidden_layers[-1].n_out,
n_out=n_out,
activation=cgt.sigmoid,
prefix="BernoulliMLP_y_hat"
)
self.params = self.params + self.out_layer.params
if y is not None:
self.out = self.out_layer.output
self.cost = cgt.sum(nn.binary_crossentropy(self.out, y))
class VAE(object):
def __init__(self, xdim, args, dec="bernoulli"):
self.xdim = xdim
self.hdim = args.hdim
self.zdim = args.zdim
self.lmbda = args.lmbda # weight decay coefficient * 2
self.x = cgt.matrix("x", dtype=cgt.floatX)
self.eps = cgt.matrix("eps", dtype=cgt.floatX)
self.enc_mlp = GaussianMLP(self.x, self.xdim, self.hdim, self.zdim, nlayers=args.nlayers, eps=self.eps)
if dec == "bernoulli":
# log p(x | z) defined as -CE(x, y) = dec_mlp.cost(y)
self.dec_mlp = BernoulliMLP(self.enc_mlp.out, self.zdim, self.hdim, self.xdim, nlayers=args.nlayers, y=self.x)
elif dec == "gaussian":
self.dec_mlp = GaussianMLP(self.enc_mlp.out, self.zdim, self.hdim, self.xdim, nlayers=args.nlayers, y=self.x)
else:
raise RuntimeError("unrecognized decoder %s" % dec)
self.cost = (-cgt.sum(kld_unit_mvn(self.enc_mlp.mu, self.enc_mlp.var)) + self.dec_mlp.cost) / args.batch_size
self.params = self.enc_mlp.params + self.dec_mlp.params
# L2 regularization
self.gparams = [cgt.grad(self.cost, [p])[0] + self.lmbda * p for p in self.params]
self.gaccums = [cgt.shared(np.zeros(p.op.get_value().shape, dtype=cgt.floatX)) for p in self.params]
# XXX replace w/ adagrad update from nn
ADAGRAD_EPS = 1e-10 # for stability
self.updates = [
(param, param - args.lr * gparam / cgt.sqrt(gaccum + cgt.square(gparam) + ADAGRAD_EPS))
for param, gparam, gaccum in zip(self.params, self.gparams, self.gaccums)
]
self.updates += [
(gaccum, gaccum + cgt.square(gparam))
for gaccum, gparam in zip(self.gaccums, self.gparams)
]
self.train = cgt.function(
[self.x, self.eps],
self.cost,
updates=self.updates
)
self.test = cgt.function(
[self.x, self.eps],
self.cost,
updates=None
)
# can be used for semi-supervised learning for example
self.encode = cgt.function(
[self.x, self.eps],
self.enc_mlp.out
)
def main():
import argparse
parser = argparse.ArgumentParser()
parser.add_argument("--batch_size", default=100)
parser.add_argument("--nlayers", default=1, type=int, help="number of hidden layers in MLP before output layers")
parser.add_argument("--hdim", default=500, type=int, help="dimension of hidden layer")
parser.add_argument("--zdim", default=2, type=int, help="dimension of continuous latent variable")
parser.add_argument("--lmbda", default=0.001, type=float, help="weight decay coefficient")
parser.add_argument("--lr", default=0.01, type=float, help="learning rate")
parser.add_argument("--epochs", default=1000, type=int, help="number of passes over dataset")
parser.add_argument("--print_every", default=100, type=int, help="how often to print cost")
parser.add_argument("--outfile", default="vae_model.pk", help="output file to save model to")
args = parser.parse_args()
print(args)
if args.epochs > 100:
print("NOTE: training might take a while. You may want to first sanity check by setting --epochs to something like 20 (manifold will be fuzzy).")
# set up dataset
mnist = fetch_dataset("http://rll.berkeley.edu/cgt-data/mnist.npz")
X = (mnist["X"]/255.).astype(cgt.floatX)
y = mnist["y"]
np.random.seed(0)
sortinds = np.random.permutation(70000)
X = X[sortinds]
y = y[sortinds]
train_x = X[0:50000]
train_y = y[0:50000]
valid_x = X[50000:60000]
valid_y = y[50000:60000]
# run SGVB algorithm
model = VAE(train_x.shape[1], args, dec="bernoulli")
expcost = None
num_train_batches = train_x.shape[0] / args.batch_size
num_valid_batches = valid_x.shape[0] / args.batch_size
valid_freq = num_train_batches
for b in xrange(args.epochs * num_train_batches):
k = b % num_train_batches
x = train_x[k * args.batch_size:(k + 1) * args.batch_size, :]
eps = np.random.randn(x.shape[0], args.zdim).astype(cgt.floatX)
cost = model.train(x, eps)
if not expcost:
expcost = cost
else:
expcost = 0.01 * cost + 0.99 * expcost
if (b + 1) % args.print_every == 0:
print("iter %d, cost %f, expcost %f" % (b + 1, cost, expcost))
if (b + 1) % valid_freq == 0:
valid_cost = 0
for l in xrange(num_valid_batches):
x_val = valid_x[l * args.batch_size:(l + 1) * args.batch_size, :]
eps_val = np.zeros((x_val.shape[0], args.zdim), dtype=cgt.floatX)
valid_cost = valid_cost + model.test(x_val, eps_val)
valid_cost = valid_cost / num_valid_batches
print("valid cost: %f" % valid_cost)
# XXX fix pickling of cgt models
#print("saving final model")
#with open(args.outfile, "wb") as f:
#pickle.dump(model, f, protocol=pickle.HIGHEST_PROTOCOL)
# XXX use this to sample, should later be able to compile f(z) = y directly (See Issue #18)
newz = cgt.matrix("newz", dtype=cgt.floatX)
newy = cgt.core.clone(model.dec_mlp.out, {model.enc_mlp.out:newz})
decode = cgt.function(
[newz],
newy
)
S = (28, 28)
M = 20
manifold = np.zeros((S[0]*M, S[1]*M), dtype=cgt.floatX)
for z1 in xrange(M):
for z2 in xrange(M):
print(z1, z2)
z = np.zeros((1, 2))
# pass unit square through inverse Gaussian CDF
z[0, 0] = norm.ppf(z1 * 1.0/M + 1.0/(M * 2))
z[0, 1] = norm.ppf(z2 * 1.0/M + 1.0/(M * 2))
z = np.array(z, dtype=cgt.floatX)
x_hat = decode(z)
x_hat = x_hat.reshape(S)
manifold[z1 * S[0]:(z1 + 1) * S[0],
z2 * S[1]:(z2 + 1) * S[1]] = x_hat
plt.imshow(manifold, cmap="Greys_r")
plt.axis("off")
plt.show()
if __name__ == "__main__":
main()
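The manifold loop above visits the unit square at bin midpoints and pushes each coordinate through the inverse Gaussian CDF so the sampled latents follow the standard-normal prior. A minimal sketch of that mapping (same `norm.ppf` call; `M = 4` is an arbitrary grid size for illustration):

```python
import numpy as np
from scipy.stats import norm

M = 4
# Midpoints of M equal-width bins on (0, 1), as in the manifold loop above.
u = np.array([z1 * 1.0 / M + 1.0 / (M * 2) for z1 in range(M)])
# The inverse Gaussian CDF (probit) maps these uniform quantiles to
# standard-normal latent values, so the grid follows the prior.
z = norm.ppf(u)
print(z)
```

The resulting grid is increasing and symmetric about zero, which is why the decoded manifold tiles the latent space evenly under the prior.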
| mit |
murali-munna/scikit-learn | sklearn/neighbors/tests/test_ball_tree.py | 129 | 10192 |
import pickle
import numpy as np
from numpy.testing import assert_array_almost_equal
from sklearn.neighbors.ball_tree import (BallTree, NeighborsHeap,
simultaneous_sort, kernel_norm,
nodeheap_sort, DTYPE, ITYPE)
from sklearn.neighbors.dist_metrics import DistanceMetric
from sklearn.utils.testing import SkipTest, assert_allclose
rng = np.random.RandomState(10)
V = rng.rand(3, 3)
V = np.dot(V, V.T)
DIMENSION = 3
METRICS = {'euclidean': {},
'manhattan': {},
'minkowski': dict(p=3),
'chebyshev': {},
'seuclidean': dict(V=np.random.random(DIMENSION)),
'wminkowski': dict(p=3, w=np.random.random(DIMENSION)),
'mahalanobis': dict(V=V)}
DISCRETE_METRICS = ['hamming',
'canberra',
'braycurtis']
BOOLEAN_METRICS = ['matching', 'jaccard', 'dice', 'kulsinski',
'rogerstanimoto', 'russellrao', 'sokalmichener',
'sokalsneath']
def dist_func(x1, x2, p):
return np.sum((x1 - x2) ** p) ** (1. / p)
def brute_force_neighbors(X, Y, k, metric, **kwargs):
D = DistanceMetric.get_metric(metric, **kwargs).pairwise(Y, X)
ind = np.argsort(D, axis=1)[:, :k]
dist = D[np.arange(Y.shape[0])[:, None], ind]
return dist, ind
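`brute_force_neighbors` sorts a full pairwise distance matrix and gathers the k smallest entries per query row. The same argsort-and-gather trick, sketched with plain Euclidean distances rather than sklearn's `DistanceMetric`:

```python
import numpy as np

rng = np.random.RandomState(0)
X = rng.rand(6, 2)   # reference points
Y = rng.rand(3, 2)   # query points
k = 2

# Full pairwise Euclidean distance matrix, one query per row.
D = np.sqrt(((Y[:, None, :] - X[None, :, :]) ** 2).sum(-1))
ind = np.argsort(D, axis=1)[:, :k]                # k nearest indices per query
dist = D[np.arange(Y.shape[0])[:, None], ind]     # gather the matching distances
```

The row-index column vector broadcast against `ind` is what keeps distances paired with their indices, exactly as in the function above.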
def test_ball_tree_query():
np.random.seed(0)
X = np.random.random((40, DIMENSION))
Y = np.random.random((10, DIMENSION))
def check_neighbors(dualtree, breadth_first, k, metric, kwargs):
bt = BallTree(X, leaf_size=1, metric=metric, **kwargs)
dist1, ind1 = bt.query(Y, k, dualtree=dualtree,
breadth_first=breadth_first)
dist2, ind2 = brute_force_neighbors(X, Y, k, metric, **kwargs)
# don't check indices here: if there are any duplicate distances,
# the indices may not match. Distances should not have this problem.
assert_array_almost_equal(dist1, dist2)
for (metric, kwargs) in METRICS.items():
for k in (1, 3, 5):
for dualtree in (True, False):
for breadth_first in (True, False):
yield (check_neighbors,
dualtree, breadth_first,
k, metric, kwargs)
def test_ball_tree_query_boolean_metrics():
np.random.seed(0)
X = np.random.random((40, 10)).round(0)
Y = np.random.random((10, 10)).round(0)
k = 5
def check_neighbors(metric):
bt = BallTree(X, leaf_size=1, metric=metric)
dist1, ind1 = bt.query(Y, k)
dist2, ind2 = brute_force_neighbors(X, Y, k, metric)
assert_array_almost_equal(dist1, dist2)
for metric in BOOLEAN_METRICS:
yield check_neighbors, metric
def test_ball_tree_query_discrete_metrics():
np.random.seed(0)
X = (4 * np.random.random((40, 10))).round(0)
Y = (4 * np.random.random((10, 10))).round(0)
k = 5
def check_neighbors(metric):
bt = BallTree(X, leaf_size=1, metric=metric)
dist1, ind1 = bt.query(Y, k)
dist2, ind2 = brute_force_neighbors(X, Y, k, metric)
assert_array_almost_equal(dist1, dist2)
for metric in DISCRETE_METRICS:
yield check_neighbors, metric
def test_ball_tree_query_radius(n_samples=100, n_features=10):
np.random.seed(0)
X = 2 * np.random.random(size=(n_samples, n_features)) - 1
query_pt = np.zeros(n_features, dtype=float)
eps = 1E-15 # roundoff error can cause test to fail
bt = BallTree(X, leaf_size=5)
rad = np.sqrt(((X - query_pt) ** 2).sum(1))
for r in np.linspace(rad[0], rad[-1], 100):
ind = bt.query_radius(query_pt, r + eps)[0]
i = np.where(rad <= r + eps)[0]
ind.sort()
i.sort()
assert_array_almost_equal(i, ind)
def test_ball_tree_query_radius_distance(n_samples=100, n_features=10):
np.random.seed(0)
X = 2 * np.random.random(size=(n_samples, n_features)) - 1
query_pt = np.zeros(n_features, dtype=float)
eps = 1E-15 # roundoff error can cause test to fail
bt = BallTree(X, leaf_size=5)
rad = np.sqrt(((X - query_pt) ** 2).sum(1))
for r in np.linspace(rad[0], rad[-1], 100):
ind, dist = bt.query_radius(query_pt, r + eps, return_distance=True)
ind = ind[0]
dist = dist[0]
d = np.sqrt(((query_pt - X[ind]) ** 2).sum(1))
assert_array_almost_equal(d, dist)
def compute_kernel_slow(Y, X, kernel, h):
d = np.sqrt(((Y[:, None, :] - X) ** 2).sum(-1))
norm = kernel_norm(h, X.shape[1], kernel)
if kernel == 'gaussian':
return norm * np.exp(-0.5 * (d * d) / (h * h)).sum(-1)
elif kernel == 'tophat':
return norm * (d < h).sum(-1)
elif kernel == 'epanechnikov':
return norm * ((1.0 - (d * d) / (h * h)) * (d < h)).sum(-1)
elif kernel == 'exponential':
return norm * (np.exp(-d / h)).sum(-1)
elif kernel == 'linear':
return norm * ((1 - d / h) * (d < h)).sum(-1)
elif kernel == 'cosine':
return norm * (np.cos(0.5 * np.pi * d / h) * (d < h)).sum(-1)
else:
raise ValueError('kernel not recognized')
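`compute_kernel_slow` reduces every kernel to a function of the pairwise distance matrix. As a sanity check on the vectorization, the `'gaussian'` branch (normalization constant omitted, since `kernel_norm` is sklearn-internal) matches an explicit double loop:

```python
import numpy as np

rng = np.random.RandomState(0)
X = rng.rand(5, 2)
Y = rng.rand(3, 2)
h = 0.5

# Pairwise distances, as in compute_kernel_slow.
d = np.sqrt(((Y[:, None, :] - X) ** 2).sum(-1))

# Vectorized unnormalized Gaussian kernel sum (the 'gaussian' branch, sans norm).
dens_vec = np.exp(-0.5 * (d * d) / (h * h)).sum(-1)

# Same quantity with an explicit double loop.
dens_loop = np.zeros(len(Y))
for i in range(len(Y)):
    for j in range(len(X)):
        dij = np.linalg.norm(Y[i] - X[j])
        dens_loop[i] += np.exp(-0.5 * dij * dij / (h * h))
```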
def test_ball_tree_kde(n_samples=100, n_features=3):
np.random.seed(0)
X = np.random.random((n_samples, n_features))
Y = np.random.random((n_samples, n_features))
bt = BallTree(X, leaf_size=10)
for kernel in ['gaussian', 'tophat', 'epanechnikov',
'exponential', 'linear', 'cosine']:
for h in [0.01, 0.1, 1]:
dens_true = compute_kernel_slow(Y, X, kernel, h)
def check_results(kernel, h, atol, rtol, breadth_first):
dens = bt.kernel_density(Y, h, atol=atol, rtol=rtol,
kernel=kernel,
breadth_first=breadth_first)
assert_allclose(dens, dens_true,
atol=atol, rtol=max(rtol, 1e-7))
for rtol in [0, 1E-5]:
for atol in [1E-6, 1E-2]:
for breadth_first in (True, False):
yield (check_results, kernel, h, atol, rtol,
breadth_first)
def test_gaussian_kde(n_samples=1000):
# Compare gaussian KDE results to scipy.stats.gaussian_kde
from scipy.stats import gaussian_kde
np.random.seed(0)
x_in = np.random.normal(0, 1, n_samples)
x_out = np.linspace(-5, 5, 30)
for h in [0.01, 0.1, 1]:
bt = BallTree(x_in[:, None])
try:
gkde = gaussian_kde(x_in, bw_method=h / np.std(x_in))
except TypeError:
raise SkipTest("Old version of scipy, doesn't accept "
"explicit bandwidth.")
dens_bt = bt.kernel_density(x_out[:, None], h) / n_samples
dens_gkde = gkde.evaluate(x_out)
assert_array_almost_equal(dens_bt, dens_gkde, decimal=3)
def test_ball_tree_two_point(n_samples=100, n_features=3):
np.random.seed(0)
X = np.random.random((n_samples, n_features))
Y = np.random.random((n_samples, n_features))
r = np.linspace(0, 1, 10)
bt = BallTree(X, leaf_size=10)
D = DistanceMetric.get_metric("euclidean").pairwise(Y, X)
counts_true = [(D <= ri).sum() for ri in r]
def check_two_point(r, dualtree):
counts = bt.two_point_correlation(Y, r=r, dualtree=dualtree)
assert_array_almost_equal(counts, counts_true)
for dualtree in (True, False):
yield check_two_point, r, dualtree
def test_ball_tree_pickle():
np.random.seed(0)
X = np.random.random((10, 3))
bt1 = BallTree(X, leaf_size=1)
# Test if BallTree with callable metric is picklable
bt1_pyfunc = BallTree(X, metric=dist_func, leaf_size=1, p=2)
ind1, dist1 = bt1.query(X)
ind1_pyfunc, dist1_pyfunc = bt1_pyfunc.query(X)
def check_pickle_protocol(protocol):
s = pickle.dumps(bt1, protocol=protocol)
bt2 = pickle.loads(s)
s_pyfunc = pickle.dumps(bt1_pyfunc, protocol=protocol)
bt2_pyfunc = pickle.loads(s_pyfunc)
ind2, dist2 = bt2.query(X)
ind2_pyfunc, dist2_pyfunc = bt2_pyfunc.query(X)
assert_array_almost_equal(ind1, ind2)
assert_array_almost_equal(dist1, dist2)
assert_array_almost_equal(ind1_pyfunc, ind2_pyfunc)
assert_array_almost_equal(dist1_pyfunc, dist2_pyfunc)
for protocol in (0, 1, 2):
yield check_pickle_protocol, protocol
def test_neighbors_heap(n_pts=5, n_nbrs=10):
heap = NeighborsHeap(n_pts, n_nbrs)
for row in range(n_pts):
d_in = np.random.random(2 * n_nbrs).astype(DTYPE)
i_in = np.arange(2 * n_nbrs, dtype=ITYPE)
for d, i in zip(d_in, i_in):
heap.push(row, d, i)
ind = np.argsort(d_in)
d_in = d_in[ind]
i_in = i_in[ind]
d_heap, i_heap = heap.get_arrays(sort=True)
assert_array_almost_equal(d_in[:n_nbrs], d_heap[row])
assert_array_almost_equal(i_in[:n_nbrs], i_heap[row])
def test_node_heap(n_nodes=50):
vals = np.random.random(n_nodes).astype(DTYPE)
i1 = np.argsort(vals)
vals2, i2 = nodeheap_sort(vals)
assert_array_almost_equal(i1, i2)
assert_array_almost_equal(vals[i1], vals2)
def test_simultaneous_sort(n_rows=10, n_pts=201):
dist = np.random.random((n_rows, n_pts)).astype(DTYPE)
ind = (np.arange(n_pts) + np.zeros((n_rows, 1))).astype(ITYPE)
dist2 = dist.copy()
ind2 = ind.copy()
# simultaneous sort rows using function
simultaneous_sort(dist, ind)
# simultaneous sort rows using numpy
i = np.argsort(dist2, axis=1)
row_ind = np.arange(n_rows)[:, None]
dist2 = dist2[row_ind, i]
ind2 = ind2[row_ind, i]
assert_array_almost_equal(dist, dist2)
assert_array_almost_equal(ind, ind2)
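The numpy reference in `test_simultaneous_sort` reorders distances and indices together with one `argsort` plus fancy row indexing; in isolation:

```python
import numpy as np

rng = np.random.RandomState(0)
dist = rng.rand(3, 5)
ind = np.tile(np.arange(5), (3, 1))

# Row-wise simultaneous sort via fancy indexing -- the numpy reference
# the test above checks the Cython simultaneous_sort against.
order = np.argsort(dist, axis=1)
rows = np.arange(3)[:, None]       # column vector of row indices for broadcasting
dist_sorted = dist[rows, order]
ind_sorted = ind[rows, order]
```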
def test_query_haversine():
np.random.seed(0)
X = 2 * np.pi * np.random.random((40, 2))
bt = BallTree(X, leaf_size=1, metric='haversine')
dist1, ind1 = bt.query(X, k=5)
dist2, ind2 = brute_force_neighbors(X, X, k=5, metric='haversine')
assert_array_almost_equal(dist1, dist2)
assert_array_almost_equal(ind1, ind2)
| bsd-3-clause |
rgerkin/upsit | scratch.py | 1 | 33324 |
import inspect
import builtins
import re
import time
import nbformat
from IPython.display import Image,display,HTML
import numpy as np
from scipy.special import beta as betaf
from scipy.stats import norm,beta
from scipy.optimize import minimize
import seaborn as sns
import pandas as pd
from sklearn.naive_bayes import MultinomialNB, BernoulliNB
from sklearn.model_selection import cross_val_score,LeaveOneOut,\
ShuffleSplit,GroupShuffleSplit
from sklearn.metrics import roc_curve, auc, roc_auc_score
from sklearn.preprocessing import scale
from sklearn.ensemble import RandomForestRegressor,RandomForestClassifier
from sklearn.multiclass import OneVsRestClassifier,OneVsOneClassifier,\
OutputCodeClassifier
from sklearn.preprocessing import MultiLabelBinarizer,Imputer
#from fancyimpute import BiScaler, KNN, NuclearNormMinimization, SoftImpute
import bs4
import bbdp
from upsit import plt
SAVE = False
class Prindent:
def __init__(self):
self.base_depth = len(self.get_stack())
x = self.get_stack()
#for xi in x:
# builtins.print(xi.filename)
def get_stack(self):
stack = [x for x in inspect.stack() if not \
any([y in x.filename for y in \
['zmq','IPy','ipy','tornado','runpy',
'imp.py','importlib','traitlets']])]
return stack
def print(self,*args,add=0,**kwargs):
stack = self.get_stack()
#builtins.print(len(stack),self.base_depth)
depth = len(stack)-self.base_depth+add
args = ["\t"*depth + "%s"%arg for arg in args]
#builtins.print(len(inspect.stack()),self.base_depth)
builtins.print(*args,**kwargs)
print = Prindent().print
def save_fig():
global SAVE
if SAVE:
plt.savefig('%s.png' % time.time(), format='png', dpi=600)
def get_response_matrix(kind, options=['responses'], exclude_subjects={}):
responses = {}
subjects,tests = bbdp.load(kind)
exclude_subject_ids = []
for key,value in exclude_subjects.items():
exclude_subject_ids += [subject_id for subject_id,subject in subjects.items()\
if subject.__dict__[key]==value]
subjects = {subject_id:subject for subject_id,subject in subjects.items()\
if subject_id not in exclude_subject_ids}
tests = [test for test in tests if test.subject.case_id not in exclude_subject_ids]
question_nums = range(1,41)
possible_responses = [0,1,2,3]
for test in tests:
x = []
for q in question_nums:
if 'responses' in options:
response = []
actual_response = test.response_set.responses[q].choice_num
for possible_response in possible_responses:
if actual_response == possible_response:
x += [1]
else:
x += [0]
if 'responded' in options:
x += [test.response_set.responses[q].choice_num is not None]
if 'correct' in options:
x += [int(test.response_set.responses[q].correct)]
if 'total_correct' in options:
x += [sum([int(test.response_set.responses[q].correct) for q in question_nums])]
if 'fraction_correct' in options:
x += [sum([int(test.response_set.responses[q].correct) for q in question_nums])/40.0]
if 'total_responded' in options:
x += [sum([int(test.response_set.responses[q].choice_num is not None) for q in question_nums])]
if 'gender' in options:
x += [test.subject.gender]
if 'demented' in options:
try:
x += [test.subject.demented]
except:
x += [test.subject.dementia]
if 'expired_age' in options:
x += [test.subject.expired_age]
responses[test.subject] = x
ctrl = []
for key in sorted(responses.keys(),key=lambda x:x.case_id):
ctrl += [subjects[key.case_id].label == 'ctrl']
responses = np.array(list(responses[key] for key in \
sorted(responses.keys(),key=lambda x:x.case_id)),dtype='float')
if 'num_each_type' in options:
n = responses.shape[1]
for i in range(4):
responses = np.hstack((responses,responses[:,i:n:4].sum(axis=1).reshape(-1,1)))
return responses,ctrl
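Inside `get_response_matrix`, the `'responses'` option expands each answer into one indicator column per possible choice. The expansion amounts to the following (with a hypothetical `choice_num`; equality, not identity, is the right comparison):

```python
possible_responses = [0, 1, 2, 3]
actual = 2   # hypothetical choice_num for one question

# One indicator per possible choice: 1 where the response matches, else 0.
onehot = [1 if actual == p else 0 for p in possible_responses]
```

Forty questions times four choices is what gives the 160 base response columns per subject.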
def get_labels(kind, exclude_subjects={}):
labels = {}
subjects,tests = bbdp.load(kind)
exclude_subject_ids = []
for key,value in exclude_subjects.items():
exclude_subject_ids += [subject_id for subject_id,subject in subjects.items()\
if subject.__dict__[key]==value]
subjects = {subject_id:subject for subject_id,subject in subjects.items()\
if subject_id not in exclude_subject_ids}
tests = [test for test in tests if test.subject.case_id not in exclude_subject_ids]
n_ctrl = 0
for test in tests:
x = []
if test.subject.label == 'ctrl':
n_ctrl += 1
if kind == 'dugger':
x += [test.subject.label == 'pd']
#x += [test.subject.expired_age-90]
#x += [test.subject.gender]
#if hasattr(test.subject,'dementia'):
# x += [test.subject.dementia]
#x += [test.subject.stint]
if hasattr(test.subject,'other'):
x += test.subject.other
x = [float(_) for _ in x]
labels[test.subject] = x
labels = [labels[key] for key in \
sorted(labels.keys(),key=lambda x:x.case_id)]
return np.array(labels)
def summarize(X_responses,ctrl):
print("The loaded matrix has shape (%d x %d), and there are %d controls" % (X_responses.shape[0],
X_responses.shape[1],
np.sum(ctrl)))
def plot_cumul(X,Y,label):
X_pos = X[Y == True]
X_neg = X[Y == False]
plt.plot(sorted(X_neg),np.linspace(0,1,len(X_neg)),'k',label='-')
plt.plot(sorted(X_pos),np.linspace(0,1,len(X_pos)),'r',label='+')
plt.xlabel(label)
plt.ylabel('Cumulative Probability')
plt.ylim(0,1)
plt.legend(loc=2)
def cross_validate(clf,X,Y,cv,kind):
mean = cross_val_score(clf,X,Y,cv=cv).mean()
print("Cross-validation accuracy for %s is %.3f" % (kind,mean))
def makeXY(keys,uses,x,drop_more=[],restrict=[]):
X = {key:{} for key in keys}
Y = {}
for key in keys:
pos,neg = [_.strip() for _ in key.split('vs')]
if restrict: # Not implemented.
for kind in pos,neg:
pass#x[kind].drop(inplace=1)
n = len(x[pos])+len(x[neg])
#order = np.random.permutation(range(n))
for use,regex in uses.items():
drop = []
for reg in regex:
drop += [feature for feature in list(x[pos]) if re.match(reg,feature)]
drop = list(set(drop))
drop += drop_more
x_new = pd.concat([x[_].drop(drop,axis=1) for _ in (pos,neg)])
#x_new = normalize(x_new,norm='max',axis=0)
#from sklearn.decomposition import PCA,NMF
#pca = PCA(n_components = min(10,x_new.shape[1]))
#nmf = NMF(n_components = min(5,x_new.shape[1]))
#x_new = pca.fit_transform(x_new)
#x_new = nmf.fit_transform(x_new)
X[key][use] = x_new#[order,:]
Y[key] = pd.Series(index=x_new.index,data=np.ones(n))
Y[key].loc[x[neg].index] = 0
return X,Y
def make_py(keys,uses,X,Y,clfs,ignore=None):
p_pos = {}
ys = {}
for key in keys:
#print(key)
splitter = ShuffleSplit(n_splits=100,test_size=0.2,random_state=0)
p_pos[key] = {}
for use in uses:
#print("\t%s" % use)
p_pos[key][use] = {'average':0}
#x = 0
for clf in clfs:
p,ys[key] = get_ps(clf,splitter,X[key][use],Y[key],ignore=ignore); # Extract the probability of PD from each classifier
p = p.clip(1e-15,1-1e-15)
p_pos[key][use][str(clf)[:7]] = p
p_pos[key][use]['average'] += p
p_pos[key][use]['average'] /= len(clfs)
return p_pos,ys
def plot_rocs(keys,uses,p_pos,Y,ys,smooth=True,no_plot=False):
sns.set(font_scale=2)
aucs = {}
aucs_sd = {}
for key in keys:
if not no_plot:
plt.figure()
d = {use:p_pos[key][use]['average'] for use in uses}
n0 = sum(Y[key]==0)
n1 = sum(Y[key]>0)
if n0==0 or n1==0:
print("Missing either positive or negative examples of %s" % key)
continue
if ys[key].std()==0:
print("Missing either positive or negative bootstrap examples of %s" % key)
continue
aucs[key],aucs_sd[key] = plot_roc_curve(ys[key],n0=n0,n1=n1,smooth=smooth,no_plot=no_plot,**d)
for i,(a1,sd1) in enumerate(zip(aucs[key],aucs_sd[key])):
for j,(a2,sd2) in enumerate(zip(aucs[key],aucs_sd[key])):
if i>j:
d = np.abs((a1-a2)/np.sqrt((sd1**2 + sd2**2)/2))
p = (1-norm.cdf(d,0,1))/2
print("\t%s vs %s: p=%.4f" % (sorted(uses)[i],sorted(uses)[j],p))
if not no_plot:
plt.title(keys[key])
save_fig()
return aucs,aucs_sd
def get_ps(clf,splitter,X,Y,ignore=None):
ps = []
ys = []
assert np.array_equal(X.index,Y.index),"X and Y indices must match"
for i, (train, test) in enumerate(splitter.split(Y)):
train = X.index[train]
test = X.index[test]
try:
assert Y.loc[train].mean() not in [0.0,1.0], \
"Must have both positive and negative examples"
except AssertionError as e:
print("Skipping split %d because: %s" % (i,e))
else:
clf.fit(X.loc[train], Y.loc[train])
X_test = X.loc[test].drop(ignore,errors='ignore') if ignore else X.loc[test]
Y_test = Y.loc[test].drop(ignore,errors='ignore') if ignore else Y.loc[test]
n_test_samples = X_test.shape[0]
if n_test_samples:
ps += list(clf.predict_proba(X_test)[:,1])
ys += list(Y_test)
else:
print("Skipping split %d because there are no test samples" % i)
return np.array(ps),np.array(ys)
def get_roc_curve(Y,p,smooth=False):
if not smooth:
fpr, tpr, thresholds = roc_curve(Y, p)
else:
from scipy.stats import gaussian_kde
x = -norm.isf(np.array(p))
x0 = x[Y==0]
x1 = x[Y==1]
threshold = np.linspace(-10,10,201)
fpr = [gaussian_kde(x0,0.2).integrate_box(t,np.inf) for t in threshold]
tpr = [gaussian_kde(x1,0.2).integrate_box(t,np.inf) for t in threshold]
roc_auc = auc(fpr, tpr)
if roc_auc < 0.5:
fpr = 1-np.array(fpr)
tpr = 1-np.array(tpr)
roc_auc = 1-roc_auc
return fpr,tpr,roc_auc
def binormal_roc(Y,p):
x = -norm.isf(np.array(p))
mu0 = x[Y==0].mean()
sigma0 = x[Y==0].std()
mu1 = x[Y==1].mean()
sigma1 = x[Y==1].std()
# Separation
a = (mu1-mu0)/sigma1
# Symmetry
b = sigma0/sigma1
threshold = np.linspace(0,1,1000)
roc = norm.cdf(a-b*norm.isf(threshold))
return threshold,roc
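For the binormal model above, the AUC has the closed form Φ(a/√(1+b²)); integrating the parametrized curve numerically agrees with it (a check with hypothetical values of a and b):

```python
import numpy as np
from scipy.stats import norm

a, b = 1.2, 0.8   # hypothetical separation and symmetry parameters

# ROC parametrized by false positive rate t, as in binormal_roc above.
t = np.linspace(1e-6, 1 - 1e-6, 100001)
roc = norm.cdf(a - b * norm.isf(t))

# Trapezoid-rule AUC vs the closed form Phi(a / sqrt(1 + b^2)).
auc_numeric = (0.5 * (roc[1:] + roc[:-1]) * np.diff(t)).sum()
auc_closed = norm.cdf(a / np.sqrt(1.0 + b * b))
```

The closed form follows from AUC = P(x1 > x0) with x1 − x0 normal with mean μ1 − μ0 and variance σ0² + σ1².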
def bibeta_roc(Y,p):
def logL(ab):
a0,b0,a1,b1 = ab
LL = beta.logpdf(p[Y==0],a0,b0).sum() + beta.logpdf(p[Y==1],a1,b1).sum()
return -LL
result = minimize(logL,[1,3,3,1],bounds=[(1e-7,None)]*4)
a0,b0,a1,b1 = result.x
threshold = np.linspace(0,1,1000)
fpr = 1-beta.cdf(threshold,a0,b0)
tpr = 1-beta.cdf(threshold,a1,b1)
return threshold,fpr,tpr
def rgb2hex(r,g,b):
return "#%0.2x%0.2x%0.2x" % (r,g,b)
def get_colors(i):
black = rgb2hex(0, 0, 0)
blue = rgb2hex(0, 0, 255)
red = rgb2hex(255, 0, 0)
green = rgb2hex(0, 255, 0)
magenta = rgb2hex(255, 0, 255)
brown = rgb2hex(128, 0, 0)
yellow = rgb2hex(255, 255, 0)
pink = rgb2hex(255, 128, 128)
gray = rgb2hex(128, 128, 128)
orange = rgb2hex(255, 128, 0)
colors = [black,blue,red,green,magenta,brown,yellow,pink,gray,orange]
return colors[i % len(colors)]
def plot_roc_curve(Y,n0=None,n1=None,smooth=False,no_plot=False,**ps):
aucs = []
aucs_sd = []
if n0 is None:
n0 = sum(Y==0)
if n1 is None:
n1 = sum(Y>0)
for i,(title,p) in enumerate(sorted(ps.items())):
fpr,tpr,auc = get_roc_curve(Y,p,smooth=smooth)
aucs.append(auc)
# Confidence Intervals for the Area under the ROC Curve
# Cortes and Mohri
# http://www.cs.nyu.edu/~mohri/pub/area.pdf
m = n1
n = n0
A = auc
Pxxy = 0
Pxyy = 0
iters = 10000
for j in range(iters):
index = np.arange(len(Y))
np.random.shuffle(index)
p_shuff = p[index]
Y_shuff = Y[index]
pa,pb = p_shuff[Y_shuff>0][0:2]
na,nb = p_shuff[Y_shuff==0][0:2]
Pxxy += ((pa>na) and (pb>na))
Pxyy += ((na<pa) and (nb<pa))
Pxxy/=iters
Pxyy/=iters
#print(A,Pxxy,Pxyy,m,n)
var = (A*(1-A)+(m-1)*(Pxxy-(A**2))+(n-1)*(Pxyy-(A**2)))/(m*n)
sd = np.sqrt(var)
aucs_sd.append(sd)
if not no_plot:
plt.plot(fpr, tpr, lw=2, color=get_colors(i), label='%s = %0.2f' % (title,auc))
else:
print('%s = %0.3f +/- %0.3f' % (title,auc,sd))
if not no_plot:
plt.xlabel('False Positive Rate')#, fontsize='large', fontweight='bold')
plt.ylabel('True Positive Rate')#, fontsize='large', fontweight='bold')
plt.title('ROC curves')#, fontsize='large', fontweight='bold')
plt.xticks()#fontsize='large', fontweight='bold')
plt.yticks()#fontsize='large', fontweight='bold')
plt.xlim(-0.01,1.01)
plt.ylim(-0.01,1.01)
plt.legend(loc="lower right",fontsize=17)
return aucs,aucs_sd
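`plot_roc_curve` estimates the Cortes–Mohri variance terms Pxxy and Pxyy by repeated random shuffling; on a small sample they can be computed exactly by enumerating pairs (hypothetical scores below):

```python
import numpy as np
from itertools import combinations

pos = np.array([0.9, 0.8, 0.6])   # hypothetical classifier scores, positives
neg = np.array([0.7, 0.4, 0.3])   # hypothetical classifier scores, negatives

# A = P(score_pos > score_neg): the Mann-Whitney form of the AUC.
A = np.mean([p > n for p in pos for n in neg])

# Pxxy = P(two random positives both beat one random negative), exact.
Pxxy = np.mean([(pa > n) and (pb > n)
                for pa, pb in combinations(pos, 2) for n in neg])
# Pxyy = P(one random positive beats two random negatives), exact.
Pxyy = np.mean([(p > na) and (p > nb)
                for na, nb in combinations(neg, 2) for p in pos])
```

These exact values are what the 10000-iteration shuffle in `plot_roc_curve` approximates before plugging into the variance formula.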
def plot_roc_curves(Y,p,ax=None,label='full',title='AUC',color=None):
if ax is None:
fig,ax = plt.subplots(1,1)
colors = {'basic':'gray','total':'pink','all':'red'}
aucs = {}
for key in p:
fpr,tpr,auc = get_roc_curve(Y[key],p[key])
if color is None:
color = 'red' if key not in colors else colors[key]
ax.plot(fpr, tpr, lw=2, color=color,
label={'full':'AUC using\n%s = %0.2f' % (key,auc),
'sparse':'%s = %.2f' % (title,auc)}[label])
aucs[key] = auc
ax.set_xlim(-0.01,1.01)
ax.set_ylim(-0.01,1.01)
ax.set_xlabel('False Positive Rate')#, fontsize='large', fontweight='bold')
ax.set_ylabel('True Positive Rate')#, fontsize='large', fontweight='bold')
ax.set_title(title)#, fontsize='large', fontweight='bold')
# Set the tick labels font
for label in (ax.get_xticklabels() + ax.get_yticklabels()):
pass
#label.set_fontsize('large')
#label.set_fontweight('bold')
ax.legend(loc="lower right")
return aucs
def roc_data(X,Y,clf,n_iter=50,test_size=0.1):
if n_iter is None and test_size is None:
cv = LeaveOneOut()
else:
cv = ShuffleSplit(n_splits=n_iter,test_size=test_size)
n_labels = Y.shape[1]
Y_cv = {i:[] for i in range(n_labels)}
p = {i:[] for i in range(n_labels)}
p_1 = {i:[] for i in range(n_labels)}
p_0 = {i:[] for i in range(n_labels)}
for train, test in cv.split(Y):
clf.fit(X[train,:], Y[train,:])
Y_predicted = clf.predict_proba(X[test,:])
for i in range(Y.shape[1]):
if type(Y_predicted) is list:
p_ = 1 - Y_predicted[i][:,0]
else:
p_ = Y_predicted[:,i]
Y_cv[i] += list(Y[test,i])
p[i] += list(p_)
p_1[i] += list(p_[np.where(Y[test,i]==1)[0]])
p_0[i] += list(p_[np.where(Y[test,i]==0)[0]])
return Y_cv, p, p_1, p_0
def violin_roc(data):
plt.figure(figsize=(15, 15))
sns.set_context("notebook", font_scale=2.5,
rc={"lines.linewidth": 1.5, 'legend.fontsize': 20})
sns.violinplot(x='Predicted Probability', y='Diagnosis', hue='Outcome',
data=data, split=True, inner="quart",
palette={'--': "y", '+': "b"}, orient='h', width=1.0,
scale='area',#count
order=['VaD','Tauopathy NOS','AG','DLB','LB','ILBD',
'PD','AD','Parkinsonism NOS','PSP'])
leg = plt.gca().get_legend()
ltext = leg.get_texts() # all the text.Text instance in the legend
plt.setp(ltext, fontsize=24) # the legend text fontsize
plt.xlim(0,1)
sns.despine(left=True)
def plot_roc_curves_with_ps(Y,Y_cv,Xs,p0,p1,p,pathological,diagnoses=None):
if diagnoses is None:
diagnoses = pathological.columns.values
fig,ax = plt.subplots(len(diagnoses),4)
sns.set_context("notebook", font_scale=1,
rc={"lines.linewidth": 1.5,
'legend.fontsize': 12})
fig.set_size_inches(15,2*len(diagnoses))
for i in range(len(diagnoses)):
diagnosis = diagnoses[i]
ix = list(pathological.columns.values).index(diagnosis)
if diagnoses is not None and diagnosis not in diagnoses:
continue
for j,key in enumerate(['basic','total','all']):
X = Xs[key]
if len(p1[key][ix]) and len(p0[key][ix]):
ax[i,j].hist(p1[key][ix],bins=1000,range=(0,1),color='r',
normed=True,cumulative=True,histtype='step')
ax[i,j].hist(p0[key][ix],bins=1000,range=(0,1),color='k',
normed=True,cumulative=True,histtype='step')
ax[i,j].set_title(diagnosis)
if i==ax.shape[0]-1:
ax[i,j].set_xlabel('Predicted p(pathology)')
if j==0:
ax[i,j].set_ylabel('Cumulative fraction')
ax[i,j].set_xlim(0,1)
ax[i,j].set_ylim(0,1)
plot_roc_curves({key:Y_cv[key][ix] for key in Y_cv},
{key:p[key][ix] for key in p},
ax=ax[i,3],label='sparse')
fig.tight_layout()
def plot_just_rocs(props,ps,ys,imps,plot=True,axes=None,m=None,n=None,color='k',title='AUC'):
if plot:
if axes is None:
if m is None:
m = max(1,1+int(len(props)/4))
if n is None:
n = min(4,len(props))
fig,axes = plt.subplots(m,n,sharex=True,sharey=True,
squeeze=False,figsize=(m*3.5,n*3.5))
y = {}
p = {}
aucs = {}
for i,prop in enumerate(props):
if plot:
ax = axes.flat[i]
for imp in imps:
pimp = ps[imp][i,:,:].ravel() # Unravel all predictions of test data over all cv splits.
yimp = ys[imp][i,:,:].ravel() # Unravel all test data ground truth over all cv splits.
pimp = pimp[np.isnan(yimp)==0] # Remove NaNs (no ground truth)
yimp = yimp[np.isnan(yimp)==0] # Remove NaNs (no ground truth)
p[imp] = pimp[yimp.mask==False]#.compressed()
y[imp] = yimp[yimp.mask==False]#.compressed()
if plot:
aucs[prop] = plot_roc_curves(y,p,ax=ax,label='sparse',title=title,color=color) # Plot on the given axes.
else:
aucsi = {}
for imp in p:
fpr,tpr,auc = get_roc_curve(y[imp],p[imp])
aucsi[imp] = auc
aucs[prop] = aucsi
if plot:
ax.set_title(props[i].replace('Clinpath ','').replace('Nos','NOS')) # Set the title.
if plot:
for i in range(i+1,len(axes.flat)):
ax = axes.flat[i]
ax.set_axis_off()
plt.tight_layout()
return aucs,axes
def report(rs,props,imps):
n_props = list(rs.values())[0].shape[0]
assert n_props == len(props)
n_splits = list(rs.values())[0].shape[1]
for i,prop in enumerate(props):
for imp in imps:
vals = rs[imp][i,:]
print('%s,%s: %.3f +/- %.3f' % (imp,prop,vals.mean(),vals.std()/np.sqrt(n_splits)))
def build_p_frame(p0,p1,pathological,guide):
exclude = ['demunp','ftdpdefined','hs','cbdp'] # Exclude these labels from further analysis.
ps = [] # List of information for data frame.
for y,p_ in [(0,p0),(1,p1)]: # Iterate over no/yes and probabilities of no/yes.
for i in p_['all']: # Iterate over labels (AD, PD, etc.)
label = pathological.columns.values[i] # Name of label.
if label in exclude: # Exclude if in the exclude list.
continue
label = guide.query("Name=='%s'" % label)['Label'].values[0].replace('FD ','') # Fix label.
for value in p_['all'][i]: # Iterate over subjects.
ps.append((label,value,'+' if y else '--')) # Fill the list.
ps = pd.DataFrame(ps, columns=['Diagnosis','Predicted Probability','Outcome']) # Convert to a data frame.
return ps
def fit_models(imps, X, Y, all_props, props=None,
labels=None, n_splits=5,
clf_args={'n_estimators':25,
'max_features':'auto',
'random_state':0}):
if props is None:
props = all_props
n_obs = X['missing'].shape[0] # Number of observations.
n_features = X['missing'].shape[1] # Number of observations.
n_props = len(props) # Number of properties to predict.
test_size = 0.2
if labels is None:
shuffle_split = ShuffleSplit(n_splits=n_splits,
test_size=test_size,random_state=0)
else:
shuffle_split = GroupShuffleSplit(n_splits=n_splits,
test_size=test_size,random_state=0)
n_test_samples = np.max([len(list(shuffle_split.split(range(n_obs),groups=labels))[i][1]) \
for i in range(n_splits)])
rs = {imp:np.ma.zeros((n_props,n_splits)) for imp in imps}
ps = {imp:np.ma.masked_all((n_props,n_splits,n_test_samples)) for imp in imps}
ys = {imp:np.ma.masked_all((n_props,n_splits,n_test_samples)) for imp in imps}
feature_importances = {imp:np.ma.zeros((n_props,n_features,n_splits)) for imp in imps}
for n_prop,prop in enumerate(props):
j = all_props.index(prop)
print("Fitting model for %s..." % prop)
for imp in imps:
for k,(train,test) in enumerate(shuffle_split.split(range(n_obs),
groups=labels)):
X_train,X_test = X[imp][train],X[imp][test]
Y_train,Y_test = Y[imp][train,j],Y['missing'][test,j]
clf_args_ = {key:(value if type(value) is not dict \
else value[prop])\
for key,value in clf_args.items()}
if clf_args_['max_features'] not in [None, 'auto']:
clf_args_['max_features'] = min(X_train.shape[1],
clf_args_['max_features'])
rfc = RandomForestClassifier(**clf_args_)
#if Y_train.shape[1] == 1:
# Y_train = Y_train.ravel()
rfc.fit(X_train,Y_train)
Y_predict = rfc.predict(X_test)#.reshape(-1,n_props)
probs = rfc.predict_proba(X_test)
if probs.shape[1]<2 and probs.mean()==1.0:
n_test_samples = len(probs)
ps[imp][n_prop,k,:n_test_samples] = 0.0
else:
n_test_samples = len(probs[:,1])
ps[imp][n_prop,k,:n_test_samples] = probs[:,1]
ys[imp][n_prop,k,:n_test_samples] = Y_test
rs[imp][n_prop,k] = np.ma.corrcoef(Y_predict,Y_test)[0,1]
feature_importances[imp][n_prop,:,k] = rfc.feature_importances_
return rs,feature_importances,ys,ps
def fit_models_mc(imps, X, Y, all_props, props=None,
labels=None, n_splits=5,
clf_args={'n_estimators':25,
'max_features':'auto',
'random_state':0}):
if props is None:
props = all_props
n_obs = X['missing'].shape[0] # Number of observations.
n_features = X['missing'].shape[1] # Number of observations.
n_props = len(props) # Number of properties to predict.
test_size = 0.2
if labels is None:
shuffle_split = ShuffleSplit(n_splits=n_splits,
test_size=test_size,random_state=0)
else:
shuffle_split = GroupShuffleSplit(n_splits=n_splits,
test_size=test_size,random_state=0)
n_test_samples = np.max([len(list(shuffle_split.split(range(n_obs),groups=labels))[i][1]) \
for i in range(n_splits)])
rs = {imp:np.ma.zeros((n_props,n_splits)) for imp in imps}
ps = {imp:np.ma.masked_all((n_props,n_splits,n_test_samples)) for imp in imps}
ys = {imp:np.ma.masked_all((n_props,n_splits,n_test_samples)) for imp in imps}
feature_importances = None#{imp:np.ma.zeros((n_props,n_features,n_splits)) for imp in imps}
cols = np.array([i for i in range(len(all_props)) if all_props[i] in props])
for imp in imps:
for k,(train,test) in enumerate(shuffle_split.split(range(n_obs),groups=labels)):
#X_train,X_test = X[imp][train][:,cols],X[imp][test][:,cols]
#Y_train,Y_test = Y[imp][train][:,cols],Y['missing'][test][:,cols]
X_train,X_test = X[imp][train,:],X[imp][test,:]
Y_train,Y_test = Y[imp][train,:],Y['missing'][test,:]
clf_args_ = {key:(value if type(value) is not dict \
else value[prop])\
for key,value in clf_args.items()}
if clf_args_['max_features'] not in [None, 'auto']:
clf_args_['max_features'] = min(X_train.shape[1],
clf_args_['max_features'])
rfc = RandomForestClassifier(**clf_args_)
onevsrest = OneVsRestClassifier(rfc)
onevsrest.fit(X_train,Y_train)
Y_predict = onevsrest.predict(X_test)#.reshape(-1,n_props)
probs = onevsrest.predict_proba(X_test)
if probs.shape[1]<2 and probs.mean()==1.0:
n_test_samples = len(probs)
ps[imp][:,k,:n_test_samples] = 0.0
else:
n_test_samples = len(probs[:,1])
ps[imp][:,k,:n_test_samples] = probs.T
ys[imp][:,k,:n_test_samples] = Y_test.T
for i in range(n_props):
rs[imp][i,k] = np.ma.corrcoef(Y_predict[:,i],Y_test[:,i])[0,1]
#feature_importances[imp][n_prop,:,k] = onevsrest.feature_importances_
return rs,feature_importances,ys,ps
def scatter_diag(props,ps,os,x_diag,y_diag,plot=True):
from matplotlib.colors import Colormap as cmap
imp = 'knn'
xi = props.index(x_diag)
yi = props.index(y_diag)
p_x = ps[imp][xi,:,:].ravel() # Unravel all predictions of test data over all cv splits.
p_y = ps[imp][yi,:,:].ravel() # Unravel all test data ground truth over all cv splits.
o_x = os[imp][xi,:,:].ravel() # Unravel all predictions of test data over all cv splits.
o_y = os[imp][yi,:,:].ravel() # Unravel all test data ground truth over all cv splits.
mask = o_x.mask + o_y.mask
p_x = p_x[mask==False]
p_y = p_y[mask==False]
o_x = o_x[mask==False]
o_y = o_y[mask==False]
colors = np.vstack((o_x.data,np.zeros(len(o_x)),o_y.data)).T
colors[colors==0] = 0.2
if plot:
plt.figure(figsize=(10,10))
plt.scatter(p_x+0.02*np.random.rand(p_x.shape[0]),
p_y+0.02*np.random.rand(p_y.shape[0]),
s=15,
c=colors)
plt.xlabel(x_diag)
plt.ylabel(y_diag)
plt.xlim(0,p_x.max()*1.05)
plt.ylim(0,p_y.max()*1.05)
plt.legend()
return p_x,p_y,o_x,o_y
def roc_showdown(p_x,p_y,o_x,o_y,x_diag,y_diag,title='AUC',color='black'):
from sklearn.metrics import roc_curve,auc
p = p_x - p_y
o = o_x - o_y
p = p[np.abs(o)==1] # Only cases where x or y equals 1, but not both.
o = o[np.abs(o)==1]
o = o==1
fpr,tpr,_ = roc_curve(o, p)
plt.plot(fpr,1-tpr,label="%s = %.3f" % (title,auc(fpr,tpr)),c=color)
x_diag = x_diag.replace('Clinpath ','').replace('Nos','NOS')
y_diag = y_diag.replace('Clinpath ','').replace('Nos','NOS')
plt.xlabel('False %s rate' % x_diag)#'Fraction %s misdiagnosed as %s' % (y_diag,x_diag))
plt.ylabel('False %s rate' % y_diag)#'Fraction %s misdiagnosed as %s' % (x_diag,y_diag))
#plt.legend(loc=1)
def imputation(clean,imps=['knn','nnm','softimpute','biscaler']):
imputer = Imputer(strategy='median',axis=0)
X = {'missing':clean.values.astype('float')}
X['median'] = imputer.fit_transform(X['missing'])
if 'knn' in imps:
X['knn'] = KNN(k=3).complete(X['missing'])
if 'nnm' in imps:
X['nnm'] = NuclearNormMinimization().complete(X['missing'])
if 'softimpute' in imps:
X['softimpute'] = SoftImpute().complete(X['missing'])
X['missing'] = np.ma.array(X['missing'],mask=np.isnan(X['missing']))
return X
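A minimal numpy-only sketch of the median-fill plus masked-array bookkeeping that imputation() performs (the 'knn', 'nnm' and 'softimpute' branches assume the fancyimpute package is installed):

```python
import numpy as np

# Toy data matrix with missing entries (NaN)
X = np.array([[1.0, np.nan],
              [3.0, 4.0],
              [np.nan, 8.0]])
med = np.nanmedian(X, axis=0)                 # per-column medians: [2.0, 6.0]
X_median = np.where(np.isnan(X), med, X)      # median-imputed copy
X_missing = np.ma.array(X, mask=np.isnan(X))  # masked original, as imputation() returns
```

np.where broadcasts the 1-D median vector across rows, which mirrors the column-wise Imputer(strategy='median', axis=0) used above.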
def display_importances(all_features,props,feature_importances,style=''):
props = [x.replace('Clinpath','').replace('Nos','NOS') \
for x in props]
df_importances = pandas.DataFrame(columns=props)
f_importance_means = feature_importances['knn'].mean(axis=2)
n_features = f_importance_means.shape[1]
for i,prop in enumerate(props):
f_d_importance_means = [(feature[:20],f_importance_means[i,j].round(3)) for j,feature in enumerate(all_features)]
df_importances[prop] = sorted(f_d_importance_means,key=lambda x:x[1],reverse=True)
index = pandas.Index(range(1,n_features+1))
df_importances.set_index(index,inplace=True)
html = df_importances.head(10).to_html()
    bs = bs4.BeautifulSoup(html, 'html.parser')
for i,th in enumerate(bs.findAll('th')):
th['width'] = '50px'
for i,td in enumerate(bs.findAll('td')):
feature,value = td.text.split(',')
value = float(value.replace(')',''))
feature = feature.replace('(','')
size = 9+3*(value-0.02)/0.1
td.string = feature.lower()#+'(%.3f)'%value
td['style'] = 'font-size:%dpx;' % size
if any([key in td.text for key in ['smell','upsit']]):
td['style'] += 'color:rgb(255,0,0);'
html = bs.html
#print(html)
return HTML('<span style="%s">%s</span>' % (style,html))
def classify(n_ctrl,data,alpha=1.0):
"""Naive bayes prediction of test results."""
n_subjects = data.shape[0]
n_pd = n_subjects - n_ctrl
Y = np.array([0]*n_ctrl + [1]*n_pd)
X = data
from sklearn.naive_bayes import MultinomialNB
from sklearn.cross_validation import train_test_split
from sklearn import metrics
clf = MultinomialNB(alpha=alpha)
scores = []
for i in range(10000):
X_train,X_test,Y_train,Y_test = train_test_split(X,Y,test_size=0.3)
clf.fit(X_train,Y_train)
Y_pred = clf.predict(X_test)
scores.append(metrics.accuracy_score(Y_test,Y_pred))
scores = np.array(scores)
    print("\nClassification Accuracy: %.3f +/- %.3f" \
          % (scores.mean(),scores.std()/np.sqrt(len(scores))))
return clf,X,Y
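The "+/-" figure printed by classify() is a standard error of the mean over repeated random splits; the same computation with the stdlib only (the scores here are synthetic stand-ins for real split accuracies):

```python
import random
import statistics

random.seed(0)
# Hypothetical accuracy scores from 100 repeated random train/test splits
scores = [0.8 + random.uniform(-0.05, 0.05) for _ in range(100)]
mean = statistics.mean(scores)
sem = statistics.stdev(scores) / len(scores) ** 0.5  # standard error of the mean
print("Classification Accuracy: %.3f +/- %.3f" % (mean, sem))
```

Note the denominator is the number of scores, not the last loop index, matching the fix in classify() above.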
def classify2(n_ctrl,data,clf=None):
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline
from sklearn import linear_model, metrics
from sklearn.cross_validation import train_test_split
n_subjects = data.shape[0]
n_pd = n_subjects - n_ctrl
Y = np.array([0]*n_ctrl + [1]*n_pd)
X = data
# Models we will use
if clf is None:
logistic = linear_model.LogisticRegression()
rbm = BernoulliRBM(random_state=0, verbose=True)
clf = Pipeline(steps=[('rbm', rbm), ('logistic', logistic)])
    scores = []
    for i in range(100):
        X_train,X_test,Y_train,Y_test = train_test_split(X,Y,test_size=0.3)
        clf.fit(X_train,Y_train)
        Y_pred = clf.predict(X_test)
        score = metrics.accuracy_score(Y_test,Y_pred)
        scores.append(score)
'''
print("Logistic regression using RBM features:\n%s\n" % (
metrics.classification_report(
Y_test,
classifier.predict(X_test))))
print("Logistic regression using raw pixel features:\n%s\n" % (
metrics.classification_report(
Y_test,
logistic_classifier.predict(X_test))))
'''
print(np.mean(scores),np.std(scores),len(scores))
| gpl-2.0 |
phoebe-project/phoebe2-docs | 2.2/examples/binary_pulsations.py | 2 | 1716 | #!/usr/bin/env python
# coding: utf-8
# Binary with Pulsations
# ============================
#
# **NOTE: pulsations are currently being tested but not yet supported**
#
# Setup
# -----------------------------
# Let's first make sure we have the latest version of PHOEBE 2.1 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
# In[ ]:
get_ipython().system('pip install -I "phoebe>=2.1,<2.2"')
# As always, let's do imports and initialize a logger and a new bundle. See [Building a System](../tutorials/building_a_system.html) for more details.
# In[1]:
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
# Adding Pulsations
# ---------------------
# Let's add one pulsation to each of our stars in the binary.
#
# A pulsation is a feature, and needs to be attached directly to a component upon creation. Providing a tag for 'feature' is entirely optional - if one is not provided it will be created automatically.
# In[2]:
b.add_feature('pulsation', component='primary', feature='puls01')
# In[3]:
#b.add_feature('pulsation', component='secondary', feature='puls02')
# Pulsation Parameters
# -----------------
# Pulsations are defined by a frequency and amplitude
# In[4]:
print(b['puls01'])
# In[5]:
b.set_value(qualifier='l', feature='puls01', value=0)
# In[6]:
b.set_value(qualifier='m', feature='puls01', value=0)
# In[7]:
b.add_dataset('lc', times=np.linspace(0,1,21))
# In[8]:
b.run_compute(irrad_method='none', pbmesh=True)
# In[9]:
plt.clf()
b['model'].animate()
# In[ ]:
| gpl-3.0 |
tracierenea/gnuradio | gr-filter/examples/fft_filter_ccc.py | 47 | 4363 | #!/usr/bin/env python
#
# Copyright 2013 Free Software Foundation, Inc.
#
# This file is part of GNU Radio
#
# GNU Radio is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 3, or (at your option)
# any later version.
#
# GNU Radio is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with GNU Radio; see the file COPYING. If not, write to
# the Free Software Foundation, Inc., 51 Franklin Street,
# Boston, MA 02110-1301, USA.
#
from gnuradio import gr, filter
from gnuradio import analog
from gnuradio import blocks
from gnuradio import eng_notation
from gnuradio.eng_option import eng_option
from optparse import OptionParser
import sys
try:
import scipy
except ImportError:
print "Error: could not import scipy (http://www.scipy.org/)"
sys.exit(1)
try:
import pylab
except ImportError:
print "Error: could not import pylab (http://matplotlib.sourceforge.net/)"
sys.exit(1)
class example_fft_filter_ccc(gr.top_block):
def __init__(self, N, fs, bw0, bw1, tw, atten, D):
gr.top_block.__init__(self)
self._nsamps = N
self._fs = fs
self._bw0 = bw0
self._bw1 = bw1
self._tw = tw
self._at = atten
self._decim = D
taps = filter.firdes.complex_band_pass_2(1, self._fs,
self._bw0, self._bw1,
self._tw, self._at)
print "Num. Taps: ", len(taps)
self.src = analog.noise_source_c(analog.GR_GAUSSIAN, 1)
self.head = blocks.head(gr.sizeof_gr_complex, self._nsamps)
self.filt0 = filter.fft_filter_ccc(self._decim, taps)
self.vsnk_src = blocks.vector_sink_c()
self.vsnk_out = blocks.vector_sink_c()
self.connect(self.src, self.head, self.vsnk_src)
self.connect(self.head, self.filt0, self.vsnk_out)
def main():
parser = OptionParser(option_class=eng_option, conflict_handler="resolve")
parser.add_option("-N", "--nsamples", type="int", default=10000,
help="Number of samples to process [default=%default]")
parser.add_option("-s", "--samplerate", type="eng_float", default=8000,
help="System sample rate [default=%default]")
parser.add_option("-S", "--start-pass", type="eng_float", default=1000,
help="Start of Passband [default=%default]")
parser.add_option("-E", "--end-pass", type="eng_float", default=2000,
help="End of Passband [default=%default]")
parser.add_option("-T", "--transition", type="eng_float", default=100,
help="Transition band [default=%default]")
parser.add_option("-A", "--attenuation", type="eng_float", default=80,
help="Stopband attenuation [default=%default]")
parser.add_option("-D", "--decimation", type="int", default=1,
                      help="Decimation factor [default=%default]")
(options, args) = parser.parse_args ()
put = example_fft_filter_ccc(options.nsamples,
options.samplerate,
options.start_pass,
options.end_pass,
options.transition,
options.attenuation,
options.decimation)
put.run()
data_src = scipy.array(put.vsnk_src.data())
data_snk = scipy.array(put.vsnk_out.data())
# Plot the signals PSDs
nfft = 1024
f1 = pylab.figure(1, figsize=(12,10))
s1 = f1.add_subplot(1,1,1)
s1.psd(data_src, NFFT=nfft, noverlap=nfft/4,
Fs=options.samplerate)
s1.psd(data_snk, NFFT=nfft, noverlap=nfft/4,
Fs=options.samplerate)
f2 = pylab.figure(2, figsize=(12,10))
s2 = f2.add_subplot(1,1,1)
s2.plot(data_src)
s2.plot(data_snk.real, 'g')
pylab.show()
if __name__ == "__main__":
try:
main()
except KeyboardInterrupt:
pass
| gpl-3.0 |
nkeim/trackpy | trackpy/identification.py | 1 | 9989 | #Copyright 2013 Thomas A Caswell
#tcaswell@uchicago.edu
#http://jfi.uchicago.edu/~tcaswell
#
#This program is free software; you can redistribute it and/or modify
#it under the terms of the GNU General Public License as published by
#the Free Software Foundation; either version 3 of the License, or (at
#your option) any later version.
#
#This program is distributed in the hope that it will be useful, but
#WITHOUT ANY WARRANTY; without even the implied warranty of
#MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
#General Public License for more details.
#
#You should have received a copy of the GNU General Public License
#along with this program; if not, see <http://www.gnu.org/licenses>.
from __future__ import division
import numpy as np
import numpy.random as npr
import numba
from scipy import ndimage
import itertools
def find_local_max(img, d_rad, threshold=1e-15, inplace=False):
"""
This is effectively a replacement for pkfnd in the matlab/IDL code.
    The output of this function is meant to be fed into :py:func:`~subpixel_centroid`
The magic of numpy means this should work for any dimension data.
:param img: an ndarray representing the data to find the local maxes
    :param d_rad: the radius of the dilation, the smallest possible spacing between local maxima
:param threshold: optional, voxels < threshold are ignored.
:param inplace: If True, `img` is modified.
:rtype: (d,N) array of the local maximums.
"""
d_rad = int(d_rad)
# knock out singleton dimensions,
# and prepare to change values in thresholding step.
img = np.array(np.squeeze(img))
if not inplace:
img = img.copy() # Otherwise we could mess up use of 'img' by subsequent code.
img[img < threshold] = -np.inf # mask out pixels below threshold
dim = img.ndim # get the dimension of data
# make structuring element
s = ndimage.generate_binary_structure(dim, 1)
# scale it up to the desired size
d_struct = ndimage.iterate_structure(s, int(d_rad))
dilated_img = ndimage.grey_dilation(img,
footprint=d_struct,
cval=0,
mode='constant') # do the dilation
# find the locations that are the local maximum
# TODO clean this up
local_max = np.where(np.exp(img - dilated_img) > (1 - 1e-15))
    # the extra [::-1] is because matplotlib and ndimage disagree on xy vs. yx ordering.
# Finally, there should be nothing within 'd_rad' of the edges of the image
return np.vstack(local_max[::-1])
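The dilate-and-compare trick behind find_local_max() in one dimension, using plain numpy in place of scipy.ndimage.grey_dilation:

```python
import numpy as np

img = np.array([0., 2., 1., 0., 5., 4., 0.])
pad = np.pad(img, 1, constant_values=-np.inf)
# 3-point grey dilation: every pixel becomes the max over its neighborhood
dilated = np.maximum(np.maximum(pad[:-2], pad[1:-1]), pad[2:])
local_max = np.where(img == dilated)[0]
print(local_max)  # indices where a pixel equals its neighborhood maximum
```

A pixel survives only if it already equals the dilated image there, i.e. it is the maximum of its own neighborhood; the -inf padding plays the role of the masked-out below-threshold pixels.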
@numba.autojit
def _refine_centroids_loop(img, local_maxes, mask_rad, offset_masks, d_struct, r2_mask):
results = np.zeros((4, local_maxes.shape[1]), dtype=np.float32)
for i in range(local_maxes.shape[1]):
x = local_maxes[1, i]
y = local_maxes[0, i]
mass = 0.
shiftx_accum = 0.
shifty_accum = 0.
r2 = 0.
imd = 0.
for xi in range(2 * mask_rad + 1):
for yi in range(2 * mask_rad + 1):
if d_struct[xi, yi]:
imd = img[y + yi - mask_rad, x + xi - mask_rad]
mass += imd * d_struct[xi, yi]
shiftx_accum += imd * offset_masks[0, xi, yi]
shifty_accum += imd * offset_masks[1, xi, yi]
r2 += imd * r2_mask[xi, yi]
results[0, i] = shifty_accum / mass # Note that local_maxes has xy backwards.
results[1, i] = shiftx_accum / mass
results[2, i] = mass
results[3, i] = r2
return results
def local_max_crop(img, local_maxes, mask_rad):
"""Prepare local maxes for centroid-finding by removing ones within
'mask_rad' of the image edges.
Returns a shortened 'local_maxes' array.
"""
return local_maxes.compress(
_local_max_within_bounds(img.shape, local_maxes, mask_rad), axis=1)
def _local_max_within_bounds(shape, local_maxes, mask_rad):
"""Determine which of 'local_maxes' is within the bounds 'shape'.
Return array with same length as axis 1 of local_maxes.
"""
lm = local_maxes
return (lm[0,:] >= mask_rad) & (lm[1,:] >= mask_rad) & \
(lm[0,:] <= shape[1] - 1 - mask_rad) & \
(lm[1,:] <= shape[0] - 1 - mask_rad)
def subpixel_centroid(img, local_maxes, mask_rad, struct_shape='circle'):
'''
This is effectively a replacement for cntrd in the matlab/IDL code.
Works for 2D data only. Accelerated by numba.
:param img: the data
:param local_maxes: a (d,N) array with the location of the local maximums (as generated by :py:func:`~find_local_max`)
:param mask_rad: the radius of the mask used for the averaging.
:param struct_shape: ['circle' | 'diamond'] Shape of mask over each particle.
:rtype: (d,N) array of positions, (d,) array of masses, (d,) array of r2,
'''
# First, check that all local maxes are within 'mask_rad' of the image
# edges. Otherwise we will be going outside the bounds of the array in
# _refine_centroids_loop()
if not all(_local_max_within_bounds(img.shape, local_maxes, mask_rad)):
raise IndexError('One or more local maxes are too close to the image edge. Use local_max_crop().')
# Make coordinate order compatible with upcoming code
local_maxes = local_maxes[::-1]
# do some data checking/munging
img = np.squeeze(img) # knock out singleton dimensions
dim = img.ndim
if dim > 2: raise ValueError('Use subpixel_centroid_nd() for dimension > 2')
so = [slice(-mask_rad, mask_rad + 1)] * dim
# Make circular structuring element
if struct_shape == 'circle':
d_struct = (np.sum(np.mgrid[so]**2, 0) <= mask_rad**2).astype(np.int8)
elif struct_shape == 'diamond':
s = ndimage.generate_binary_structure(dim, 1)
# scale it up to the desired size
d_struct = ndimage.iterate_structure(s, int(mask_rad))
else: raise ValueError('Shape must be diamond or circle')
offset_masks = np.array([d_struct * os for os in np.mgrid[so]]).astype(np.int8)
r2_mask = np.zeros(d_struct.shape)
for o in offset_masks:
r2_mask += o ** 2
r2_mask = np.sqrt(r2_mask).astype(float)
results = _refine_centroids_loop(img, local_maxes, mask_rad, offset_masks, d_struct, r2_mask)
pos = (results[0:2,:] + local_maxes)[::-1,:]
m = results[2,:]
r2 = results[3,:]
return pos, m, r2
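The intensity-weighted refinement that _refine_centroids_loop() performs per particle, reduced to a single 1-D window:

```python
import numpy as np

# Intensity window around an integer peak at index 2; the true peak sits between 2 and 3
win = np.array([0.0, 1.0, 3.0, 3.0, 1.0])
offsets = np.arange(len(win)) - 2      # pixel offsets from the window centre
mass = win.sum()                       # total brightness ('m' in the code above)
shift = (win * offsets).sum() / mass   # centroid shift in pixels
print(2 + shift)                       # refined sub-pixel position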
def subpixel_centroid_nd(img, local_maxes, mask_rad):
'''
This is effectively a replacement for cntrd in the matlab/IDL code.
Should work for any dimension data
:param img: the data
:param local_maxes: a (d,N) array with the location of the local maximums (as generated by :py:func:`~find_local_max`)
:param mask_rad: the radius of the mask used for the averaging.
:rtype: (d,N) array of positions, (d,) array of masses, (d,) array of r2,
'''
local_maxes = local_maxes[::-1]
# do some data checking/munging
mask_rad = int(mask_rad)
img = np.squeeze(img) # knock out singleton dimensions
# make sure local_maxes.shape makes sense
dim = img.ndim
s = ndimage.generate_binary_structure(dim, 1)
# scale it up to the desired size
d_struct = ndimage.iterate_structure(s, int(mask_rad))
so = [slice(-mask_rad, mask_rad + 1)] * dim
offset_masks = [d_struct * os for os in np.mgrid[so]]
r2_mask = np.zeros(d_struct.shape)
for o in offset_masks:
r2_mask += o ** 2
r2_mask = np.sqrt(r2_mask)
shifts_lst = []
mass_lst = []
r2_lst = []
for loc in itertools.izip(*local_maxes):
window = [slice(p - mask_rad, p + mask_rad + 1) for p in loc]
img_win = img[window]
mass = np.sum(img_win * d_struct)
mass_lst.append(mass)
shifts_lst.append([np.sum(img_win * o) / mass for o in offset_masks])
r2_lst.append(np.sum(r2_mask * img_win))
sub_pixel = np.array(shifts_lst).T + local_maxes
return sub_pixel[::-1], mass_lst, r2_lst
def band_pass(img, p_rad, hwhm):
'''
Intended to be a replacement for bpass in the matlab/IDL code.
Works by convolving a Gaussian with the image, than a box car and
taking the difference.
:param img: array of data
:param p_rad: the size of the window used for the convolution
:param hwhm: the hwhm of the Gaussian
:rtype: :class:`numpy.ndarray` scaled between 0 and 1
'''
# make sure the input data is an array and float type.
img = np.asarray(img).astype(float)
p_dia = 2 * p_rad + 1
# do the two convolutions.
# These should maybe be replaced with masked kernels, but this is
# faster to code up.
img_boxcar = ndimage.filters.uniform_filter(img, p_dia, mode='nearest', cval=0)
img_gaus = ndimage.filters.gaussian_filter(img, hwhm, mode='nearest', cval=0)
# subtract them
ret_img = img_boxcar - img_gaus
    # kill data at edges where the convolution leaked out
ret_img[ret_img < 0] = 0
ret_img[:p_dia, :] = 0
ret_img[-p_dia:, :] = 0
ret_img[:, :p_dia] = 0
ret_img[:, -p_dia:] = 0
# normalize the image
ret_img -= np.min(ret_img)
ret_img /= np.max(ret_img)
return ret_img
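The smooth-and-subtract idea of band_pass() in one dimension, with np.convolve standing in for the uniform and Gaussian filters (both replaced by boxcars here for simplicity):

```python
import numpy as np

sig = np.zeros(50)
sig[25] = 1.0                             # a single bright particle
narrow = np.ones(3) / 3.0                 # short smoothing kernel
wide = np.ones(11) / 11.0                 # wide kernel estimating the background
bp = np.convolve(sig, narrow, 'same') - np.convolve(sig, wide, 'same')
bp[bp < 0] = 0                            # clip negative lobes
bp -= bp.min()
bp /= bp.max()                            # rescale to [0, 1], as band_pass() does
```

The narrow kernel preserves particle-scale features while the wide kernel tracks the slowly varying background, so their difference suppresses everything away from the particle.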
def gen_fake_data(list_of_locs, p_rad, hwhm, img_shape):
"""
Function to generate fake images for testing purposes
"""
img = np.zeros(img_shape)
def pixel_values(window, loc):
i = np.mgrid[window] - loc.reshape(len(window), *[1] * len(window))
r = np.zeros(i[0].shape)
for _ in i:
r += _ ** 2
return np.exp(-r / (hwhm ** 2))
for loc in itertools.izip(*list_of_locs):
window = [slice(int(p) - (p_rad + 2), int(p) + (p_rad + 2) + 1) for p in loc]
p = pixel_values(window, np.array(loc))
img[window] += p
img *= 5
img += npr.randn(*img.shape) * .1
return img
| gpl-3.0 |
apache/spark | python/pyspark/pandas/__init__.py | 11 | 4308 | #
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import os
import sys
from distutils.version import LooseVersion
import warnings
from pyspark.sql.pandas.utils import require_minimum_pandas_version, require_minimum_pyarrow_version
try:
require_minimum_pandas_version()
require_minimum_pyarrow_version()
except ImportError as e:
if os.environ.get("SPARK_TESTING"):
warnings.warn(str(e))
sys.exit()
else:
raise
import pyarrow
if (
LooseVersion(pyarrow.__version__) >= LooseVersion("2.0.0")
and "PYARROW_IGNORE_TIMEZONE" not in os.environ
):
import logging
logging.warning(
"'PYARROW_IGNORE_TIMEZONE' environment variable was not set. It is required to "
"set this environment variable to '1' in both driver and executor sides if you use "
"pyarrow>=2.0.0. "
"pandas-on-Spark will set it for you but it does not work if there is a Spark context "
"already launched."
)
os.environ["PYARROW_IGNORE_TIMEZONE"] = "1"
from pyspark.pandas.frame import DataFrame
from pyspark.pandas.indexes.base import Index
from pyspark.pandas.indexes.category import CategoricalIndex
from pyspark.pandas.indexes.datetimes import DatetimeIndex
from pyspark.pandas.indexes.multi import MultiIndex
from pyspark.pandas.indexes.numeric import Float64Index, Int64Index
from pyspark.pandas.series import Series
from pyspark.pandas.groupby import NamedAgg
__all__ = [ # noqa: F405
"read_csv",
"read_parquet",
"to_datetime",
"date_range",
"from_pandas",
"get_dummies",
"DataFrame",
"Series",
"Index",
"MultiIndex",
"Int64Index",
"Float64Index",
"CategoricalIndex",
"DatetimeIndex",
"sql",
"range",
"concat",
"melt",
"get_option",
"set_option",
"reset_option",
"read_sql_table",
"read_sql_query",
"read_sql",
"options",
"option_context",
"NamedAgg",
]
def _auto_patch_spark() -> None:
import os
import logging
# Attach a usage logger.
logger_module = os.getenv("KOALAS_USAGE_LOGGER", "")
if logger_module != "":
try:
from pyspark.pandas import usage_logging
usage_logging.attach(logger_module)
except Exception as e:
logger = logging.getLogger("pyspark.pandas.usage_logger")
logger.warning(
"Tried to attach usage logger `{}`, but an exception was raised: {}".format(
logger_module, str(e)
)
)
_frame_has_class_getitem = False
_series_has_class_getitem = False
def _auto_patch_pandas() -> None:
import pandas as pd
# In order to use it in test cases.
global _frame_has_class_getitem
global _series_has_class_getitem
_frame_has_class_getitem = hasattr(pd.DataFrame, "__class_getitem__")
_series_has_class_getitem = hasattr(pd.Series, "__class_getitem__")
if sys.version_info >= (3, 7):
# Just in case pandas implements '__class_getitem__' later.
if not _frame_has_class_getitem:
pd.DataFrame.__class_getitem__ = lambda params: DataFrame.__class_getitem__(params)
if not _series_has_class_getitem:
pd.Series.__class_getitem__ = lambda params: Series.__class_getitem__(params)
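The guard-then-patch pattern used by _auto_patch_pandas() in isolation (Legacy is a stand-in for a class that lacks __class_getitem__):

```python
class Legacy:  # stands in for an older pandas class without __class_getitem__
    pass

if not hasattr(Legacy, "__class_getitem__"):
    Legacy.__class_getitem__ = classmethod(
        lambda cls, params: "%s[%r]" % (cls.__name__, params))

print(Legacy[int])  # subscripting now dispatches to the patched hook
```

Per PEP 560, `Cls[item]` falls back to `Cls.__class_getitem__(item)` when the metaclass has no `__getitem__`, which is why assigning the hook after class creation works.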
_auto_patch_spark()
_auto_patch_pandas()
# Import after the usage logger is attached.
from pyspark.pandas.config import get_option, options, option_context, reset_option, set_option
from pyspark.pandas.namespace import * # F405
from pyspark.pandas.sql_processor import sql
| apache-2.0 |
clauwag/WikipediaGenderInequality | src/GoogleTrendAnalyzer.py | 1 | 19323 | __author__ = 'wagnerca'
from os import listdir
from os.path import isfile, join
import scipy.stats as stats
import numpy as np
import pandas as pd
import pylab as plt
from scipy.stats import itemfreq
import sys
import util as ut
import re
import os
import seaborn as sns
from statsmodels.formula.api import glm
from statsmodels.formula.api import ols
from statsmodels.api import families
from numpy import log
class GoogleTrendAnalyzer:
"""
Analyze Google Trend Output
Google Trends adjusts search data to make comparisons between terms easier. Otherwise, places with the most search volume would always be ranked highest.
To do this, each data point is divided by the total searches of the geography and time range it represents, to compare relative popularity.
The resulting numbers are then scaled to a range of 0 to 100.
"""
def __init__(self, path):
self.datapath = 'data/'+path
self.imgpath = 'img/'+path+"/"
if not os.path.exists(self.imgpath):
os.mkdir(self.imgpath)
self.onlyfiles = [f for f in listdir(self.datapath) if isfile(join(self.datapath, f)) ]
self.logfile = file(self.imgpath+"results-gtrend.txt", "w+")
if not os.path.isfile(self.datapath+'/selected_people.csv'):
print "Selected People File is missing! "
raise Exception("Selected People File is missing!")
# We could also create the file when it is missing
#self.allpeople = pd.read_csv('data/person_data.csv', delimiter=",", header=0)
#print self.allpeople.shape
#self.allpeople = self.create_filename_col(self.allpeople)
#self.create_selected_people_file()
self.people = pd.read_csv(self.datapath+'/selected_people.csv', delimiter=",", header=0, error_bad_lines=False)
self.people = self.create_filename_col( self.people)
#people_with_birthyear = self.people[self.people["birth_year"] > 0]
self.people = self.people[~pd.isnull(self.people.birth_year) & (self.people.birth_year <= 2015)]
self.people['birth_century'] = np.round(np.floor((self.people['birth_year']/100))*100)
def create_selected_people_file(self):
filenames=[]
for file in self.onlyfiles:
#print file
pos = file.find(".csv")
filenames.append(file[0:pos])
#print len(filenames)
selected_filenames = pd.DataFrame({"filename":filenames})
print selected_filenames.head(n=1)
print selected_filenames.shape
# add the computed statistics to the people file
selected_people = self.allpeople.merge(selected_filenames, right_on="filename", left_on="filename", how="inner")
print "AFTER MERGING selected_people"
print selected_people.head(n=1)
print selected_people.shape
selected_people.to_csv(self.datapath+"/selected_people.csv")
def create_filename_col(self, people):
i = 0
for ind,ser in people.iterrows():
#print ind
#print ser
name = ser["label"]
#print name
if "(" in name:
#remove additional info from name
pos = name.find("(")
name = name[0:pos]
# remove the letter-dot stuff
name = re.sub(r'\s[A-Z]\.\s', ' ', name)
# remove quoted stuff e.g. Katharina "kate" Mcking
name = re.sub(r'\s"[A-Za-z]+"\s', ' ', name)
# remove stuff after the comma e.g. James Dean, King from Sweden
name = re.sub(r',\s*.+', ' ', name)
#print name
people.ix[ind, "filename"] = name
i = i+1
if i % 10000 == 0:
print i
return people
def run(self):
regionalEntropy = {}
regionCount = {}
timeEntropy = {}
sumInterest = {}
posInterest = {}
##############################################################################
# PARSE GOOGLE TREND RESULT FILES
##############################################################################
quota_error = 0
for file in self.onlyfiles:
startFromLine = -1
startFromLineTime = -1
pos = file.find(".csv")
filename = file[0:pos]
linesCounter = 1
end = False
endTime = False
with open(self.datapath+"/"+file) as f:
content = f.readlines()
regions = {}
timeseries = {}
for line in content:
if line.startswith("<div id="):
quota_error += 1
print "quota error for %s"%filename
break;
if line.startswith("Region,"):
startFromLine = linesCounter
if line.startswith("Month,"):
startFromLineTime = linesCounter
if ((startFromLine > 0) and (linesCounter > startFromLine) and (not end)):
if line == "\n":
end = True
else:
items = line.split(",")
regions[items[0]] = items[1]
if ((startFromLineTime > 0) and (linesCounter > startFromLineTime) and (not endTime)):
print line
if line == "\n":
endTime = True
else:
items = line.split(",")
if items[1] == ' \n': # sometimes gtrends returns empty field rather than 0
timeseries[items[0]] = "0"
else:
timeseries[items[0]] = items[1]
linesCounter += 1
timeFrequs = map(int, timeseries.values())
regionFrequs = map(int,(regions.values()))
if linesCounter > 1:
sumInterest[filename] = np.sum(timeFrequs)
posInterest[filename] = np.count_nonzero(timeFrequs)
if posInterest[filename] > 0:
timeEntropy[filename] = stats.entropy(timeFrequs)
else:
timeEntropy[filename] = np.nan
regionCount[filename] = len(regionFrequs)
if(np.sum(regionFrequs) > 0):
regionalEntropy[filename] = stats.entropy(regionFrequs)
else:
regionalEntropy[filename] = np.nan
# store results into a dataframe
regionalEntropyDF = pd.DataFrame.from_dict(regionalEntropy.items())
regionalEntropyDF.columns=["filename", "entropy"]
print regionalEntropyDF.head()
regionCountDF = pd.DataFrame.from_dict(regionCount.items())
regionCountDF.columns=["filename", "numRegions"]
interestDF = pd.DataFrame.from_dict(sumInterest.items())
interestDF.columns=["filename", "timeInterest"]
timeEntropyDF = pd.DataFrame.from_dict(timeEntropy.items())
timeEntropyDF.columns=["filename", "timeEntropy"]
posInterestDF = pd.DataFrame.from_dict(posInterest.items())
posInterestDF.columns=["filename", "timePosInterest"]
#print "regionalEntropyDF"
#print regionalEntropyDF.head(n=1)
#print regionalEntropyDF.shape
#print "regionCountDF"
#print regionCountDF.head(n=1)
#print regionCountDF.shape
#print "self.people"
#print self.people.head(n=1)
#print self.people.shape
# add the computed statistics to the people file
self.people = self.people.merge(regionalEntropyDF, right_on="filename", left_on="filename", how="inner")
self.people = self.people.merge(regionCountDF, right_on="filename", left_on="filename", how="inner")
self.people = self.people.merge(timeEntropyDF, right_on="filename", left_on="filename", how="inner")
self.people = self.people.merge(interestDF, right_on="filename", left_on="filename", how="inner")
self.people = self.people.merge(posInterestDF, right_on="filename", left_on="filename", how="inner")
print "AFTER MERGING"
print self.people.head(n=1)
print self.people.shape
##############################################################################
# PLOTS NUM REGIONS
##############################################################################
men = self.people[self.people.gender =="male"]
women = self.people[self.people.gender =="female"]
labels = ['female ('+str(len(women.index))+')', 'male ('+str(len(men.index))+')']
data = [women.numRegions.values, men.numRegions.values]
self.logfile.write("\n Mann Withney U Num regions:")
U, p = stats.mstats.mannwhitneyu(women.numRegions.values, men.numRegions.values)
ut.write_mannwithneyu(U, p, women.numRegions.values, men.numRegions.values, self.logfile)
self.make_boxplot(data, labels, self.imgpath+"gtrend_num_regions_box.png", "num regions")
self.plot_ccdf(np.array(women.numRegions.values.tolist()), np.array(men.numRegions.values.tolist()), labels, self.imgpath+"gtrend_num_regions_ccdf.png", "Num Regions", 1, 0)
ut.plot_facet_dist(self.people, 'gender', 'numRegions', self.imgpath+"gtrend_num_regions.png")
ut.rank_size_plot(self.people, 'numRegions', 'Num Regions Gtrends', self.imgpath+"gtrend_num_regions_ranksize.png")
##############################################################################
# PLOTS TOTAL INTEREST
##############################################################################
data = [women.timeInterest.values, men.timeInterest.values]
self.logfile.write("\n \n Mann Withney U Temp Sum Interest:")
U, p = stats.mstats.mannwhitneyu(women.timeInterest.values, men.timeInterest.values)
ut.write_mannwithneyu(U, p, women.timeInterest.values, men.timeInterest.values, self.logfile)
self.make_boxplot(data, labels, self.imgpath+"gtrend_time_interest_box.png", "sum interest")
self.plot_ccdf(np.array(women.timeInterest.values.tolist()), np.array(men.timeInterest.values.tolist()), labels, self.imgpath+"gtrend_time_interest_ccdf.png", "Sum Interest", 1, 0)
ut.plot_facet_dist(self.people, 'gender', 'timeInterest', self.imgpath+"gtrend_time_interest.png")
data = [women.timePosInterest.values, men.timePosInterest.values]
self.logfile.write("\n\n Mann Withney U Temp Pos Interest:")
U, p = stats.mstats.mannwhitneyu(women.timePosInterest.values, men.timePosInterest.values)
ut.write_mannwithneyu(U, p, women.timePosInterest.values, men.timePosInterest.values, self.logfile)
self.make_boxplot(data, labels, self.imgpath+"gtrend_time_pos_interest_box.png", "num weeks with interest")
self.plot_ccdf(np.array(women.timePosInterest.values.tolist()), np.array(men.timePosInterest.values.tolist()), labels, self.imgpath+"gtrend_time_pos_interest_ccdf.png", "Num weeks with interest", 1, 0)
ut.plot_facet_dist(self.people, 'gender', 'timePosInterest', self.imgpath+"gtrend_time_pos_interest.png")
##############################################################################
# PLOT Entropy Temp INTEREST
##############################################################################
limPeople = self.people[np.isfinite(self.people['timeEntropy'])] #people[people.index not in inds]
men = limPeople[limPeople.gender =="male"]
women = limPeople[limPeople.gender =="female"]
data = [women.timeEntropy.values, men.timeEntropy.values]
self.logfile.write("\n\n Mann Withney U Time Entropy:")
U, p = stats.mstats.mannwhitneyu(women.timeEntropy.values, men.timeEntropy.values)
ut.write_mannwithneyu(U, p, women.timeEntropy.values, men.timeEntropy.values, self.logfile)
self.make_boxplot(data, labels, self.imgpath+"gtrend_time_entropy_box.png", "temporal entropy")
self.plot_ccdf(np.array(women.timeEntropy.values.tolist()), np.array(men.timeEntropy.values.tolist()), labels, self.imgpath+"gtrend_time_entropy_ccdf.png", "Temp Entropy", 1, 0)
ut.plot_facet_dist(self.people, 'gender', 'timeEntropy', self.imgpath+"gtrend_time_entropy.png")
##############################################################################
# PLOT ENTROPY
##############################################################################
# for entropy we need to remove the nan value. If we dont have data the result is nan
limPeople = self.people[np.isfinite(self.people['entropy'])] #people[people.index not in inds]
men = limPeople[limPeople.gender =="male"]
women = limPeople[limPeople.gender =="female"]
labels = ['female ('+str(len(women.index))+')', 'male ('+str(len(men.index))+')']
data = [women.entropy.values, men.entropy.values]
self.logfile.write("\n\n Mann Whitney U Entropy:")
U, p = stats.mstats.mannwhitneyu(women.entropy.values, men.entropy.values)
ut.write_mannwithneyu(U, p, women.entropy.values, men.entropy.values, self.logfile)
self.make_boxplot(data, labels, self.imgpath+"gtrend_region_entropy_box.png", "shannon entropy")
self.plot_ccdf(np.array(women.entropy.values.tolist()), np.array(men.entropy.values.tolist()), labels, self.imgpath+"gtrend_entropy_ccdf.png", "Entropy", 0, 0)
ut.plot_facet_dist(self.people, 'gender', 'entropy', self.imgpath+"gtrend_region_entropy.png")
self.regression()
def regression(self):
print self.people.head(n=1)
self.people.rename(columns={'class': 'dbpedia_class'}, inplace=True) # all_bios is the dataframe with the consolidated data. somehow it doesn't work if the class column is named "class"
self.logfile.write( "\n\n Num Regions NegativeBinomial")
m = glm("numRegions ~ C(gender,Treatment(reference='male')) ", # + C(dbpedia_class,Treatment(reference='http://dbpedia.org/ontology/Person')) + birth_century
data=self.people, family=families.NegativeBinomial()).fit()
self.logfile.write( "\n AIC "+str(m.aic))
self.logfile.write( "\n BIC "+str(m.bic))
for table in m.summary().tables:
self.logfile.write(table.as_latex_tabular())
#lim_people = self.people[self.people.numRegions>0]
self.logfile.write( "\n\n Num Regions OLS")
m = ols("numRegions ~ C(gender,Treatment(reference='male')) ", # + C(dbpedia_class,Treatment(reference='http://dbpedia.org/ontology/Person')) + birth_century
data=self.people).fit()
self.logfile.write( "\n AIC "+str(m.aic))
self.logfile.write( "\n BIC "+str(m.bic))
for table in m.summary().tables:
self.logfile.write(table.as_latex_tabular())
# we could use beta regression for normalized entropy
#print "\n\n Region Entropy"
#m = ols("entropy ~ C(gender,Treatment(reference='male')) ", #+ C(dbpedia_class,Treatment(reference='http://dbpedia.org/ontology/Person')) + birth_century
# data=self.people).fit()
#print m.summary() # <-- this gives you the table of coefficients with p-values, confidence intervals, and so on
self.logfile.write( "\n\n Sum Temp Interest")
m = ols("timeInterest ~ C(gender,Treatment(reference='male')) ", data=self.people).fit()
self.logfile.write( "\n AIC "+str(m.aic))
for table in m.summary().tables:
self.logfile.write(table.as_latex_tabular())
self.logfile.write( "\n\n Pos Temp Interest")
m = glm("timePosInterest ~ C(gender,Treatment(reference='male')) ", data=self.people, family=families.NegativeBinomial()).fit()
self.logfile.write( "\n AIC "+str(m.aic))
self.logfile.write( "\n BIC "+str(m.bic))
for table in m.summary().tables:
self.logfile.write(table.as_latex_tabular())
#lim_people = self.people[self.people.timePosInterest>0]
self.logfile.write( "\n\n Pos Temp Interest OLS")
m = ols("timePosInterest ~ C(gender,Treatment(reference='male')) ", data=self.people).fit()
self.logfile.write( "\n AIC "+str(m.aic))
self.logfile.write( "\n BIC "+str(m.bic))
for table in m.summary().tables:
self.logfile.write(table.as_latex_tabular())
# Beta regression for normalized entropy could work
#print "\n\n Time Entropy"
#m = ols("timeEntropy ~ C(gender,Treatment(reference='male')) ", #+ C(dbpedia_class,Treatment(reference='http://dbpedia.org/ontology/Person')) + birth_century
# data=self.people).fit()
#print m.summary() # <-- this gives you the table of coefficients with p-values, confidence intervals, and so on
def make_boxplot(self, data, labels, filename, ylabel):
plt.figure()
plt.boxplot(data)
# mark the mean
means = [np.mean(x) for x in data]
print ylabel
print means
#print range(1, len(data)+1)
plt.scatter(range(1, len(data)+1), means, color="red", marker=">", s=20)
plt.ylabel(ylabel)
plt.xticks(range(1, len(data)+1), labels)
plt.savefig(filename)
#plt.figure()
#plt.boxplot(data)
## mark the mean
#means = [np.mean(x) for x in data]
#print "entropy means"
#print means
#print range(1, len(data)+1)
#plt.scatter(range(1, len(data)+1), means, color="red", marker=">", s=20)
#plt.ylabel('shannon entropy')
#plt.xticks(range(1, len(data)+1), labels)
#plt.savefig('./img/gtrend_region_entropy.png')
def plot_ccdf(self, women_values, men_values, labels, filename, xlabel, xlog, ylog):
item_frequency_female = itemfreq(women_values)
item_frequency_male = itemfreq(men_values)
ccdf= 1
ut.plot_cdf(list([item_frequency_female, item_frequency_male]), labels, ['pink','blue'], filename, xlabel, ccdf, xlog, ylog)
#item_frequency_female = itemfreq(np.array(women.entropy.values.tolist()))
#item_frequency_male = itemfreq(np.array(men.entropy.values.tolist()))
#ccdf = 1
#ut.plotCDF(list([item_frequency_female, item_frequency_male]), labels, ['pink','blue'], './img/gtrend_region_entropy_ccdf.png', 'Entropy', ccdf, 0, 0)
if __name__ == "__main__":
#analyzer = GoogleTrendAnalyzer('trends')
#analyzer.run()
#analyzer = GoogleTrendAnalyzer('trends-sample-birth1946')
#analyzer.run()
analyzer = GoogleTrendAnalyzer('trends-sample-birth1900')
analyzer.run() | mit |
abhisg/scikit-learn | sklearn/__init__.py | 2 | 3038 | """
Machine learning module for Python
==================================
sklearn is a Python module integrating classical machine
learning algorithms in the tightly-knit world of scientific Python
packages (numpy, scipy, matplotlib).
It aims to provide simple and efficient solutions to learning problems
that are accessible to everybody and reusable in various contexts:
machine-learning as a versatile tool for science and engineering.
See http://scikit-learn.org for complete documentation.
"""
import sys
import re
import warnings
# Make sure that DeprecationWarning within this package always gets printed
warnings.filterwarnings('always', category=DeprecationWarning,
module='^{0}\.'.format(re.escape(__name__)))
# PEP0440 compatible formatted version, see:
# https://www.python.org/dev/peps/pep-0440/
#
# Generic release markers:
# X.Y
# X.Y.Z # For bugfix releases
#
# Admissible pre-release markers:
# X.YaN # Alpha release
# X.YbN # Beta release
# X.YrcN # Release Candidate
# X.Y # Final release
#
# Dev branch marker is: 'X.Y.dev' or 'X.Y.devN' where N is an integer.
# 'X.Y.dev0' is the canonical version of 'X.Y.dev'
#
__version__ = '0.18.dev0'
try:
# This variable is injected in the __builtins__ by the build
# process. It used to enable importing subpackages of sklearn when
# the binaries are not built
__SKLEARN_SETUP__
except NameError:
__SKLEARN_SETUP__ = False
if __SKLEARN_SETUP__:
sys.stderr.write('Partial import of sklearn during the build process.\n')
# We are not importing the rest of the scikit during the build
# process, as it may not be compiled yet
else:
from . import __check_build
from .base import clone
__check_build # avoid flakes unused variable error
__all__ = ['calibration', 'cluster', 'covariance', 'cross_decomposition',
'cross_validation', 'datasets', 'decomposition', 'dummy',
'ensemble', 'externals', 'feature_extraction',
'feature_selection', 'gaussian_process', 'grid_search',
'isotonic', 'kernel_approximation', 'kernel_ridge',
'lda', 'learning_curve',
'linear_model', 'manifold', 'metrics', 'mixture', 'multiclass',
'naive_bayes', 'neighbors', 'neural_network', 'pipeline',
'preprocessing', 'qda', 'random_projection', 'semi_supervised',
'svm', 'tree', 'discriminant_analysis',
# Non-modules:
'clone']
def setup_module(module):
"""Fixture for the tests to assure globally controllable seeding of RNGs"""
import os
import numpy as np
import random
# It could have been provided in the environment
_random_seed = os.environ.get('SKLEARN_SEED', None)
if _random_seed is None:
_random_seed = np.random.uniform() * (2 ** 31 - 1)
_random_seed = int(_random_seed)
print("I: Seeding RNGs with %r" % _random_seed)
np.random.seed(_random_seed)
random.seed(_random_seed)
| bsd-3-clause |
vigilv/scikit-learn | examples/linear_model/plot_polynomial_interpolation.py | 251 | 1895 | #!/usr/bin/env python
"""
========================
Polynomial interpolation
========================
This example demonstrates how to approximate a function with a polynomial of
degree n_degree by using ridge regression. Concretely, from n_samples 1d
points, it suffices to build the Vandermonde matrix, which is n_samples x
n_degree+1 and has the following form:
[[1, x_1, x_1 ** 2, x_1 ** 3, ...],
[1, x_2, x_2 ** 2, x_2 ** 3, ...],
...]
Intuitively, this matrix can be interpreted as a matrix of pseudo features (the
points raised to some power). The matrix is akin to (but different from) the
matrix induced by a polynomial kernel.
This example shows that you can do non-linear regression with a linear model,
using a pipeline to add non-linear features. Kernel methods extend this idea
and can induce very high (even infinite) dimensional feature spaces.
"""
print(__doc__)
# Author: Mathieu Blondel
# Jake Vanderplas
# License: BSD 3 clause
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import Ridge
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline
def f(x):
""" function to approximate by polynomial interpolation"""
return x * np.sin(x)
# generate points used to plot
x_plot = np.linspace(0, 10, 100)
# generate points and keep a subset of them
x = np.linspace(0, 10, 100)
rng = np.random.RandomState(0)
rng.shuffle(x)
x = np.sort(x[:20])
y = f(x)
# create matrix versions of these arrays
X = x[:, np.newaxis]
X_plot = x_plot[:, np.newaxis]
plt.plot(x_plot, f(x_plot), label="ground truth")
plt.scatter(x, y, label="training points")
for degree in [3, 4, 5]:
model = make_pipeline(PolynomialFeatures(degree), Ridge())
model.fit(X, y)
y_plot = model.predict(X_plot)
plt.plot(x_plot, y_plot, label="degree %d" % degree)
plt.legend(loc='lower left')
plt.show()
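The docstring above describes the Vandermonde matrix that the `PolynomialFeatures` pipeline builds implicitly. A minimal sketch of that matrix, using made-up sample points and plain numpy (the values here are purely illustrative):

```python
import numpy as np

# Three hypothetical 1-D sample points.
x = np.array([1.0, 2.0, 3.0])

# Vandermonde matrix for degree 3: columns are x**0, x**1, x**2, x**3,
# i.e. the n_samples x (n_degree + 1) matrix described in the docstring.
V = np.vander(x, N=4, increasing=True)
print(V)
# Row for x_1 = 1.0 is [1, 1, 1, 1]; row for x_2 = 2.0 is [1, 2, 4, 8].
```

Fitting a linear model on these pseudo-features is what makes the pipeline's "non-linear regression with a linear model" work.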
| bsd-3-clause |
phageghost/pg_tools | pgtools/name_translation.py | 1 | 24079 | import os
import csv
import pandas
import argparse
import random
import collections
import itertools
import datetime
import numpy
from pgtools import toolbox
DATA_BASEPATH = toolbox.home_path('orthology/')
MODENCODE_TABLE_FNAME = os.path.join(DATA_BASEPATH, 'modencode/modencode.common.orth.txt')
def load_modencode(table_fname=MODENCODE_TABLE_FNAME, species_list=None, one_to_one=True):
"""
Given the filename of the modEncode orthology table, return a table
of only the species in :species_list:, with gene names cleaned up.
Still don't know what the last two columns are for.
"""
modencode_orth = pandas.read_csv(table_fname, sep='\t', index_col=0,
names=['species_a', 'species_b', 'gene_a', 'gene_b', 'count_a', 'count_b'])
filtered_table = modencode_orth.copy()
if species_list:
filtered_table=filtered_table.loc[numpy.in1d(modencode_orth.species_a, species_list) & numpy.in1d(modencode_orth.species_b, species_list)]
if one_to_one:
filtered_table = filtered_table.loc[(filtered_table.count_a == 1) & (filtered_table.count_b == 1)]
filtered_table.gene_a = [gene.split('_')[-1] for gene in filtered_table.gene_a]
filtered_table.gene_b = [gene.split('_')[-1] for gene in filtered_table.gene_b]
return filtered_table
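`load_modencode` above keeps one-to-one orthologs and strips the species prefix from gene names. A tiny sketch of the same filtering on a made-up frame (the gene names and counts are invented, not real modEncode rows):

```python
import pandas as pd

# Made-up stand-in for the modEncode orthology table.
df = pd.DataFrame({'gene_a': ['mouse_Trp53', 'mouse_Foxp2'],
                   'gene_b': ['human_TP53', 'human_FOXP2'],
                   'count_a': [1, 2], 'count_b': [1, 1]})

# Keep only one-to-one orthologs, as the one_to_one flag does.
one2one = df[(df.count_a == 1) & (df.count_b == 1)].copy()
# Strip the species prefix, keeping the part after the last underscore.
one2one.gene_a = [g.split('_')[-1] for g in one2one.gene_a]
one2one.gene_b = [g.split('_')[-1] for g in one2one.gene_b]
print(one2one[['gene_a', 'gene_b']].values.tolist())  # [['Trp53', 'TP53']]
```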
class ModencodeTranslator():
def _print(self, text):
if self.verbose:
print(text)
def __init__(self, table_fname=MODENCODE_TABLE_FNAME, species_list=('mouse', 'human'), one_to_one=True, verbose=True):
"""
Translates gene common names (only (for now)) between model organism species using data from the
modEncode project (http://compbio.mit.edu/modencode/orthologs/)
Pre-filter using :species_list: for a slight performance gain.
Currently doesn't handle one-to-many assignments properly, need to adapt code from Translator class.
"""
self.verbose = verbose
start_time = datetime.datetime.now()
self._print('Populating orthology data for {} from {} ...'.format(', '.join(species_list), table_fname))
if one_to_one:
self._print('\tUsing one-to-one orthologs only')
modencode_table = load_modencode(MODENCODE_TABLE_FNAME, species_list=species_list, one_to_one=one_to_one)
self._translation=collections.defaultdict(lambda: collections.defaultdict(lambda: {}))
for species_a, outer_group in modencode_table.groupby('species_a'):
for species_b, inner_group in outer_group.groupby('species_b'):
self._translation[species_a][species_b] = {gene_a:gene_b for gene_a, gene_b in zip(modencode_table.gene_a, modencode_table.gene_b)}
self._translation[species_b][species_a] = {gene_b:gene_a for gene_a, gene_b in zip(modencode_table.gene_a, modencode_table.gene_b)}
self._print('Done in {}'.format(datetime.datetime.now() - start_time))
def translate(self, gene_list, source_species, destination_species):
self._print('Translating {} gene names from {} to {}'.format(len(gene_list), source_species, destination_species))
assert source_species in self._translation, 'Source species {} not found!'.format(source_species)
assert destination_species in self._translation[source_species], 'Destination species {} not found!'.format(destination_species)
translated_list = []
for gene in gene_list:
if gene in self._translation[source_species][destination_species]:
translated_list.append(self._translation[source_species][destination_species][gene])
else:
translated_list.append('')
return translated_list
class Translator(object):
"""
This class aims to provide a single object that knows how to translate
various types of gene identifiers both between datasets and across species.
"""
SPECIES_LINEAN = {'human':'Homo sapiens', 'mouse': 'Mus musculus'}
SINGLE_SPECIES_FIELDNAMES = {'ensembl':['Ensembl Gene ID'],
'refseq':['RefSeq mRNA [e.g. NM_001195597]', 'RefSeq ncRNA [e.g. NR_002834]'],
'gene_name':['Associated Gene Name']}
ORTHOLOGY_BIOMART_FIELDNAMES = {'source_ensembl':'Ensembl Gene ID',
'destination_ensembl':'{} Ensembl Gene ID',
'confidence_score':'{} orthology confidence [0 low, 1 high]'}
VALID_ORTHOLOGY_SOURCES = ('compara', 'modencode', 'biomart')
VALID_MULTI = ('none', 'all', 'first', 'last', 'random')
VALID_FORMATS = set(['ensembl', 'refseq', 'gene_name'])
def __init__(self, species_list=('human', 'mouse'), data_basepath=DATA_BASEPATH, ensembl_release=84, orthology_source='compara', species_style='new', verbose=True):
self.verbose=verbose
self.data_basepath = data_basepath
self.ensembl_basepath = os.path.join(self.data_basepath, 'ensembl_{}'.format(ensembl_release))
self.species_translations = {}
for species in species_list:
self._populate_single_species(species, species_style=species_style)
if len(species_list) > 1:
self.orthology = collections.defaultdict(lambda: collections.defaultdict(lambda: {}))
self._populate_orthology(species_list=species_list, orthology_source=orthology_source)
def _print(self, text):
if self.verbose:
print(text)
def _populate_single_species_twofiles(self, species):
# ToDo: auto-generate files from SQL query if not present
print('Populating info for {}'.format(species))
self.species_translations[species] = {'ensembl': {'refseq':collections.defaultdict(lambda: set([])), 'gene_name':collections.defaultdict(lambda: set([]))},
'refseq': {'ensembl':collections.defaultdict(lambda: set([]))},
'gene_name': {'ensembl':collections.defaultdict(lambda: set([]))}}
refseq_fname = os.path.join(self.ensembl_basepath, '{}_ensembl_refseq_mrna.tsv'.format(species))
print('\tPopulating RefSeq transcript IDs from {} ...'.format(refseq_fname))
e_to_r = self.species_translations[species]['ensembl']['refseq']
r_to_e = self.species_translations[species]['refseq']['ensembl']
with open(refseq_fname, 'rt') as refseq_file:
header = refseq_file.readline()
for line in refseq_file:
ensembl_id, refseq_id = line.strip().split('\t')
e_to_r[ensembl_id].add(refseq_id)
r_to_e[refseq_id].add(ensembl_id)
gene_name_fname = os.path.join(self.ensembl_basepath, '{}_ensembl_to_gene_name.tsv'.format(species))
print('\tPopulating gene names from {} ...'.format(gene_name_fname))
e_to_gn = self.species_translations[species]['ensembl']['gene_name']
gn_to_e = self.species_translations[species]['gene_name']['ensembl']
with open(gene_name_fname, 'rt') as gene_name_file:
header = gene_name_file.readline()
for line in gene_name_file:
ensembl_id, gene_name = line.strip().split('\t')
e_to_gn[ensembl_id].add(gene_name)
gn_to_e[gene_name].add(ensembl_id)
# Make caseless dictionaries
for source_format in self.species_translations[species]:
for dest_format in self.species_translations[species][source_format]:
self.species_translations[species][source_format][dest_format] = toolbox.CaselessDict(self.species_translations[species][source_format][dest_format])
def _populate_single_species(self, species, make_caseless=False, species_style='new'):
if species_style=='new':
self._populate_single_species_onefile(species, make_caseless)
else:
self._populate_single_species_twofiles(species)
def _populate_single_species_onefile(self, species, make_caseless):
# ToDo: auto-generate files from SQL query if not present
print('Populating info for {}'.format(species))
self.species_translations[species] = {'ensembl': {'refseq':collections.defaultdict(lambda: set([])), 'gene_name':collections.defaultdict(lambda: set([]))},
'refseq': {'ensembl':collections.defaultdict(lambda: set([])), 'gene_name':collections.defaultdict(lambda: set([]))},
'gene_name': {'refseq':collections.defaultdict(lambda: set([])), 'ensembl':collections.defaultdict(lambda: set([]))}}
data_fname = os.path.join(self.ensembl_basepath, '{}_genes_biomart.tsv'.format(species))
print('\tPopulating info from {} ...'.format(data_fname))
with open(data_fname, 'rt') as data_file:
reader=csv.DictReader(data_file, dialect=csv.excel_tab)
#header = data_file.readline()
for line in reader:
for source_type, dest_type in itertools.permutations(self.SINGLE_SPECIES_FIELDNAMES, 2):
#print source_type, dest_type
for source_field, dest_field in itertools.product(self.SINGLE_SPECIES_FIELDNAMES[source_type], self.SINGLE_SPECIES_FIELDNAMES[dest_type]):
# print source_field, dest_field
if line[source_field] and line[dest_field]:
self.species_translations[species][source_type][dest_type][line[source_field]].add(line[dest_field])
if make_caseless:
# Make caseless dictionaries
for source_format in self.species_translations[species]:
for dest_format in self.species_translations[species][source_format]:
self.species_translations[species][source_format][dest_format] = toolbox.CaselessDict(self.species_translations[species][source_format][dest_format])
print('\tDone.')
def _populate_orthology(self, species_list, orthology_source='biomart', minimum_confidence=0):
toolbox.check_params('orthology_source', orthology_source, self.VALID_ORTHOLOGY_SOURCES)
print('Populating interspecies homologs for {} using {}'.format(', '.join(species_list), orthology_source))
if orthology_source in ('biomart', 'compara'):
for species_pair in itertools.permutations(species_list, 2):
self._populate_orthology_one_species(*species_pair, orthology_source=orthology_source)
elif orthology_source == 'modencode':
self._populate_orthology_modencode(species_list)
def _populate_orthology_modencode(self, species_list, one_to_one=True):
start_time = datetime.datetime.now()
modencode_table_fname=os.path.join(DATA_BASEPATH, 'modencode/modencode.orth.txt')
#self._print('Populating orthology data for {} from {} ...'.format(', '.join(species_list), modencode_table_fname))
if one_to_one:
self._print('\tUsing one-to-one orthologs only')
modencode_table = load_modencode(modencode_table_fname, species_list=species_list, one_to_one=one_to_one)
#self.orthology=collections.defaultdict(lambda: collections.defaultdict(lambda: {}))
for species_a, outer_group in modencode_table.groupby('species_a'):
for species_b, inner_group in outer_group.groupby('species_b'):
self.orthology[species_a][species_b] = {gene_a:gene_b for gene_a, gene_b in zip(modencode_table.gene_a, modencode_table.gene_b)}
self.orthology[species_b][species_a] = {gene_b:gene_a for gene_a, gene_b in zip(modencode_table.gene_a, modencode_table.gene_b)}
self._print('Done in {}'.format(datetime.datetime.now() - start_time))
def _populate_orthology_one_species(self, source_species, destination_species, orthology_source='biomart', minimum_confidence=0):
# ToDo: extend to one2many
# ToDo: load all source-destination pairs in same operation (more efficient)
#assert orthology_source in self.VALID_ORTHOLOGY_SOURCES, 'Invalid orthology source {}. Valid options are: {}'.format(orthology_source, ', '.join(self.VALID_ORTHOLOGY_SOURCES))
if source_species not in self.orthology:
self.orthology[source_species] = {}
if orthology_source == 'compara':
orthology_fname = os.path.join(self.ensembl_basepath,'{}_{}_compara_one2one.tsv'.format(*sorted((source_species, destination_species))))
orthology_data = pandas.read_csv(orthology_fname, sep='\t')
source_col_name = '_'.join(self.SPECIES_LINEAN[source_species].lower().split(' '))
dest_col_name = '_'.join(self.SPECIES_LINEAN[destination_species].lower().split(' '))
if orthology_data['gene1_organism'].iloc[0] == source_col_name and orthology_data['gene2_organism'].iloc[0] == dest_col_name:
source_num, destination_num = 1, 2
elif orthology_data['gene2_organism'].iloc[0] == source_col_name and orthology_data['gene1_organism'].iloc[0] == dest_col_name:
source_num, destination_num = 2, 1
else:
raise Exception('This does not appear to be a valid {} orthology file'.format(orthology_source))
self.orthology[source_species][destination_species] = dict([(ensembl_id1, set([ensembl_id2])) for ensembl_id1, ensembl_id2 in zip(orthology_data.loc[:, ('gene{}_id'.format(source_num))], orthology_data.loc[:, ('gene{}_id'.format(destination_num))])])
elif orthology_source == 'biomart':
orthology_fname = os.path.join(self.ensembl_basepath,'{}_to_{}_biomart.tsv'.format(source_species, destination_species))
orthology_data = pandas.read_csv(orthology_fname, sep='\t')
# lowercase the column names to simplify matching:
orthology_data.columns = [col.lower() for col in orthology_data.columns]
source_col_name = self.ORTHOLOGY_BIOMART_FIELDNAMES['source_ensembl'].lower()
destination_col_name = self.ORTHOLOGY_BIOMART_FIELDNAMES['destination_ensembl'].format(destination_species).lower()
confidence_col_name = self.ORTHOLOGY_BIOMART_FIELDNAMES['confidence_score'].format(destination_species).lower()
#print(orthology_data.columns)
#print(source_col_name, destination_col_name,confidence_col_name)
self.orthology[source_species][destination_species] = {source_ensembl_id:set([destination_ensembl_id]) for source_ensembl_id, destination_ensembl_id, confidence in zip(orthology_data[source_col_name], orthology_data[destination_col_name], orthology_data[confidence_col_name]) if confidence >= minimum_confidence}
#self.orthology[source_species][destination_species] = dict([(ensembl_id1, set([ensembl_id2])) for ensembl_id1, ensembl_id2 in zip(orthology_data.loc[:, ('gene{}_id'.format(source_num))], orthology_data.loc[:, ('gene{}_id'.format(destination_num))])])
def _robust_translate(self, identifier_list, translation_dict, multi='none'):
assert multi in self.VALID_MULTI, 'Invalid value {} for multi-mapping handling. Valid options are: {}'.format(multi, ', '.join(self.VALID_MULTI))
translated_list = []
for identifier in identifier_list:
if identifier in translation_dict:
destination_genes = list(translation_dict[identifier])
if len(destination_genes) > 1:
if multi == 'none':
# warn
print('Multiple translations for {}: {}'.format(identifier, ', '.join(sorted(translation_dict[identifier]))))
translated_list.append('')
elif multi == 'all':
translated_list += destination_genes
elif multi == 'first':
translated_list.append(destination_genes[0])
elif multi == 'last':
translated_list.append(destination_genes[-1])
elif multi == 'random':
translated_list.append(random.choice(destination_genes))
else:
translated_list.append(destination_genes[0])
else:
translated_list.append('')
return translated_list
def all_ids(self, species, id_format):
"""
Returns a list of all known identifiers for the given species in the given format.
"""
pass
def translate(self, identifier_list, source_species, source_format, destination_format=None, destination_species=None, use_source_if_dest_not_found=False, multi='none'):
destination_ids = self.translate_onestep(identifier_list=identifier_list,
source_species=source_species,
source_format=source_format,
destination_format=destination_format,
destination_species=destination_species,
multi=multi)
if use_source_if_dest_not_found:
for i in range(len(identifier_list)):
if not destination_ids[i]:
#print(identifier_list[i])
destination_ids[i] = identifier_list[i]
return destination_ids
def translate_twostep(self, identifier_list, source_species, source_format, destination_format=None, destination_species=None, multi='none'):
assert destination_format or destination_species
assert source_format in self.VALID_FORMATS, 'Invalid source format {}. Valid formats are: {}'.format(source_format, ', '.join(self.VALID_FORMATS))
assert destination_format in self.VALID_FORMATS, 'Invalid destination format {}. Valid formats are: {}'.format(destination_format, ', '.join(self.VALID_FORMATS))
if not destination_species:
destination_species = source_species
if not destination_format:
destination_format = source_format
# first get to ensembl ids in the source species
if source_format != 'ensembl':
print('\tTranslating from {} {} to {} Ensembl IDs'.format(source_species, source_format, source_species))
source_species_ensembl_ids = self._robust_translate(identifier_list, self.species_translations[source_species][source_format]['ensembl'])
else:
source_species_ensembl_ids = identifier_list
# now move species if necessary
if destination_species != source_species:
print('\tTranslating Ensembl IDs from {} to {}'.format(source_species, destination_species))
destination_species_ensembl_ids = self._robust_translate(source_species_ensembl_ids, self.orthology[source_species][destination_species])
else:
destination_species_ensembl_ids = source_species_ensembl_ids
# now change formats in the destination species
if destination_format != 'ensembl':
print('\tTranslating from {} Ensembl IDs to {} {} '.format(destination_species, destination_species, destination_format))
destination_ids = self._robust_translate(destination_species_ensembl_ids, self.species_translations[destination_species]['ensembl'][destination_format])
else:
destination_ids = destination_species_ensembl_ids
return destination_ids
def translate_onestep(self, identifier_list, source_species, source_format, destination_format=None, destination_species=None, multi='none'):
assert destination_format or destination_species
assert source_format in self.VALID_FORMATS, 'Invalid source format {}. Valid formats are: {}'.format(source_format, ', '.join(self.VALID_FORMATS))
assert destination_format in self.VALID_FORMATS, 'Invalid destination format {}. Valid formats are: {}'.format(destination_format, ', '.join(self.VALID_FORMATS))
if not destination_species:
destination_species = source_species
if not destination_format:
destination_format = source_format
print('Converting {} gene identifiers from {} {} to {} {} ...'.format(len(identifier_list), source_species, source_format, destination_species, destination_format))
if destination_species == source_species:
print('Same species: {}'.format(destination_species))
return self._robust_translate(identifier_list, self.species_translations[source_species][source_format][destination_format], multi=multi)
else:
# first get to ensembl ids in the source species
if source_format != 'ensembl':
print('\tTranslating from {} {} to {} Ensembl IDs'.format(source_species, source_format, source_species))
source_species_ensembl_ids = self._robust_translate(identifier_list, self.species_translations[source_species][source_format]['ensembl'], multi=multi)
else:
source_species_ensembl_ids = identifier_list
#print source_species_ensembl_ids
print('\tTranslating Ensembl IDs from {} to {}'.format(source_species, destination_species))
destination_species_ensembl_ids = self._robust_translate(source_species_ensembl_ids, self.orthology[source_species][destination_species], multi=multi)
#print destination_species_ensembl_ids
# now change formats in the destination species
if destination_format != 'ensembl':
print('\tTranslating from {} Ensembl IDs to {} {} '.format(destination_species, destination_species, destination_format))
destination_ids = self._robust_translate(destination_species_ensembl_ids, self.species_translations[destination_species]['ensembl'][destination_format])
else:
destination_ids = destination_species_ensembl_ids
#print destination_ids
return destination_ids
def main():
arg_parser = argparse.ArgumentParser('Name translator')
arg_parser.add_argument('input_file', help='The name of a file containing a list of gene identifiers to be translated')
arg_parser.add_argument('source_species')
arg_parser.add_argument('source_format')
arg_parser.add_argument('destination_species')
arg_parser.add_argument('destination_format')
arg_parser.add_argument('multi', nargs='?', default='none', help='How to handle one_to_many mappings. Options are: none, all, first, last, random')
args = arg_parser.parse_args()
translator = Translator()
with open(args.input_file, 'rt') as in_file:
input_ids = [line.strip() for line in in_file.readlines()]
print('Loaded {} gene identifiers from {}'.format(len(input_ids), args.input_file))
output_ids = translator.translate(input_ids, source_species=args.source_species,
source_format=args.source_format,
destination_format=args.destination_format,
destination_species=args.destination_species,
use_source_if_dest_not_found=True,
multi=args.multi)
output_fname = args.input_file + '_translated'
print('Writing {} translated identifiers to {}'.format(len(output_ids), output_fname))
with open(output_fname, 'wt') as out_file:
for identifier in output_ids:
out_file.write(identifier + '\n')
if __name__ == '__main__':
main()
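`_robust_translate` above resolves one-to-many mappings according to the `multi` flag. A self-contained sketch of the same policies, using a made-up two-entry mapping rather than the real Ensembl dictionaries:

```python
import random

# Hypothetical one-to-many mapping (stand-in for the Ensembl dicts-of-sets).
mapping = {'GeneA': {'ENSG1', 'ENSG2'}, 'GeneB': {'ENSG3'}}

def pick(identifier, multi='none'):
    """Mirror the multi-mapping policies used by Translator._robust_translate."""
    targets = sorted(mapping.get(identifier, []))
    if not targets:
        return ['']                 # unknown identifier -> empty string
    if len(targets) == 1:
        return [targets[0]]         # unambiguous -> pass through
    return {'none': [''],           # ambiguous -> drop
            'all': targets,         # keep every candidate
            'first': [targets[0]],
            'last': [targets[-1]],
            'random': [random.choice(targets)]}[multi]

print(pick('GeneB'))            # unambiguous: ['ENSG3']
print(pick('GeneA', 'all'))     # ambiguous:   ['ENSG1', 'ENSG2']
```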
| mit |
djgagne/scikit-learn | sklearn/metrics/tests/test_ranking.py | 127 | 40813 | from __future__ import division, print_function
import numpy as np
from itertools import product
import warnings
from scipy.sparse import csr_matrix
from sklearn import datasets
from sklearn import svm
from sklearn import ensemble
from sklearn.datasets import make_multilabel_classification
from sklearn.random_projection import sparse_random_matrix
from sklearn.utils.validation import check_array, check_consistent_length
from sklearn.utils.validation import check_random_state
from sklearn.utils.testing import assert_raises, clean_warning_registry
from sklearn.utils.testing import assert_raise_message
from sklearn.utils.testing import assert_equal
from sklearn.utils.testing import assert_almost_equal
from sklearn.utils.testing import assert_array_equal
from sklearn.utils.testing import assert_array_almost_equal
from sklearn.utils.testing import assert_warns
from sklearn.metrics import auc
from sklearn.metrics import average_precision_score
from sklearn.metrics import coverage_error
from sklearn.metrics import label_ranking_average_precision_score
from sklearn.metrics import precision_recall_curve
from sklearn.metrics import label_ranking_loss
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
from sklearn.metrics.base import UndefinedMetricWarning
###############################################################################
# Utilities for testing
def make_prediction(dataset=None, binary=False):
"""Make some classification predictions on a toy dataset using a SVC
If binary is True restrict to a binary classification problem instead of a
multiclass classification problem
"""
if dataset is None:
# import some data to play with
dataset = datasets.load_iris()
X = dataset.data
y = dataset.target
if binary:
# restrict to a binary classification task
X, y = X[y < 2], y[y < 2]
n_samples, n_features = X.shape
p = np.arange(n_samples)
rng = check_random_state(37)
rng.shuffle(p)
X, y = X[p], y[p]
half = int(n_samples / 2)
# add noisy features to make the problem harder and avoid perfect results
rng = np.random.RandomState(0)
X = np.c_[X, rng.randn(n_samples, 200 * n_features)]
# run classifier, get class probabilities and label predictions
clf = svm.SVC(kernel='linear', probability=True, random_state=0)
probas_pred = clf.fit(X[:half], y[:half]).predict_proba(X[half:])
if binary:
# only interested in probabilities of the positive case
# XXX: do we really want a special API for the binary case?
probas_pred = probas_pred[:, 1]
y_pred = clf.predict(X[half:])
y_true = y[half:]
return y_true, y_pred, probas_pred
###############################################################################
# Tests
def _auc(y_true, y_score):
"""Alternative implementation to check for correctness of
`roc_auc_score`."""
pos_label = np.unique(y_true)[1]
# Count the number of times positive samples are correctly ranked above
# negative samples.
pos = y_score[y_true == pos_label]
neg = y_score[y_true != pos_label]
diff_matrix = pos.reshape(1, -1) - neg.reshape(-1, 1)
n_correct = np.sum(diff_matrix > 0)
return n_correct / float(len(pos) * len(neg))
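The pairwise-ranking view of ROC AUC used by the `_auc` helper above can be demonstrated with a minimal, self-contained numpy sketch. The function name and toy data below are illustrative only, not part of the original test suite:

```python
import numpy as np

def auc_pairwise_example():
    # ROC AUC equals the probability that a randomly chosen positive sample
    # is scored above a randomly chosen negative sample.
    y_true = np.array([0, 0, 1, 1])
    y_score = np.array([0.1, 0.4, 0.35, 0.8])
    pos = y_score[y_true == 1]
    neg = y_score[y_true == 0]
    # Count positive/negative pairs where the positive outranks the negative.
    n_correct = (pos[:, None] > neg[None, :]).sum()
    return n_correct / float(pos.size * neg.size)
```

Here three of the four positive/negative pairs are ordered correctly, so the score is 0.75.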
def _average_precision(y_true, y_score):
"""Alternative implementation to check for correctness of
`average_precision_score`."""
pos_label = np.unique(y_true)[1]
n_pos = np.sum(y_true == pos_label)
order = np.argsort(y_score)[::-1]
y_score = y_score[order]
y_true = y_true[order]
score = 0
for i in range(len(y_score)):
if y_true[i] == pos_label:
# Compute precision up to document i
# i.e., the percentage of relevant documents up to document i.
prec = 0
for j in range(0, i + 1):
if y_true[j] == pos_label:
prec += 1.0
prec /= (i + 1.0)
score += prec
return score / n_pos
def test_roc_curve():
# Test Area under Receiver Operating Characteristic (ROC) curve
y_true, _, probas_pred = make_prediction(binary=True)
fpr, tpr, thresholds = roc_curve(y_true, probas_pred)
roc_auc = auc(fpr, tpr)
expected_auc = _auc(y_true, probas_pred)
assert_array_almost_equal(roc_auc, expected_auc, decimal=2)
assert_almost_equal(roc_auc, roc_auc_score(y_true, probas_pred))
assert_equal(fpr.shape, tpr.shape)
assert_equal(fpr.shape, thresholds.shape)
def test_roc_curve_end_points():
# Make sure that roc_curve returns a curve starting at 0 and ending at
# 1, even in corner cases
rng = np.random.RandomState(0)
y_true = np.array([0] * 50 + [1] * 50)
y_pred = rng.randint(3, size=100)
fpr, tpr, thr = roc_curve(y_true, y_pred)
assert_equal(fpr[0], 0)
assert_equal(fpr[-1], 1)
assert_equal(fpr.shape, tpr.shape)
assert_equal(fpr.shape, thr.shape)
def test_roc_returns_consistency():
# Test whether the returned threshold matches up with tpr
# make small toy dataset
y_true, _, probas_pred = make_prediction(binary=True)
fpr, tpr, thresholds = roc_curve(y_true, probas_pred)
# use the given thresholds to determine the tpr
tpr_correct = []
for t in thresholds:
tp = np.sum((probas_pred >= t) & y_true)
p = np.sum(y_true)
tpr_correct.append(1.0 * tp / p)
# compare tpr and tpr_correct to see if the thresholds' order was correct
assert_array_almost_equal(tpr, tpr_correct, decimal=2)
assert_equal(fpr.shape, tpr.shape)
assert_equal(fpr.shape, thresholds.shape)
def test_roc_nonrepeating_thresholds():
# Test to ensure that we don't return spurious repeating thresholds.
# Duplicated thresholds can arise due to machine precision issues.
dataset = datasets.load_digits()
X = dataset['data']
y = dataset['target']
# This random forest classifier can only return probabilities
# significant to two decimal places
clf = ensemble.RandomForestClassifier(n_estimators=100, random_state=0)
# How well can the classifier predict whether a digit is less than 5?
# This task contributes floating point roundoff errors to the probabilities
train, test = slice(None, None, 2), slice(1, None, 2)
probas_pred = clf.fit(X[train], y[train]).predict_proba(X[test])
y_score = probas_pred[:, :5].sum(axis=1) # roundoff errors begin here
y_true = [yy < 5 for yy in y[test]]
# Check for repeating values in the thresholds
fpr, tpr, thresholds = roc_curve(y_true, y_score)
assert_equal(thresholds.size, np.unique(np.round(thresholds, 2)).size)
def test_roc_curve_multi():
# roc_curve not applicable for multi-class problems
y_true, _, probas_pred = make_prediction(binary=False)
assert_raises(ValueError, roc_curve, y_true, probas_pred)
def test_roc_curve_confidence():
# roc_curve for confidence scores
y_true, _, probas_pred = make_prediction(binary=True)
fpr, tpr, thresholds = roc_curve(y_true, probas_pred - 0.5)
roc_auc = auc(fpr, tpr)
assert_array_almost_equal(roc_auc, 0.90, decimal=2)
assert_equal(fpr.shape, tpr.shape)
assert_equal(fpr.shape, thresholds.shape)
def test_roc_curve_hard():
# roc_curve for hard decisions
y_true, pred, probas_pred = make_prediction(binary=True)
# always predict one
trivial_pred = np.ones(y_true.shape)
fpr, tpr, thresholds = roc_curve(y_true, trivial_pred)
roc_auc = auc(fpr, tpr)
assert_array_almost_equal(roc_auc, 0.50, decimal=2)
assert_equal(fpr.shape, tpr.shape)
assert_equal(fpr.shape, thresholds.shape)
# always predict zero
trivial_pred = np.zeros(y_true.shape)
fpr, tpr, thresholds = roc_curve(y_true, trivial_pred)
roc_auc = auc(fpr, tpr)
assert_array_almost_equal(roc_auc, 0.50, decimal=2)
assert_equal(fpr.shape, tpr.shape)
assert_equal(fpr.shape, thresholds.shape)
# hard decisions
fpr, tpr, thresholds = roc_curve(y_true, pred)
roc_auc = auc(fpr, tpr)
assert_array_almost_equal(roc_auc, 0.78, decimal=2)
assert_equal(fpr.shape, tpr.shape)
assert_equal(fpr.shape, thresholds.shape)
def test_roc_curve_one_label():
y_true = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
y_pred = [0, 1, 0, 1, 0, 1, 0, 1, 0, 1]
# assert there are warnings
w = UndefinedMetricWarning
fpr, tpr, thresholds = assert_warns(w, roc_curve, y_true, y_pred)
# all true labels, all fpr should be nan
assert_array_equal(fpr,
np.nan * np.ones(len(thresholds)))
assert_equal(fpr.shape, tpr.shape)
assert_equal(fpr.shape, thresholds.shape)
# assert there are warnings
fpr, tpr, thresholds = assert_warns(w, roc_curve,
[1 - x for x in y_true],
y_pred)
# all negative labels, all tpr should be nan
assert_array_equal(tpr,
np.nan * np.ones(len(thresholds)))
assert_equal(fpr.shape, tpr.shape)
assert_equal(fpr.shape, thresholds.shape)
def test_roc_curve_toydata():
# Binary classification
y_true = [0, 1]
y_score = [0, 1]
tpr, fpr, _ = roc_curve(y_true, y_score)
roc_auc = roc_auc_score(y_true, y_score)
assert_array_almost_equal(tpr, [0, 1])
assert_array_almost_equal(fpr, [1, 1])
assert_almost_equal(roc_auc, 1.)
y_true = [0, 1]
y_score = [1, 0]
tpr, fpr, _ = roc_curve(y_true, y_score)
roc_auc = roc_auc_score(y_true, y_score)
assert_array_almost_equal(tpr, [0, 1, 1])
assert_array_almost_equal(fpr, [0, 0, 1])
assert_almost_equal(roc_auc, 0.)
y_true = [1, 0]
y_score = [1, 1]
tpr, fpr, _ = roc_curve(y_true, y_score)
roc_auc = roc_auc_score(y_true, y_score)
assert_array_almost_equal(tpr, [0, 1])
assert_array_almost_equal(fpr, [0, 1])
assert_almost_equal(roc_auc, 0.5)
y_true = [1, 0]
y_score = [1, 0]
tpr, fpr, _ = roc_curve(y_true, y_score)
roc_auc = roc_auc_score(y_true, y_score)
assert_array_almost_equal(tpr, [0, 1])
assert_array_almost_equal(fpr, [1, 1])
assert_almost_equal(roc_auc, 1.)
y_true = [1, 0]
y_score = [0.5, 0.5]
tpr, fpr, _ = roc_curve(y_true, y_score)
roc_auc = roc_auc_score(y_true, y_score)
assert_array_almost_equal(tpr, [0, 1])
assert_array_almost_equal(fpr, [0, 1])
assert_almost_equal(roc_auc, .5)
y_true = [0, 0]
y_score = [0.25, 0.75]
tpr, fpr, _ = roc_curve(y_true, y_score)
assert_raises(ValueError, roc_auc_score, y_true, y_score)
assert_array_almost_equal(tpr, [0., 0.5, 1.])
assert_array_almost_equal(fpr, [np.nan, np.nan, np.nan])
y_true = [1, 1]
y_score = [0.25, 0.75]
tpr, fpr, _ = roc_curve(y_true, y_score)
assert_raises(ValueError, roc_auc_score, y_true, y_score)
assert_array_almost_equal(tpr, [np.nan, np.nan])
assert_array_almost_equal(fpr, [0.5, 1.])
# Multi-label classification task
y_true = np.array([[0, 1], [0, 1]])
y_score = np.array([[0, 1], [0, 1]])
assert_raises(ValueError, roc_auc_score, y_true, y_score, average="macro")
assert_raises(ValueError, roc_auc_score, y_true, y_score,
average="weighted")
assert_almost_equal(roc_auc_score(y_true, y_score, average="samples"), 1.)
assert_almost_equal(roc_auc_score(y_true, y_score, average="micro"), 1.)
y_true = np.array([[0, 1], [0, 1]])
y_score = np.array([[0, 1], [1, 0]])
assert_raises(ValueError, roc_auc_score, y_true, y_score, average="macro")
assert_raises(ValueError, roc_auc_score, y_true, y_score,
average="weighted")
assert_almost_equal(roc_auc_score(y_true, y_score, average="samples"), 0.5)
assert_almost_equal(roc_auc_score(y_true, y_score, average="micro"), 0.5)
y_true = np.array([[1, 0], [0, 1]])
y_score = np.array([[0, 1], [1, 0]])
assert_almost_equal(roc_auc_score(y_true, y_score, average="macro"), 0)
assert_almost_equal(roc_auc_score(y_true, y_score, average="weighted"), 0)
assert_almost_equal(roc_auc_score(y_true, y_score, average="samples"), 0)
assert_almost_equal(roc_auc_score(y_true, y_score, average="micro"), 0)
y_true = np.array([[1, 0], [0, 1]])
y_score = np.array([[0.5, 0.5], [0.5, 0.5]])
assert_almost_equal(roc_auc_score(y_true, y_score, average="macro"), .5)
assert_almost_equal(roc_auc_score(y_true, y_score, average="weighted"), .5)
assert_almost_equal(roc_auc_score(y_true, y_score, average="samples"), .5)
assert_almost_equal(roc_auc_score(y_true, y_score, average="micro"), .5)
def test_auc():
# Test Area Under Curve (AUC) computation
x = [0, 1]
y = [0, 1]
assert_array_almost_equal(auc(x, y), 0.5)
x = [1, 0]
y = [0, 1]
assert_array_almost_equal(auc(x, y), 0.5)
x = [1, 0, 0]
y = [0, 1, 1]
assert_array_almost_equal(auc(x, y), 0.5)
x = [0, 1]
y = [1, 1]
assert_array_almost_equal(auc(x, y), 1)
x = [0, 0.5, 1]
y = [0, 0.5, 1]
assert_array_almost_equal(auc(x, y), 0.5)
def test_auc_duplicate_values():
# Test Area Under Curve (AUC) computation with duplicate values
# auc() was previously sorting the x and y arrays according to the indices
# from numpy.argsort(x), which was reordering the tied 0's in this example
# and resulting in an incorrect area computation. This test detects the
# error.
x = [-2.0, 0.0, 0.0, 0.0, 1.0]
y1 = [2.0, 0.0, 0.5, 1.0, 1.0]
y2 = [2.0, 1.0, 0.0, 0.5, 1.0]
y3 = [2.0, 1.0, 0.5, 0.0, 1.0]
for y in (y1, y2, y3):
assert_array_almost_equal(auc(x, y, reorder=True), 3.0)
def test_auc_errors():
# Incompatible shapes
assert_raises(ValueError, auc, [0.0, 0.5, 1.0], [0.1, 0.2])
# Too few x values
assert_raises(ValueError, auc, [0.0], [0.1])
# x is not in order
assert_raises(ValueError, auc, [1.0, 0.0, 0.5], [0.0, 0.0, 0.0])
def test_auc_score_non_binary_class():
# Test that roc_auc_score function returns an error when trying
# to compute AUC for non-binary class values.
rng = check_random_state(404)
y_pred = rng.rand(10)
# y_true contains only one class value
y_true = np.zeros(10, dtype="int")
assert_raise_message(ValueError, "ROC AUC score is not defined",
roc_auc_score, y_true, y_pred)
y_true = np.ones(10, dtype="int")
assert_raise_message(ValueError, "ROC AUC score is not defined",
roc_auc_score, y_true, y_pred)
y_true = -np.ones(10, dtype="int")
assert_raise_message(ValueError, "ROC AUC score is not defined",
roc_auc_score, y_true, y_pred)
# y_true contains three different class values
y_true = rng.randint(0, 3, size=10)
assert_raise_message(ValueError, "multiclass format is not supported",
roc_auc_score, y_true, y_pred)
clean_warning_registry()
with warnings.catch_warnings(record=True):
rng = check_random_state(404)
y_pred = rng.rand(10)
# y_true contains only one class value
y_true = np.zeros(10, dtype="int")
assert_raise_message(ValueError, "ROC AUC score is not defined",
roc_auc_score, y_true, y_pred)
y_true = np.ones(10, dtype="int")
assert_raise_message(ValueError, "ROC AUC score is not defined",
roc_auc_score, y_true, y_pred)
y_true = -np.ones(10, dtype="int")
assert_raise_message(ValueError, "ROC AUC score is not defined",
roc_auc_score, y_true, y_pred)
# y_true contains three different class values
y_true = rng.randint(0, 3, size=10)
assert_raise_message(ValueError, "multiclass format is not supported",
roc_auc_score, y_true, y_pred)
def test_precision_recall_curve():
y_true, _, probas_pred = make_prediction(binary=True)
_test_precision_recall_curve(y_true, probas_pred)
# Use {-1, 1} for labels; make sure original labels aren't modified
y_true[np.where(y_true == 0)] = -1
y_true_copy = y_true.copy()
_test_precision_recall_curve(y_true, probas_pred)
assert_array_equal(y_true_copy, y_true)
labels = [1, 0, 0, 1]
predict_probas = [1, 2, 3, 4]
p, r, t = precision_recall_curve(labels, predict_probas)
assert_array_almost_equal(p, np.array([0.5, 0.33333333, 0.5, 1., 1.]))
assert_array_almost_equal(r, np.array([1., 0.5, 0.5, 0.5, 0.]))
assert_array_almost_equal(t, np.array([1, 2, 3, 4]))
assert_equal(p.size, r.size)
assert_equal(p.size, t.size + 1)
def test_precision_recall_curve_pos_label():
y_true, _, probas_pred = make_prediction(binary=False)
pos_label = 2
p, r, thresholds = precision_recall_curve(y_true,
probas_pred[:, pos_label],
pos_label=pos_label)
p2, r2, thresholds2 = precision_recall_curve(y_true == pos_label,
probas_pred[:, pos_label])
assert_array_almost_equal(p, p2)
assert_array_almost_equal(r, r2)
assert_array_almost_equal(thresholds, thresholds2)
assert_equal(p.size, r.size)
assert_equal(p.size, thresholds.size + 1)
def _test_precision_recall_curve(y_true, probas_pred):
# Test Precision-Recall and the area under the PR curve
p, r, thresholds = precision_recall_curve(y_true, probas_pred)
precision_recall_auc = auc(r, p)
assert_array_almost_equal(precision_recall_auc, 0.85, 2)
assert_array_almost_equal(precision_recall_auc,
average_precision_score(y_true, probas_pred))
assert_almost_equal(_average_precision(y_true, probas_pred),
precision_recall_auc, 1)
assert_equal(p.size, r.size)
assert_equal(p.size, thresholds.size + 1)
# Smoke test in the case of proba having only one value
p, r, thresholds = precision_recall_curve(y_true,
np.zeros_like(probas_pred))
precision_recall_auc = auc(r, p)
assert_array_almost_equal(precision_recall_auc, 0.75, 3)
assert_equal(p.size, r.size)
assert_equal(p.size, thresholds.size + 1)
def test_precision_recall_curve_errors():
# Contains non-binary labels
assert_raises(ValueError, precision_recall_curve,
[0, 1, 2], [[0.0], [1.0], [1.0]])
def test_precision_recall_curve_toydata():
with np.errstate(all="raise"):
# Binary classification
y_true = [0, 1]
y_score = [0, 1]
p, r, _ = precision_recall_curve(y_true, y_score)
auc_prc = average_precision_score(y_true, y_score)
assert_array_almost_equal(p, [1, 1])
assert_array_almost_equal(r, [1, 0])
assert_almost_equal(auc_prc, 1.)
y_true = [0, 1]
y_score = [1, 0]
p, r, _ = precision_recall_curve(y_true, y_score)
auc_prc = average_precision_score(y_true, y_score)
assert_array_almost_equal(p, [0.5, 0., 1.])
assert_array_almost_equal(r, [1., 0., 0.])
assert_almost_equal(auc_prc, 0.25)
y_true = [1, 0]
y_score = [1, 1]
p, r, _ = precision_recall_curve(y_true, y_score)
auc_prc = average_precision_score(y_true, y_score)
assert_array_almost_equal(p, [0.5, 1])
assert_array_almost_equal(r, [1., 0])
assert_almost_equal(auc_prc, .75)
y_true = [1, 0]
y_score = [1, 0]
p, r, _ = precision_recall_curve(y_true, y_score)
auc_prc = average_precision_score(y_true, y_score)
assert_array_almost_equal(p, [1, 1])
assert_array_almost_equal(r, [1, 0])
assert_almost_equal(auc_prc, 1.)
y_true = [1, 0]
y_score = [0.5, 0.5]
p, r, _ = precision_recall_curve(y_true, y_score)
auc_prc = average_precision_score(y_true, y_score)
assert_array_almost_equal(p, [0.5, 1])
assert_array_almost_equal(r, [1, 0.])
assert_almost_equal(auc_prc, .75)
y_true = [0, 0]
y_score = [0.25, 0.75]
assert_raises(Exception, precision_recall_curve, y_true, y_score)
assert_raises(Exception, average_precision_score, y_true, y_score)
y_true = [1, 1]
y_score = [0.25, 0.75]
p, r, _ = precision_recall_curve(y_true, y_score)
assert_almost_equal(average_precision_score(y_true, y_score), 1.)
assert_array_almost_equal(p, [1., 1., 1.])
assert_array_almost_equal(r, [1, 0.5, 0.])
# Multi-label classification task
y_true = np.array([[0, 1], [0, 1]])
y_score = np.array([[0, 1], [0, 1]])
assert_raises(Exception, average_precision_score, y_true, y_score,
average="macro")
assert_raises(Exception, average_precision_score, y_true, y_score,
average="weighted")
assert_almost_equal(average_precision_score(y_true, y_score,
average="samples"), 1.)
assert_almost_equal(average_precision_score(y_true, y_score,
average="micro"), 1.)
y_true = np.array([[0, 1], [0, 1]])
y_score = np.array([[0, 1], [1, 0]])
assert_raises(Exception, average_precision_score, y_true, y_score,
average="macro")
assert_raises(Exception, average_precision_score, y_true, y_score,
average="weighted")
assert_almost_equal(average_precision_score(y_true, y_score,
average="samples"), 0.625)
assert_almost_equal(average_precision_score(y_true, y_score,
average="micro"), 0.625)
y_true = np.array([[1, 0], [0, 1]])
y_score = np.array([[0, 1], [1, 0]])
assert_almost_equal(average_precision_score(y_true, y_score,
average="macro"), 0.25)
assert_almost_equal(average_precision_score(y_true, y_score,
average="weighted"), 0.25)
assert_almost_equal(average_precision_score(y_true, y_score,
average="samples"), 0.25)
assert_almost_equal(average_precision_score(y_true, y_score,
average="micro"), 0.25)
y_true = np.array([[1, 0], [0, 1]])
y_score = np.array([[0.5, 0.5], [0.5, 0.5]])
assert_almost_equal(average_precision_score(y_true, y_score,
average="macro"), 0.75)
assert_almost_equal(average_precision_score(y_true, y_score,
average="weighted"), 0.75)
assert_almost_equal(average_precision_score(y_true, y_score,
average="samples"), 0.75)
assert_almost_equal(average_precision_score(y_true, y_score,
average="micro"), 0.75)
def test_score_scale_invariance():
# Test that average_precision_score and roc_auc_score are invariant by
# the scaling or shifting of probabilities
y_true, _, probas_pred = make_prediction(binary=True)
roc_auc = roc_auc_score(y_true, probas_pred)
roc_auc_scaled = roc_auc_score(y_true, 100 * probas_pred)
roc_auc_shifted = roc_auc_score(y_true, probas_pred - 10)
assert_equal(roc_auc, roc_auc_scaled)
assert_equal(roc_auc, roc_auc_shifted)
pr_auc = average_precision_score(y_true, probas_pred)
pr_auc_scaled = average_precision_score(y_true, 100 * probas_pred)
pr_auc_shifted = average_precision_score(y_true, probas_pred - 10)
assert_equal(pr_auc, pr_auc_scaled)
assert_equal(pr_auc, pr_auc_shifted)
def check_lrap_toy(lrap_score):
# Check on several small examples that it works
assert_almost_equal(lrap_score([[0, 1]], [[0.25, 0.75]]), 1)
assert_almost_equal(lrap_score([[0, 1]], [[0.75, 0.25]]), 1 / 2)
assert_almost_equal(lrap_score([[1, 1]], [[0.75, 0.25]]), 1)
assert_almost_equal(lrap_score([[0, 0, 1]], [[0.25, 0.5, 0.75]]), 1)
assert_almost_equal(lrap_score([[0, 1, 0]], [[0.25, 0.5, 0.75]]), 1 / 2)
assert_almost_equal(lrap_score([[0, 1, 1]], [[0.25, 0.5, 0.75]]), 1)
assert_almost_equal(lrap_score([[1, 0, 0]], [[0.25, 0.5, 0.75]]), 1 / 3)
assert_almost_equal(lrap_score([[1, 0, 1]], [[0.25, 0.5, 0.75]]),
(2 / 3 + 1 / 1) / 2)
assert_almost_equal(lrap_score([[1, 1, 0]], [[0.25, 0.5, 0.75]]),
(2 / 3 + 1 / 2) / 2)
assert_almost_equal(lrap_score([[0, 0, 1]], [[0.75, 0.5, 0.25]]), 1 / 3)
assert_almost_equal(lrap_score([[0, 1, 0]], [[0.75, 0.5, 0.25]]), 1 / 2)
assert_almost_equal(lrap_score([[0, 1, 1]], [[0.75, 0.5, 0.25]]),
(1 / 2 + 2 / 3) / 2)
assert_almost_equal(lrap_score([[1, 0, 0]], [[0.75, 0.5, 0.25]]), 1)
assert_almost_equal(lrap_score([[1, 0, 1]], [[0.75, 0.5, 0.25]]),
(1 + 2 / 3) / 2)
assert_almost_equal(lrap_score([[1, 1, 0]], [[0.75, 0.5, 0.25]]), 1)
assert_almost_equal(lrap_score([[1, 1, 1]], [[0.75, 0.5, 0.25]]), 1)
assert_almost_equal(lrap_score([[0, 0, 1]], [[0.5, 0.75, 0.25]]), 1 / 3)
assert_almost_equal(lrap_score([[0, 1, 0]], [[0.5, 0.75, 0.25]]), 1)
assert_almost_equal(lrap_score([[0, 1, 1]], [[0.5, 0.75, 0.25]]),
(1 + 2 / 3) / 2)
assert_almost_equal(lrap_score([[1, 0, 0]], [[0.5, 0.75, 0.25]]), 1 / 2)
assert_almost_equal(lrap_score([[1, 0, 1]], [[0.5, 0.75, 0.25]]),
(1 / 2 + 2 / 3) / 2)
assert_almost_equal(lrap_score([[1, 1, 0]], [[0.5, 0.75, 0.25]]), 1)
assert_almost_equal(lrap_score([[1, 1, 1]], [[0.5, 0.75, 0.25]]), 1)
# Tie handling
assert_almost_equal(lrap_score([[1, 0]], [[0.5, 0.5]]), 0.5)
assert_almost_equal(lrap_score([[0, 1]], [[0.5, 0.5]]), 0.5)
assert_almost_equal(lrap_score([[1, 1]], [[0.5, 0.5]]), 1)
assert_almost_equal(lrap_score([[0, 0, 1]], [[0.25, 0.5, 0.5]]), 0.5)
assert_almost_equal(lrap_score([[0, 1, 0]], [[0.25, 0.5, 0.5]]), 0.5)
assert_almost_equal(lrap_score([[0, 1, 1]], [[0.25, 0.5, 0.5]]), 1)
assert_almost_equal(lrap_score([[1, 0, 0]], [[0.25, 0.5, 0.5]]), 1 / 3)
assert_almost_equal(lrap_score([[1, 0, 1]], [[0.25, 0.5, 0.5]]),
(2 / 3 + 1 / 2) / 2)
assert_almost_equal(lrap_score([[1, 1, 0]], [[0.25, 0.5, 0.5]]),
(2 / 3 + 1 / 2) / 2)
assert_almost_equal(lrap_score([[1, 1, 1]], [[0.25, 0.5, 0.5]]), 1)
assert_almost_equal(lrap_score([[1, 1, 0]], [[0.5, 0.5, 0.5]]), 2 / 3)
assert_almost_equal(lrap_score([[1, 1, 1, 0]], [[0.5, 0.5, 0.5, 0.5]]),
3 / 4)
def check_zero_or_all_relevant_labels(lrap_score):
random_state = check_random_state(0)
for n_labels in range(2, 5):
y_score = random_state.uniform(size=(1, n_labels))
y_score_ties = np.zeros_like(y_score)
# No relevant labels
y_true = np.zeros((1, n_labels))
assert_equal(lrap_score(y_true, y_score), 1.)
assert_equal(lrap_score(y_true, y_score_ties), 1.)
# Only relevant labels
y_true = np.ones((1, n_labels))
assert_equal(lrap_score(y_true, y_score), 1.)
assert_equal(lrap_score(y_true, y_score_ties), 1.)
# Degenerate case: only one label
assert_almost_equal(lrap_score([[1], [0], [1], [0]],
[[0.5], [0.5], [0.5], [0.5]]), 1.)
def check_lrap_error_raised(lrap_score):
# Raise a ValueError if the input is not in an appropriate format
assert_raises(ValueError, lrap_score,
[0, 1, 0], [0.25, 0.3, 0.2])
assert_raises(ValueError, lrap_score, [0, 1, 2],
[[0.25, 0.75, 0.0], [0.7, 0.3, 0.0], [0.8, 0.2, 0.0]])
assert_raises(ValueError, lrap_score, [(0), (1), (2)],
[[0.25, 0.75, 0.0], [0.7, 0.3, 0.0], [0.8, 0.2, 0.0]])
# Check that y_true.shape != y_score.shape raises the proper exception
assert_raises(ValueError, lrap_score, [[0, 1], [0, 1]], [0, 1])
assert_raises(ValueError, lrap_score, [[0, 1], [0, 1]], [[0, 1]])
assert_raises(ValueError, lrap_score, [[0, 1], [0, 1]], [[0], [1]])
assert_raises(ValueError, lrap_score, [[0, 1]], [[0, 1], [0, 1]])
assert_raises(ValueError, lrap_score, [[0], [1]], [[0, 1], [0, 1]])
assert_raises(ValueError, lrap_score, [[0, 1], [0, 1]], [[0], [1]])
def check_lrap_only_ties(lrap_score):
# Check tie handling in the score
# Basic check with only ties and an increasing label space
for n_labels in range(2, 10):
y_score = np.ones((1, n_labels))
# Check for a growing number of consecutive relevant labels
for n_relevant in range(1, n_labels):
# Check for a bunch of positions
for pos in range(n_labels - n_relevant):
y_true = np.zeros((1, n_labels))
y_true[0, pos:pos + n_relevant] = 1
assert_almost_equal(lrap_score(y_true, y_score),
n_relevant / n_labels)
def check_lrap_without_tie_and_increasing_score(lrap_score):
# Check that label ranking average precision works without ties
# Basic check with increasing label space size and decreasing scores
for n_labels in range(2, 10):
y_score = n_labels - (np.arange(n_labels).reshape((1, n_labels)) + 1)
# First and last
y_true = np.zeros((1, n_labels))
y_true[0, 0] = 1
y_true[0, -1] = 1
assert_almost_equal(lrap_score(y_true, y_score),
(2 / n_labels + 1) / 2)
# Check for a growing number of consecutive relevant labels
for n_relevant in range(1, n_labels):
# Check for a bunch of positions
for pos in range(n_labels - n_relevant):
y_true = np.zeros((1, n_labels))
y_true[0, pos:pos + n_relevant] = 1
assert_almost_equal(lrap_score(y_true, y_score),
sum((r + 1) / ((pos + r + 1) * n_relevant)
for r in range(n_relevant)))
def _my_lrap(y_true, y_score):
"""Simple implementation of label ranking average precision"""
check_consistent_length(y_true, y_score)
y_true = check_array(y_true)
y_score = check_array(y_score)
n_samples, n_labels = y_true.shape
score = np.empty((n_samples, ))
for i in range(n_samples):
# The best rank corresponds to 1. Ranks higher than 1 are worse.
# The best inverse ranking corresponds to n_labels.
unique_rank, inv_rank = np.unique(y_score[i], return_inverse=True)
n_ranks = unique_rank.size
rank = n_ranks - inv_rank
# Ranks need to be corrected to take ties into account:
# e.g., two labels tied at rank 1 (ex aequo) both count as rank 2.
corr_rank = np.bincount(rank, minlength=n_ranks + 1).cumsum()
rank = corr_rank[rank]
relevant = y_true[i].nonzero()[0]
if relevant.size == 0 or relevant.size == n_labels:
score[i] = 1
continue
score[i] = 0.
for label in relevant:
# Count the number of relevant labels with a better rank
# (i.e. a smaller rank).
n_ranked_above = sum(rank[r] <= rank[label] for r in relevant)
# Weight by the rank of the actual label
score[i] += n_ranked_above / rank[label]
score[i] /= relevant.size
return score.mean()
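A worked instance of the LRAP computation implemented by `_my_lrap` can be reproduced with plain numpy; the toy case below matches the `[[1, 0, 1]]` / `[[0.25, 0.5, 0.75]]` assertion in `check_lrap_toy`. The function name is illustrative and this is only a single-sample sketch:

```python
import numpy as np

def lrap_example():
    # LRAP for one sample: for each relevant label, compute the fraction of
    # labels ranked at or above it that are relevant, then average.
    y_true = np.array([1, 0, 1])
    y_score = np.array([0.25, 0.5, 0.75])
    # rank[j] = number of labels scored at least as high as label j
    rank = np.array([(y_score >= s).sum() for s in y_score])
    relevant = np.flatnonzero(y_true)
    prec = [(rank[relevant] <= rank[j]).sum() / rank[j] for j in relevant]
    return float(np.mean(prec))
```

Label 2 (rank 1) contributes 1/1 and label 0 (rank 3) contributes 2/3, giving (1 + 2/3) / 2 = 5/6.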
def check_alternative_lrap_implementation(lrap_score, n_classes=5,
n_samples=20, random_state=0):
_, y_true = make_multilabel_classification(n_features=1,
allow_unlabeled=False,
random_state=random_state,
n_classes=n_classes,
n_samples=n_samples)
# Score with ties
y_score = sparse_random_matrix(n_components=y_true.shape[0],
n_features=y_true.shape[1],
random_state=random_state)
if hasattr(y_score, "toarray"):
y_score = y_score.toarray()
score_lrap = label_ranking_average_precision_score(y_true, y_score)
score_my_lrap = _my_lrap(y_true, y_score)
assert_almost_equal(score_lrap, score_my_lrap)
# Uniform score
random_state = check_random_state(random_state)
y_score = random_state.uniform(size=(n_samples, n_classes))
score_lrap = label_ranking_average_precision_score(y_true, y_score)
score_my_lrap = _my_lrap(y_true, y_score)
assert_almost_equal(score_lrap, score_my_lrap)
def test_label_ranking_avp():
for fn in [label_ranking_average_precision_score, _my_lrap]:
yield check_lrap_toy, fn
yield check_lrap_without_tie_and_increasing_score, fn
yield check_lrap_only_ties, fn
yield check_zero_or_all_relevant_labels, fn
yield check_lrap_error_raised, label_ranking_average_precision_score
for n_samples, n_classes, random_state in product((1, 2, 8, 20),
(2, 5, 10),
range(1)):
yield (check_alternative_lrap_implementation,
label_ranking_average_precision_score,
n_classes, n_samples, random_state)
def test_coverage_error():
# Toy case
assert_almost_equal(coverage_error([[0, 1]], [[0.25, 0.75]]), 1)
assert_almost_equal(coverage_error([[0, 1]], [[0.75, 0.25]]), 2)
assert_almost_equal(coverage_error([[1, 1]], [[0.75, 0.25]]), 2)
assert_almost_equal(coverage_error([[0, 0]], [[0.75, 0.25]]), 0)
assert_almost_equal(coverage_error([[0, 0, 0]], [[0.25, 0.5, 0.75]]), 0)
assert_almost_equal(coverage_error([[0, 0, 1]], [[0.25, 0.5, 0.75]]), 1)
assert_almost_equal(coverage_error([[0, 1, 0]], [[0.25, 0.5, 0.75]]), 2)
assert_almost_equal(coverage_error([[0, 1, 1]], [[0.25, 0.5, 0.75]]), 2)
assert_almost_equal(coverage_error([[1, 0, 0]], [[0.25, 0.5, 0.75]]), 3)
assert_almost_equal(coverage_error([[1, 0, 1]], [[0.25, 0.5, 0.75]]), 3)
assert_almost_equal(coverage_error([[1, 1, 0]], [[0.25, 0.5, 0.75]]), 3)
assert_almost_equal(coverage_error([[1, 1, 1]], [[0.25, 0.5, 0.75]]), 3)
assert_almost_equal(coverage_error([[0, 0, 0]], [[0.75, 0.5, 0.25]]), 0)
assert_almost_equal(coverage_error([[0, 0, 1]], [[0.75, 0.5, 0.25]]), 3)
assert_almost_equal(coverage_error([[0, 1, 0]], [[0.75, 0.5, 0.25]]), 2)
assert_almost_equal(coverage_error([[0, 1, 1]], [[0.75, 0.5, 0.25]]), 3)
assert_almost_equal(coverage_error([[1, 0, 0]], [[0.75, 0.5, 0.25]]), 1)
assert_almost_equal(coverage_error([[1, 0, 1]], [[0.75, 0.5, 0.25]]), 3)
assert_almost_equal(coverage_error([[1, 1, 0]], [[0.75, 0.5, 0.25]]), 2)
assert_almost_equal(coverage_error([[1, 1, 1]], [[0.75, 0.5, 0.25]]), 3)
assert_almost_equal(coverage_error([[0, 0, 0]], [[0.5, 0.75, 0.25]]), 0)
assert_almost_equal(coverage_error([[0, 0, 1]], [[0.5, 0.75, 0.25]]), 3)
assert_almost_equal(coverage_error([[0, 1, 0]], [[0.5, 0.75, 0.25]]), 1)
assert_almost_equal(coverage_error([[0, 1, 1]], [[0.5, 0.75, 0.25]]), 3)
assert_almost_equal(coverage_error([[1, 0, 0]], [[0.5, 0.75, 0.25]]), 2)
assert_almost_equal(coverage_error([[1, 0, 1]], [[0.5, 0.75, 0.25]]), 3)
assert_almost_equal(coverage_error([[1, 1, 0]], [[0.5, 0.75, 0.25]]), 2)
assert_almost_equal(coverage_error([[1, 1, 1]], [[0.5, 0.75, 0.25]]), 3)
# Non-trivial case
assert_almost_equal(coverage_error([[0, 1, 0], [1, 1, 0]],
[[0.1, 10., -3], [0, 1, 3]]),
(1 + 3) / 2.)
assert_almost_equal(coverage_error([[0, 1, 0], [1, 1, 0], [0, 1, 1]],
[[0.1, 10, -3], [0, 1, 3], [0, 2, 0]]),
(1 + 3 + 3) / 3.)
assert_almost_equal(coverage_error([[0, 1, 0], [1, 1, 0], [0, 1, 1]],
[[0.1, 10, -3], [3, 1, 3], [0, 2, 0]]),
(1 + 3 + 3) / 3.)
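The coverage-error definition exercised above can be restated as a short numpy sketch: for each sample, find how deep in the score-sorted label list one must go to cover every relevant label, ties included. The toy data mirrors the first non-trivial assertion; the function name is illustrative and the sketch assumes every sample has at least one relevant label:

```python
import numpy as np

def coverage_example():
    y_true = np.array([[0, 1, 0], [1, 1, 0]])
    y_score = np.array([[0.1, 10.0, -3.0], [0.0, 1.0, 3.0]])
    cov = []
    for t, s in zip(y_true, y_score):
        relevant = s[t == 1]
        # Depth of the worst-ranked relevant label (ties included).
        cov.append((s >= relevant.min()).sum())
    return float(np.mean(cov))
```

The two samples need depths 1 and 3, so the average coverage error is (1 + 3) / 2 = 2.0.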
def test_coverage_tie_handling():
assert_almost_equal(coverage_error([[0, 0]], [[0.5, 0.5]]), 0)
assert_almost_equal(coverage_error([[1, 0]], [[0.5, 0.5]]), 2)
assert_almost_equal(coverage_error([[0, 1]], [[0.5, 0.5]]), 2)
assert_almost_equal(coverage_error([[1, 1]], [[0.5, 0.5]]), 2)
assert_almost_equal(coverage_error([[0, 0, 0]], [[0.25, 0.5, 0.5]]), 0)
assert_almost_equal(coverage_error([[0, 0, 1]], [[0.25, 0.5, 0.5]]), 2)
assert_almost_equal(coverage_error([[0, 1, 0]], [[0.25, 0.5, 0.5]]), 2)
assert_almost_equal(coverage_error([[0, 1, 1]], [[0.25, 0.5, 0.5]]), 2)
assert_almost_equal(coverage_error([[1, 0, 0]], [[0.25, 0.5, 0.5]]), 3)
assert_almost_equal(coverage_error([[1, 0, 1]], [[0.25, 0.5, 0.5]]), 3)
assert_almost_equal(coverage_error([[1, 1, 0]], [[0.25, 0.5, 0.5]]), 3)
assert_almost_equal(coverage_error([[1, 1, 1]], [[0.25, 0.5, 0.5]]), 3)
def test_label_ranking_loss():
assert_almost_equal(label_ranking_loss([[0, 1]], [[0.25, 0.75]]), 0)
assert_almost_equal(label_ranking_loss([[0, 1]], [[0.75, 0.25]]), 1)
assert_almost_equal(label_ranking_loss([[0, 0, 1]], [[0.25, 0.5, 0.75]]),
0)
assert_almost_equal(label_ranking_loss([[0, 1, 0]], [[0.25, 0.5, 0.75]]),
1 / 2)
assert_almost_equal(label_ranking_loss([[0, 1, 1]], [[0.25, 0.5, 0.75]]),
0)
assert_almost_equal(label_ranking_loss([[1, 0, 0]], [[0.25, 0.5, 0.75]]),
2 / 2)
assert_almost_equal(label_ranking_loss([[1, 0, 1]], [[0.25, 0.5, 0.75]]),
1 / 2)
assert_almost_equal(label_ranking_loss([[1, 1, 0]], [[0.25, 0.5, 0.75]]),
2 / 2)
# Undefined metrics - the ranking doesn't matter
assert_almost_equal(label_ranking_loss([[0, 0]], [[0.75, 0.25]]), 0)
assert_almost_equal(label_ranking_loss([[1, 1]], [[0.75, 0.25]]), 0)
assert_almost_equal(label_ranking_loss([[0, 0]], [[0.5, 0.5]]), 0)
assert_almost_equal(label_ranking_loss([[1, 1]], [[0.5, 0.5]]), 0)
assert_almost_equal(label_ranking_loss([[0, 0, 0]], [[0.5, 0.75, 0.25]]),
0)
assert_almost_equal(label_ranking_loss([[1, 1, 1]], [[0.5, 0.75, 0.25]]),
0)
assert_almost_equal(label_ranking_loss([[0, 0, 0]], [[0.25, 0.5, 0.5]]),
0)
assert_almost_equal(label_ranking_loss([[1, 1, 1]], [[0.25, 0.5, 0.5]]), 0)
# Non-trivial case
assert_almost_equal(label_ranking_loss([[0, 1, 0], [1, 1, 0]],
[[0.1, 10., -3], [0, 1, 3]]),
(0 + 2 / 2) / 2.)
assert_almost_equal(label_ranking_loss(
[[0, 1, 0], [1, 1, 0], [0, 1, 1]],
[[0.1, 10, -3], [0, 1, 3], [0, 2, 0]]),
(0 + 2 / 2 + 1 / 2) / 3.)
assert_almost_equal(label_ranking_loss(
[[0, 1, 0], [1, 1, 0], [0, 1, 1]],
[[0.1, 10, -3], [3, 1, 3], [0, 2, 0]]),
(0 + 2 / 2 + 1 / 2) / 3.)
# Sparse csr matrices
assert_almost_equal(label_ranking_loss(
csr_matrix(np.array([[0, 1, 0], [1, 1, 0]])),
[[0.1, 10, -3], [3, 1, 3]]),
(0 + 2 / 2) / 2.)
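The ranking-loss semantics tested above — including the convention that ties between a relevant and an irrelevant label count as errors — can be sketched for a single sample with numpy. The name and data are illustrative; the case matches the `[[1, 0, 1]]` toy assertion:

```python
import numpy as np

def ranking_loss_example():
    # Fraction of (relevant, irrelevant) label pairs ordered incorrectly,
    # i.e. where the irrelevant label scores at least as high.
    y_true = np.array([1, 0, 1])
    y_score = np.array([0.25, 0.5, 0.75])
    pos = y_score[y_true == 1]
    neg = y_score[y_true == 0]
    wrong = (neg[None, :] >= pos[:, None]).sum()
    return wrong / float(pos.size * neg.size)
```

One of the two pairs is mis-ordered (the irrelevant 0.5 outranks the relevant 0.25), giving a loss of 1/2.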
def test_ranking_appropriate_input_shape():
# Check that y_true.shape != y_score.shape raises the proper exception
assert_raises(ValueError, label_ranking_loss, [[0, 1], [0, 1]], [0, 1])
assert_raises(ValueError, label_ranking_loss, [[0, 1], [0, 1]], [[0, 1]])
assert_raises(ValueError, label_ranking_loss,
[[0, 1], [0, 1]], [[0], [1]])
assert_raises(ValueError, label_ranking_loss, [[0, 1]], [[0, 1], [0, 1]])
assert_raises(ValueError, label_ranking_loss,
[[0], [1]], [[0, 1], [0, 1]])
assert_raises(ValueError, label_ranking_loss, [[0, 1], [0, 1]], [[0], [1]])
def test_ranking_loss_ties_handling():
# Tie handling
assert_almost_equal(label_ranking_loss([[1, 0]], [[0.5, 0.5]]), 1)
assert_almost_equal(label_ranking_loss([[0, 1]], [[0.5, 0.5]]), 1)
assert_almost_equal(label_ranking_loss([[0, 0, 1]], [[0.25, 0.5, 0.5]]),
1 / 2)
assert_almost_equal(label_ranking_loss([[0, 1, 0]], [[0.25, 0.5, 0.5]]),
1 / 2)
assert_almost_equal(label_ranking_loss([[0, 1, 1]], [[0.25, 0.5, 0.5]]), 0)
assert_almost_equal(label_ranking_loss([[1, 0, 0]], [[0.25, 0.5, 0.5]]), 1)
assert_almost_equal(label_ranking_loss([[1, 0, 1]], [[0.25, 0.5, 0.5]]), 1)
assert_almost_equal(label_ranking_loss([[1, 1, 0]], [[0.25, 0.5, 0.5]]), 1)
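The expected values above follow from a simple pairwise count: for each sample, take the fraction of (relevant, irrelevant) label pairs where the irrelevant score is not strictly below the relevant one, and average over samples; degenerate samples (all-relevant or all-irrelevant) contribute zero. A naive re-implementation for cross-checking these numbers (an illustrative sketch matching the tie-handling exercised in these tests, not sklearn's vectorized code):

```python
import numpy as np

def naive_ranking_loss(y_true, y_score):
    # Per sample: fraction of (relevant, irrelevant) pairs that are wrongly
    # ordered; ties count as wrong, as in the tie tests above. Samples with
    # no relevant or no irrelevant labels contribute a loss of 0.
    losses = []
    for t, s in zip(np.asarray(y_true, float), np.asarray(y_score, float)):
        rel, irr = s[t == 1], s[t == 0]
        if rel.size == 0 or irr.size == 0:
            losses.append(0.0)
            continue
        wrong = sum(i >= r for r in rel for i in irr)
        losses.append(wrong / (rel.size * irr.size))
    return float(np.mean(losses))
```

Checking a couple of the cases above: `[[1, 0, 0]]` with scores `[[0.25, 0.5, 0.75]]` gives 1.0 (both irrelevant labels outrank the relevant one), while the all-ones and all-zeros rows give 0.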
| bsd-3-clause |
hugo-lorenzo-mato/meteo-galicia-db | pruebaPlot.py | 1 | 1757 | import numpy as np
import matplotlib.pyplot as plt
from matplotlib.ticker import NullFormatter
import string
import random
'''
# the random data
x = np.random.randn(1000)
y = np.random.randn(1000)
nullfmt = NullFormatter() # no labels
# definitions for the axes
left, width = 0.1, 0.65
bottom, height = 0.1, 0.65
bottom_h = left_h = left + width + 0.02
rect_scatter = [left, bottom, width, height]
rect_histx = [left, bottom_h, width, 0.2]
rect_histy = [left_h, bottom, 0.2, height]
# start with a rectangular Figure
plt.figure(1, figsize=(8, 8))
axScatter = plt.axes(rect_scatter)
axHistx = plt.axes(rect_histx)
axHisty = plt.axes(rect_histy)
# no labels
axHistx.xaxis.set_major_formatter(nullfmt)
axHisty.yaxis.set_major_formatter(nullfmt)
# the scatter plot:
axScatter.scatter(x, y)
# now determine nice limits by hand:
binwidth = 0.25
xymax = np.max([np.max(np.fabs(x)), np.max(np.fabs(y))])
lim = (int(xymax/binwidth) + 1) * binwidth
axScatter.set_xlim((-lim, lim))
axScatter.set_ylim((-lim, lim))
bins = np.arange(-lim, lim + binwidth, binwidth)
axHistx.hist(x, bins=bins)
axHisty.hist(y, bins=bins, orientation='horizontal')
axHistx.set_xlim(axScatter.get_xlim())
axHisty.set_ylim(axScatter.get_ylim())
plt.show()
'''
def generador_nombre(size=10, chars=string.ascii_uppercase + string.digits):
return ''.join(random.choice(chars) for _ in range(size))
test = generador_nombre()
print(test)
nombre_png = generador_nombre()
ruta_png = "/consulta/imagenes/" + nombre_png + ".png"
ruta = "/home/hugo/PycharmProjects/pintgrupo16/django/www/MeteoGaliciaDB/consulta/static/consulta/imagenes/" + nombre_png + ".png"
ruta_estatica = "<img src = '{% static '/consulta/imagenes/'" + nombre_png + ".png %}'>"
print(ruta_estatica) | mit |
tbabej/astropy | astropy/visualization/mpl_style.py | 4 | 3102 | # Licensed under a 3-clause BSD style license - see LICENSE.rst
"""
This module contains dictionaries that can be used to set a matplotlib
plotting style. It is mostly here to allow a consistent plotting style
in tutorials, but can be used to prepare any matplotlib figure.
"""
from ..utils import minversion
# This returns False if matplotlib cannot be imported
MATPLOTLIB_GE_1_5 = minversion('matplotlib', '1.5')
__all__ = ['astropy_mpl_style_1', 'astropy_mpl_style',
'astropy_mpl_docs_style']
# Version 1 astropy plotting style for matplotlib
astropy_mpl_style_1 = {
# Lines
'lines.linewidth': 1.7,
'lines.antialiased': True,
# Patches
'patch.linewidth': 1.0,
'patch.facecolor': '#348ABD',
'patch.edgecolor': '#CCCCCC',
'patch.antialiased': True,
# Images
'image.cmap': 'gist_heat',
'image.origin': 'upper',
# Font
'font.size': 12.0,
# Axes
'axes.facecolor': '#FFFFFF',
'axes.edgecolor': '#AAAAAA',
'axes.linewidth': 1.0,
'axes.grid': True,
'axes.titlesize': 'x-large',
'axes.labelsize': 'large',
'axes.labelcolor': 'k',
'axes.axisbelow': True,
# Ticks
'xtick.major.size': 0,
'xtick.minor.size': 0,
'xtick.major.pad': 6,
'xtick.minor.pad': 6,
'xtick.color': '#565656',
'xtick.direction': 'in',
'ytick.major.size': 0,
'ytick.minor.size': 0,
'ytick.major.pad': 6,
'ytick.minor.pad': 6,
'ytick.color': '#565656',
'ytick.direction': 'in',
# Legend
'legend.fancybox': True,
'legend.loc': 'best',
# Figure
'figure.figsize': [8, 6],
'figure.facecolor': '1.0',
'figure.edgecolor': '0.50',
'figure.subplot.hspace': 0.5,
# Other
'savefig.dpi': 72,
}
color_cycle = ['#348ABD', # blue
'#7A68A6', # purple
'#A60628', # red
'#467821', # green
'#CF4457', # pink
'#188487', # turquoise
'#E24A33'] # orange
if MATPLOTLIB_GE_1_5:
# This is a dependency of matplotlib, so should be present.
from cycler import cycler
astropy_mpl_style_1['axes.prop_cycle'] = cycler('color', color_cycle)
else:
astropy_mpl_style_1['axes.color_cycle'] = color_cycle
astropy_mpl_style = astropy_mpl_style_1
"""The most recent version of the astropy plotting style."""
astropy_mpl_docs_style = astropy_mpl_style_1.copy()
"""The plotting style used in the astropy documentation."""
color_cycle_docs = [
'#E24A33', # orange
'#348ABD', # blue
'#467821', # green
'#A60628', # red
'#7A68A6', # purple
'#CF4457', # pink
'#188487' # turquoise
]
if MATPLOTLIB_GE_1_5:
astropy_mpl_docs_style['axes.prop_cycle'] = cycler('color',
color_cycle_docs)
else:
astropy_mpl_docs_style['axes.color_cycle'] = color_cycle_docs
astropy_mpl_docs_style['axes.grid'] = False
astropy_mpl_docs_style['figure.figsize'] = (6, 6)
astropy_mpl_docs_style['savefig.facecolor'] = 'none'
astropy_mpl_docs_style['savefig.bbox'] = 'tight'
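Because `astropy_mpl_docs_style` is built from a `.copy()` of the base style, the overrides above rebind top-level keys without mutating `astropy_mpl_style_1`; a minimal sketch of that pattern (toy dicts standing in for the real style contents):

```python
base = {'axes.grid': True, 'figure.figsize': [8, 6], 'savefig.dpi': 72}

docs = base.copy()               # shallow copy: top-level keys are independent
docs['axes.grid'] = False        # override; the base dict is untouched
docs['savefig.bbox'] = 'tight'   # new key, absent from the base style
```

One caveat of a shallow copy: mutable values such as the `figure.figsize` list are still shared between the two dicts, which is why the overrides above rebind keys rather than mutating values in place.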
| bsd-3-clause |
nmayorov/scikit-learn | examples/manifold/plot_manifold_sphere.py | 23 | 5102 | #!/usr/bin/python
# -*- coding: utf-8 -*-
"""
=============================================
Manifold Learning methods on a severed sphere
=============================================
An application of the different :ref:`manifold` techniques
on a spherical data-set. Here one can see the use of
dimensionality reduction in order to gain some intuition
regarding the manifold learning methods. Regarding the dataset,
the poles are cut from the sphere, as well as a thin slice down its
side. This enables the manifold learning techniques to
'spread it open' whilst projecting it onto two dimensions.
For a similar example, where the methods are applied to the
S-curve dataset, see :ref:`example_manifold_plot_compare_methods.py`
Note that the purpose of the :ref:`MDS <multidimensional_scaling>` is
to find a low-dimensional representation of the data (here 2D) in
which the distances respect well the distances in the original
high-dimensional space; unlike other manifold-learning algorithms,
it does not seek an isotropic representation of the data in
the low-dimensional space. Here the manifold problem matches fairly
well that of representing a flat map of the Earth, as with a
`map projection <http://en.wikipedia.org/wiki/Map_projection>`_
"""
# Author: Jaques Grobler <jaques.grobler@inria.fr>
# License: BSD 3 clause
print(__doc__)
from time import time
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from matplotlib.ticker import NullFormatter
from sklearn import manifold
from sklearn.utils import check_random_state
# Next line to silence pyflakes.
Axes3D
# Variables for manifold learning.
n_neighbors = 10
n_samples = 1000
# Create our sphere.
random_state = check_random_state(0)
p = random_state.rand(n_samples) * (2 * np.pi - 0.55)
t = random_state.rand(n_samples) * np.pi
# Sever the poles from the sphere.
indices = ((t < (np.pi - (np.pi / 8))) & (t > ((np.pi / 8))))
colors = p[indices]
x, y, z = np.sin(t[indices]) * np.cos(p[indices]), \
np.sin(t[indices]) * np.sin(p[indices]), \
np.cos(t[indices])
# Plot our dataset.
fig = plt.figure(figsize=(15, 8))
plt.suptitle("Manifold Learning with %i points, %i neighbors"
             % (n_samples, n_neighbors), fontsize=14)
ax = fig.add_subplot(251, projection='3d')
ax.scatter(x, y, z, c=p[indices], cmap=plt.cm.rainbow)
try:
# compatibility matplotlib < 1.0
ax.view_init(40, -10)
except Exception:
pass
sphere_data = np.array([x, y, z]).T
# Perform Locally Linear Embedding Manifold learning
methods = ['standard', 'ltsa', 'hessian', 'modified']
labels = ['LLE', 'LTSA', 'Hessian LLE', 'Modified LLE']
for i, method in enumerate(methods):
t0 = time()
trans_data = manifold\
.LocallyLinearEmbedding(n_neighbors, 2,
method=method).fit_transform(sphere_data).T
t1 = time()
print("%s: %.2g sec" % (methods[i], t1 - t0))
ax = fig.add_subplot(252 + i)
plt.scatter(trans_data[0], trans_data[1], c=colors, cmap=plt.cm.rainbow)
plt.title("%s (%.2g sec)" % (labels[i], t1 - t0))
ax.xaxis.set_major_formatter(NullFormatter())
ax.yaxis.set_major_formatter(NullFormatter())
plt.axis('tight')
# Perform Isomap Manifold learning.
t0 = time()
trans_data = manifold.Isomap(n_neighbors, n_components=2)\
.fit_transform(sphere_data).T
t1 = time()
print("%s: %.2g sec" % ('ISO', t1 - t0))
ax = fig.add_subplot(257)
plt.scatter(trans_data[0], trans_data[1], c=colors, cmap=plt.cm.rainbow)
plt.title("%s (%.2g sec)" % ('Isomap', t1 - t0))
ax.xaxis.set_major_formatter(NullFormatter())
ax.yaxis.set_major_formatter(NullFormatter())
plt.axis('tight')
# Perform Multi-dimensional scaling.
t0 = time()
mds = manifold.MDS(2, max_iter=100, n_init=1)
trans_data = mds.fit_transform(sphere_data).T
t1 = time()
print("MDS: %.2g sec" % (t1 - t0))
ax = fig.add_subplot(258)
plt.scatter(trans_data[0], trans_data[1], c=colors, cmap=plt.cm.rainbow)
plt.title("MDS (%.2g sec)" % (t1 - t0))
ax.xaxis.set_major_formatter(NullFormatter())
ax.yaxis.set_major_formatter(NullFormatter())
plt.axis('tight')
# Perform Spectral Embedding.
t0 = time()
se = manifold.SpectralEmbedding(n_components=2,
n_neighbors=n_neighbors)
trans_data = se.fit_transform(sphere_data).T
t1 = time()
print("Spectral Embedding: %.2g sec" % (t1 - t0))
ax = fig.add_subplot(259)
plt.scatter(trans_data[0], trans_data[1], c=colors, cmap=plt.cm.rainbow)
plt.title("Spectral Embedding (%.2g sec)" % (t1 - t0))
ax.xaxis.set_major_formatter(NullFormatter())
ax.yaxis.set_major_formatter(NullFormatter())
plt.axis('tight')
# Perform t-distributed stochastic neighbor embedding.
t0 = time()
tsne = manifold.TSNE(n_components=2, init='pca', random_state=0)
trans_data = tsne.fit_transform(sphere_data).T
t1 = time()
print("t-SNE: %.2g sec" % (t1 - t0))
ax = fig.add_subplot(2, 5, 10)
plt.scatter(trans_data[0], trans_data[1], c=colors, cmap=plt.cm.rainbow)
plt.title("t-SNE (%.2g sec)" % (t1 - t0))
ax.xaxis.set_major_formatter(NullFormatter())
ax.yaxis.set_major_formatter(NullFormatter())
plt.axis('tight')
plt.show()
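The sphere construction above is easy to sanity-check in isolation: every kept sample still lies on the unit sphere, and the severed pole caps bound |z|. A minimal sketch using the same formulas on a smaller sample:

```python
import numpy as np

rng = np.random.RandomState(0)
n = 200
p = rng.rand(n) * (2 * np.pi - 0.55)   # azimuth, with a thin slice removed
t = rng.rand(n) * np.pi                # polar angle
keep = (t < np.pi - np.pi / 8) & (t > np.pi / 8)   # sever both poles

x = np.sin(t[keep]) * np.cos(p[keep])
y = np.sin(t[keep]) * np.sin(p[keep])
z = np.cos(t[keep])

radii = x**2 + y**2 + z**2   # should all be 1: the points lie on the sphere
```

With the poles cut away, |z| never reaches cos(pi/8), which is exactly the opening the manifold methods exploit to "spread the sphere open".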
| bsd-3-clause |
fengzhyuan/scikit-learn | examples/preprocessing/plot_robust_scaling.py | 221 | 2702 | #!/usr/bin/python
# -*- coding: utf-8 -*-
"""
=========================================================
Robust Scaling on Toy Data
=========================================================
Making sure that each Feature has approximately the same scale can be a
crucial preprocessing step. However, when data contains outliers,
:class:`StandardScaler <sklearn.preprocessing.StandardScaler>` can often
be misled. In such cases, it is better to use a scaler that is robust
against outliers.
Here, we demonstrate this on a toy dataset, where one single datapoint
is a large outlier.
"""
from __future__ import print_function
print(__doc__)
# Code source: Thomas Unterthiner
# License: BSD 3 clause
import matplotlib.pyplot as plt
import numpy as np
from sklearn.preprocessing import StandardScaler, RobustScaler
# Create training and test data
np.random.seed(42)
n_datapoints = 100
Cov = [[0.9, 0.0], [0.0, 20.0]]
mu1 = [100.0, -3.0]
mu2 = [101.0, -3.0]
X1 = np.random.multivariate_normal(mean=mu1, cov=Cov, size=n_datapoints)
X2 = np.random.multivariate_normal(mean=mu2, cov=Cov, size=n_datapoints)
Y_train = np.hstack([[-1]*n_datapoints, [1]*n_datapoints])
X_train = np.vstack([X1, X2])
X1 = np.random.multivariate_normal(mean=mu1, cov=Cov, size=n_datapoints)
X2 = np.random.multivariate_normal(mean=mu2, cov=Cov, size=n_datapoints)
Y_test = np.hstack([[-1]*n_datapoints, [1]*n_datapoints])
X_test = np.vstack([X1, X2])
X_train[0, 0] = -1000 # a fairly large outlier
# Scale data
standard_scaler = StandardScaler()
Xtr_s = standard_scaler.fit_transform(X_train)
Xte_s = standard_scaler.transform(X_test)
robust_scaler = RobustScaler()
Xtr_r = robust_scaler.fit_transform(X_train)
Xte_r = robust_scaler.transform(X_test)
# Plot data
fig, ax = plt.subplots(1, 3, figsize=(12, 4))
ax[0].scatter(X_train[:, 0], X_train[:, 1],
color=np.where(Y_train > 0, 'r', 'b'))
ax[1].scatter(Xtr_s[:, 0], Xtr_s[:, 1], color=np.where(Y_train > 0, 'r', 'b'))
ax[2].scatter(Xtr_r[:, 0], Xtr_r[:, 1], color=np.where(Y_train > 0, 'r', 'b'))
ax[0].set_title("Unscaled data")
ax[1].set_title("After standard scaling (zoomed in)")
ax[2].set_title("After robust scaling (zoomed in)")
# for the scaled data, we zoom in to the data center (outlier can't be seen!)
for a in ax[1:]:
a.set_xlim(-3, 3)
a.set_ylim(-3, 3)
plt.tight_layout()
plt.show()
# Classify using k-NN
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier()
knn.fit(Xtr_s, Y_train)
acc_s = knn.score(Xte_s, Y_test)
print("Testset accuracy using standard scaler: %.3f" % acc_s)
knn.fit(Xtr_r, Y_train)
acc_r = knn.score(Xte_r, Y_test)
print("Testset accuracy using robust scaler: %.3f" % acc_r)
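`RobustScaler` centers each feature on its median and divides by the interquartile range, which is why the single outlier above barely moves the bulk of the data. A hand-rolled sketch of that default behaviour (an illustrative re-implementation, not the library code):

```python
import numpy as np

def robust_scale(X):
    # Center by the per-feature median, scale by the interquartile range.
    med = np.median(X, axis=0)
    q1, q3 = np.percentile(X, [25, 75], axis=0)
    return (X - med) / (q3 - q1)

X = np.array([[1.0], [2.0], [3.0], [4.0], [-1000.0]])  # one large outlier
Xs = robust_scale(X)
```

After scaling, the median is 0 and the IQR is 1 regardless of the outlier; a `StandardScaler` fit on the same column would have its mean and variance dominated by the -1000 point.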
| bsd-3-clause |
siconos/siconos | kernel/swig/tests/test_bouncing_ball.py | 4 | 10411 | #!/usr/bin/env python
from siconos.tests_setup import working_dir
import siconos.kernel as sk
import numpy as np
import os
def test_bouncing_ball1():
"""Run a complete simulation (Bouncing ball example)
LagrangianLinearTIDS, no plugins.
"""
t0 = 0. # start time
tend = 10. # end time
h = 0.005 # time step
r = 0.1 # ball radius
g = 9.81 # gravity
m = 1 # ball mass
    e = 0.9      # restitution coefficient
theta = 0.5 # theta scheme
#
# dynamical system
#
x = np.zeros(3, dtype=np.float64)
x[0] = 1.
v = np.zeros_like(x)
# mass matrix
mass = np.eye(3, dtype=np.float64)
mass[2, 2] = 3. / 5 * r * r
# the dynamical system
ball = sk.LagrangianLinearTIDS(x, v, mass)
# set external forces
weight = np.zeros_like(x)
weight[0] = -m * g
ball.setFExtPtr(weight)
#
# Interactions
#
# ball-floor
H = np.zeros((1, 3), dtype=np.float64)
H[0, 0] = 1.
nslaw = sk.NewtonImpactNSL(e)
relation = sk.LagrangianLinearTIR(H)
inter = sk.Interaction(nslaw, relation)
#
# NSDS
#
bouncing_ball = sk.NonSmoothDynamicalSystem(t0, tend)
# add the dynamical system to the non smooth dynamical system
bouncing_ball.insertDynamicalSystem(ball)
# link the interaction and the dynamical system
bouncing_ball.link(inter, ball)
#
# Simulation
#
# (1) OneStepIntegrators
OSI = sk.MoreauJeanOSI(theta)
# (2) Time discretisation --
t = sk.TimeDiscretisation(t0, h)
# (3) one step non smooth problem
osnspb = sk.LCP()
# (4) Simulation setup with (1) (2) (3)
s = sk.TimeStepping(bouncing_ball,t, OSI, osnspb)
# end of model definition
#
# computation
#
#
# save and load data from xml and .dat
#
try:
from siconos.io import save
save(bouncing_ball, "bouncingBall.xml")
save(bouncing_ball, "bouncingBall.bin")
    except Exception:
        print("Warning: could not save the model with siconos.io")
# the number of time steps
nb_time_steps = int((tend - t0) / h + 1)
# Get the values to be plotted
# ->saved in a matrix dataPlot
data = np.empty((nb_time_steps, 5))
#
# numpy pointers on dense Siconos vectors
#
q = ball.q()
v = ball.velocity()
p = ball.p(1)
lambda_ = inter.lambda_(1)
#
# initial data
#
data[0, 0] = t0
data[0, 1] = q[0]
data[0, 2] = v[0]
data[0, 3] = p[0]
data[0, 4] = lambda_[0]
k = 1
# time loop
while(s.hasNextEvent()):
s.computeOneStep()
data[k, 0] = s.nextTime()
data[k, 1] = q[0]
data[k, 2] = v[0]
data[k, 3] = p[0]
data[k, 4] = lambda_[0]
k += 1
#print(s.nextTime())
s.nextStep()
#
# comparison with the reference file
#
ref = sk.getMatrix(sk.SimpleMatrix(
os.path.join(working_dir, "data/result.ref")))
assert (np.linalg.norm(data - ref) < 1e-12)
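For intuition about the reference trajectory checked above, the impact law (Newton restitution with e = 0.9) can be sketched with a naive explicit integrator. This toy is not the Moreau-Jean theta scheme Siconos uses, but it reproduces the qualitative behaviour: successive peaks shrink by roughly a factor of e squared.

```python
import numpy as np

def bounce_1d(q0=1.0, v0=0.0, g=9.81, e=0.9, h=0.005, t_end=2.0):
    # Free fall between contacts; at contact (q < 0), reflect the velocity
    # with restitution coefficient e and clamp the position to the floor.
    q, v = q0, v0
    traj = []
    for _ in range(int(t_end / h)):
        v -= g * h
        q += v * h
        if q < 0.0:
            q = 0.0
            v = -e * v
        traj.append(q)
    return np.array(traj)

traj = bounce_1d()
```

The ball never penetrates the floor and never climbs back above its drop height, since each impact dissipates energy.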
def xtest_bouncing_ball_from_xml():
assert False # just have to load from xml...
def xtest_bouncing_ball_from_binary():
assert False # just have to load from .dat...
def run_simulation_with_two_ds(ball, ball_d, t0):
T = 5 # end time
h = 0.005 # time step
    e = 0.9      # restitution coefficient
theta = 0.5 # theta scheme
# ball-floor
H = np.zeros((1, 3), dtype=np.float64)
H[0, 0] = 1.
nslaw = sk.NewtonImpactNSL(e)
nslaw_d = sk.NewtonImpactNSL(e)
relation = sk.LagrangianLinearTIR(H)
relation_d = sk.LagrangianLinearTIR(H)
inter = sk.Interaction(nslaw, relation)
inter_d = sk.Interaction(nslaw_d, relation_d)
#
# NSDS
#
bouncing_ball = sk.NonSmoothDynamicalSystem(t0, T)
bouncing_ball_d = sk.NonSmoothDynamicalSystem(t0, T)
# add the dynamical system to the non smooth dynamical system
bouncing_ball.insertDynamicalSystem(ball)
bouncing_ball_d.insertDynamicalSystem(ball_d)
# link the interaction and the dynamical system
bouncing_ball.link(inter, ball)
bouncing_ball_d.link(inter_d, ball_d)
#
# Simulation
#
# (1) OneStepIntegrators
OSI = sk.MoreauJeanOSI(theta)
OSI_d = sk.MoreauJeanOSI(theta)
# (2) Time discretisation --
t = sk.TimeDiscretisation(t0, h)
t_d = sk.TimeDiscretisation(t0, h)
# (3) one step non smooth problem
osnspb = sk.LCP()
osnspb_d = sk.LCP()
# (4) Simulation setup with (1) (2) (3)
s = sk.TimeStepping(bouncing_ball,t, OSI, osnspb)
s_d = sk.TimeStepping(bouncing_ball_d,t_d, OSI_d, osnspb_d)
# end of model definition
#
# computation
#
# the number of time steps
nb_time_steps = int((T - t0) / h + 1)
# Get the values to be plotted
# ->saved in a matrix data
s_d.computeOneStep()
data = np.empty((nb_time_steps + 1, 5))
data_d = np.empty((nb_time_steps + 1, 5))
data[0, 0] = t0
data[0, 1] = ball.q()[0]
data[0, 2] = ball.velocity()[0]
data[0, 3] = ball.p(1)[0]
    data[0, 4] = inter.lambda_(1)[0]
data_d[0, 0] = t0
data_d[0, 1] = ball_d.q()[0]
data_d[0, 2] = ball_d.velocity()[0]
data_d[0, 3] = ball_d.p(1)[0]
    data_d[0, 4] = inter_d.lambda_(1)[0]
k = 1
# time loop
while(s.hasNextEvent()):
s.computeOneStep()
s_d.computeOneStep()
data[k, 0] = s.nextTime()
data[k, 1] = ball.q()[0]
data[k, 2] = ball.velocity()[0]
data[k, 3] = ball.p(1)[0]
data[k, 4] = inter.lambda_(1)[0]
data_d[k, 0] = s_d.nextTime()
data_d[k, 1] = ball_d.q()[0]
data_d[k, 2] = ball_d.velocity()[0]
data_d[k, 3] = ball_d.p(1)[0]
data_d[k, 4] = inter_d.lambda_(1)[0]
assert np.allclose(data[k, 1], data_d[k, 1])
#print(s.nextTime())
k += 1
s.nextStep()
s_d.nextStep()
data.resize(k,5)
    view = False
if view:
import matplotlib.pyplot as plt
fig_size = [14, 14]
plt.rcParams["figure.figsize"] = fig_size
plt.subplot(411)
plt.title('displacement')
plt.plot(data[:, 0], data[:, 1])
plt.grid()
plt.subplot(412)
plt.title('velocity')
plt.plot(data[:, 0], data[:, 2])
plt.grid()
plt.subplot(413)
plt.plot(data[:, 0], data[:, 3])
plt.title('reaction')
plt.grid()
plt.subplot(414)
plt.plot(data[:, 0], data[:, 4])
plt.title('lambda')
plt.grid()
plt.show()
def test_bouncing_ball2():
"""Run a complete simulation (Bouncing ball example)
LagrangianLinearTIDS, plugged Fext.
"""
t0 = 0 # start time
r = 0.1 # ball radius
g = 9.81 # gravity
m = 1 # ball mass
#
# dynamical system
#
x = np.zeros(3, dtype=np.float64)
x[0] = 1.
v = np.zeros_like(x)
# mass matrix
mass = np.eye(3, dtype=np.float64)
mass[2, 2] = 3. / 5 * r * r
# the dynamical system
ball = sk.LagrangianLinearTIDS(x, v, mass)
weight = np.zeros(ball.dimension())
weight[0] = -m * g
ball.setFExtPtr(weight)
# a ball with its own computeFExt
class Ball(sk.LagrangianLinearTIDS):
def computeFExt(self, t):
"""External forces operator computation
"""
print("computing FExt at t=", t)
#self._fExt[0] = -m * g
weight = np.zeros(self.dimension())
weight[0] = -m * g
self.setFExtPtr(weight)
ball_d = Ball(x.copy(), v.copy(), mass)
ball_d.computeFExt(t0)
run_simulation_with_two_ds(ball, ball_d, t0)
def test_bouncing_ball3():
"""Run a complete simulation (Bouncing ball example)
LagrangianDS, plugged Fext.
"""
t0 = 0 # start time
r = 0.1 # ball radius
g = 9.81 # gravity
m = 1 # ball mass
#
# dynamical system
#
x = np.zeros(3, dtype=np.float64)
x[0] = 1.
v = np.zeros_like(x)
# mass matrix
mass = np.eye(3, dtype=np.float64)
mass[2, 2] = 3. / 5 * r * r
# the dynamical system
ball = sk.LagrangianLinearTIDS(x, v, mass)
weight = np.zeros(ball.dimension())
weight[0] = -m * g
ball.setFExtPtr(weight)
# a ball with its own computeFExt
class Ball(sk.LagrangianDS):
def computeFExt(self, t):
"""External forces operator computation
"""
print("computing FExt at t=", t)
#self._fExt[0] = -m * g
weight = np.zeros(self.dimension())
weight[0] = -m * g
self.setFExtPtr(weight)
ball_d = Ball(x.copy(), v.copy(), mass)
ball_d.computeFExt(t0)
run_simulation_with_two_ds(ball, ball_d, t0)
def test_bouncing_ball4():
"""Run a complete simulation (Bouncing ball example)
LagrangianDS, plugged Fext.
"""
t0 = 0 # start time
r = 0.1 # ball radius
g = 9.81 # gravity
m = 1 # ball mass
#
# dynamical system
#
x = np.zeros(3, dtype=np.float64)
x[0] = 1.
v = np.zeros_like(x)
# mass matrix
mass = np.eye(3, dtype=np.float64)
mass[2, 2] = 3. / 5 * r * r
# the dynamical system
ball = sk.LagrangianLinearTIDS(x, v, mass)
stiffness = np.eye(3, dtype=np.float64)
ball.setKPtr(stiffness)
weight = np.zeros(ball.dimension())
weight[0] = -m * g
#ball.setFExtPtr(weight)
# a ball with its own computeFExt
class Ball(sk.LagrangianLinearTIDS):
def __init__(self,x, v, mass, stiffness):
sk.LagrangianLinearTIDS.__init__(self,x,v,mass)
self.setKPtr(stiffness)
def computeFExt(self, t):
"""External forces operator computation
"""
print("computing FExt at t=", t)
#self._fExt[0] = -m * g
weight = np.zeros(self.dimension())
weight[0] = -m * g
#self.setFExtPtr(weight)
ball_d = Ball(x.copy(), v.copy(), mass, stiffness)
ball_d.computeFExt(t0)
run_simulation_with_two_ds(ball, ball_d, t0)
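Tests 2 through 4 all exercise the same mechanism: the integrator calls back into a Python override of `computeFExt`, so the subclass can recompute external forces on the fly instead of holding a constant vector. The pattern, stripped of Siconos (hypothetical stand-in class names):

```python
class DynamicalSystem:
    """Stand-in for a solver-side base class that calls compute_fext."""
    def __init__(self, dim=3):
        self._dim = dim
        self._fext = [0.0] * dim
    def compute_fext(self, t):
        pass                      # base class: forces already stored
    def step(self, t):
        self.compute_fext(t)      # solver callback point
        return list(self._fext)

class Ball(DynamicalSystem):
    def __init__(self, m=1.0, g=9.81):
        super().__init__()
        self.m, self.g = m, g
    def compute_fext(self, t):
        # Recompute the weight on each callback, as the tests above do.
        self._fext = [-self.m * self.g] + [0.0] * (self._dim - 1)

forces = Ball().step(0.0)
```

The base class keeps working unchanged; only subclasses that override the callback see their forces refreshed at every step.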
if __name__ == "__main__":
# execute only if run as a script
test_bouncing_ball1()
test_bouncing_ball2()
test_bouncing_ball3()
test_bouncing_ball4()
| apache-2.0 |
jcfr/mystic | scripts/mystic_model_plotter.py | 1 | 21412 | #!/usr/bin/env python
#
# Author: Mike McKerns (mmckerns @caltech and @uqfoundation)
# Copyright (c) 1997-2015 California Institute of Technology.
# License: 3-clause BSD. The full license text is available at:
# - http://trac.mystic.cacr.caltech.edu/project/mystic/browser/mystic/LICENSE
__doc__ = """
mystic_model_plotter.py [options] model (filename)
generate surface contour plots for model, specified by full import path
generate model trajectory from logfile (or solver restart file), if provided
The option "bounds" takes an indicator string, where the bounds should
be given as comma-separated slices. For example, using bounds = "-1:10, 0:20"
will set the lower and upper bounds for x to be (-1,10) and y to be (0,20).
The "step" can also be given, to control the number of lines plotted in the
grid. Thus "-1:10:.1, 0:20" would set the bounds as above, but use increments
of .1 along x and the default step along y. For models with > 2D, the bounds
can be used to specify 2 dimensions plus fixed values for remaining dimensions.
Thus, "-1:10, 0:20, 1.0" would plot the 2D surface where the z-axis was fixed
at z=1.0.
The option "label" takes comma-separated strings. For example, label = "x,y,"
will place 'x' on the x-axis, 'y' on the y-axis, and nothing on the z-axis.
LaTeX is also accepted. For example, label = "$ h $, $ {\\alpha}$, $ v$" will
label the axes with standard LaTeX math formatting. Note that the leading
space is required, while a trailing space aligns the text with the axis
instead of the plot frame.
The option "reduce" can be given to reduce the output of a model to a scalar,
thus converting 'model(params)' to 'reduce(model(params))'. A reducer is given
by the import path (e.g. 'numpy.add'). The option "scale" will convert the plot
to log-scale, and scale the cost by 'z=log(4*z*scale+1)+2'. This is useful for
visualizing small contour changes around the minimum. If using log-scale
produces negative numbers, the option "shift" can be used to shift the cost
by 'z=z+shift'. Both shift and scale are intended to help visualize contours.
Required Inputs:
model full import path for the model (e.g. mystic.models.rosen)
Additional Inputs:
filename name of the convergence logfile (e.g. log.txt)
"""
from mpl_toolkits.mplot3d import axes3d
import matplotlib.pyplot as plt
from matplotlib import cm
from mystic.munge import read_history
from mystic.munge import logfile_reader, raw_to_support
#XXX: better if reads single id only? (e.g. same interface as read_history)
def get_history(source, ids=None):
"""get params and cost from the given source
source is the name of the trajectory logfile (or solver instance)
if provided, ids are the list of 'run ids' to select
"""
try: # if it's a logfile, it might be multi-id
step, param, cost = logfile_reader(source)
except: # it's not a logfile, so read and return
param, cost = read_history(source)
return [param],[cost]
# split (i,id) into iteration and id
multinode = len(step[0]) - 1 #XXX: what if step = []?
if multinode: id = [i[1] for i in step]
else: id = [0 for i in step]
params = [[] for i in range(max(id) + 1)]
costs = [[] for i in range(len(params))]
# populate params for each id with the corresponding (param,cost)
for i in range(len(id)):
if ids is None or id[i] in ids: # take only the selected 'id'
params[id[i]].append(param[i])
costs[id[i]].append(cost[i])
params = [r for r in params if len(r)] # only keep selected 'ids'
costs = [r for r in costs if len(r)] # only keep selected 'ids'
# convert to support format
for i in range(len(params)):
params[i], costs[i] = raw_to_support(params[i], costs[i])
return params, costs
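The multi-id bookkeeping above is a bucket-by-id pass over (iteration, id) records; a toy sketch of the same split on hand-made records:

```python
# Toy records: (iteration, run id) steps with matching params per record.
step = [(0, 0), (0, 1), (1, 0), (1, 1), (2, 0)]
param = ['a0', 'b0', 'a1', 'b1', 'a2']

ids = [s[1] for s in step]
runs = [[] for _ in range(max(ids) + 1)]
for i, rid in enumerate(ids):
    runs[rid].append(param[i])   # bucket each record under its run id
```

Each run's records come back in their original order, which is what the later conversion to support format relies on.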
def get_instance(location, *args, **kwds):
"""given the import location of a model or model class, return the model
args and kwds will be passed to the constructor of the model class
"""
package, target = location.rsplit('.',1)
    exec("from %s import %s as model" % (package, target))
import inspect
if inspect.isclass(model):
model = model(*args, **kwds)
return model
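The exec-based import in `get_instance` can also be written with `importlib`, which avoids building a code string; a sketch of the equivalent (same contract, same split of `location`):

```python
import importlib
import inspect

def get_instance_alt(location, *args, **kwds):
    # Same behaviour as get_instance above, without exec.
    package, target = location.rsplit('.', 1)
    model = getattr(importlib.import_module(package), target)
    if inspect.isclass(model):
        model = model(*args, **kwds)
    return model
```

For example, `get_instance_alt('math.sqrt')` returns the function unchanged (it is not a class), while a class path is instantiated with the given arguments.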
def parse_input(option):
"""parse 'option' string into 'select', 'axes', and 'mask'
select contains the dimension specifications on which to plot
axes holds the indicies of the parameters selected to plot
mask is a dictionary of the parameter indicies and fixed values
For example,
>>> select, axes, mask = parse_input("-1:10:.1, 0.0, 5.0, -50:50:.5")
>>> select
[0, 3]
>>> axes
    '-1:10:.1, -50:50:.5'
>>> mask
{1: 0.0, 2: 5.0}
"""
option = option.split(',')
select = []
axes = []
mask = {}
for index,value in enumerate(option):
if ":" in value:
select.append(index)
axes.append(value)
else:
mask.update({index:float(value)})
axes = ','.join(axes)
return select, axes, mask
def parse_axes(option, grid=True):
"""parse option string into grid axes; using modified numpy.ogrid notation
For example:
option='-1:10:.1, 0:10:.1' yields x,y=ogrid[-1:10:.1,0:10:.1],
If grid is False, accept options suitable for line plotting.
For example:
option='-1:10' yields x=ogrid[-1:10] and y=0,
option='-1:10, 2' yields x=ogrid[-1:10] and y=2,
Returns tuple (x,y) with 'x,y' defined above.
"""
import numpy
option = option.split(',')
opt = dict(zip(['x','y','z'],option))
if len(option) > 2 or len(option) < 1:
raise ValueError("invalid format string: '%s'" % ','.join(option))
z = bool(grid)
if len(option) == 1: opt['y'] = '0'
xd = True if ':' in opt['x'] else False
yd = True if ':' in opt['y'] else False
#XXX: accepts option='3:1', '1:1', and '1:2:10' (try to catch?)
if xd and yd:
try: # x,y form a 2D grid
exec('x,y = numpy.ogrid[%s,%s]' % (opt['x'],opt['y']))
except: # AttributeError:
raise ValueError("invalid format string: '%s'" % ','.join(option))
elif xd and not z:
try:
exec('x = numpy.ogrid[%s]' % opt['x'])
y = float(opt['y'])
except: # (AttributeError, SyntaxError, ValueError):
raise ValueError("invalid format string: '%s'" % ','.join(option))
elif yd and not z:
try:
x = float(opt['x'])
exec('y = numpy.ogrid[%s]' % opt['y'])
except: # (AttributeError, SyntaxError, ValueError):
raise ValueError("invalid format string: '%s'" % ','.join(option))
else:
raise ValueError("invalid format string: '%s'" % ','.join(option))
if not x.size or not y.size:
raise ValueError("invalid format string: '%s'" % ','.join(option))
return x,y
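The option strings parsed here map directly onto `numpy.ogrid` slices: each axis becomes an open (broadcastable) vector, and the full 2D mesh is only materialized when the two are combined. For instance, under the equivalent of option='-1:1:.5, 0:2:.5':

```python
import numpy as np

x, y = np.ogrid[-1:1:0.5, 0:2:0.5]   # open grid: a column and a row vector
grid = x + y                         # broadcasting builds the full 2D mesh
```

Keeping the axes open until evaluation is what lets the plotting code below loop over `x.shape` and `y.shape` independently without allocating the dense grid twice.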
def draw_projection(x, cost, scale=True, shift=False, style=None, figure=None):
"""draw a solution trajectory (for overlay on a 1D plot)
x is the sequence of values for one parameter (i.e. a parameter trajectory)
cost is the sequence of costs (i.e. the solution trajectory)
if scale is provided, scale the intensity as 'z = log(4*z*scale+1)+2'
if shift is provided, shift the intensity as 'z = z+shift' (useful for -z's)
if style is provided, set the line style (e.g. 'w-o', 'k-', 'ro')
if figure is provided, plot to an existing figure
"""
if not figure: figure = plt.figure()
ax = figure.gca()
ax.autoscale(tight=True)
if style in [None, False]:
style = 'k-o'
import numpy
if shift:
if shift is True: #NOTE: MAY NOT be the exact minimum
shift = max(-numpy.min(cost), 0.0) + 0.5 # a good guess
cost = numpy.asarray(cost)+shift
cost = numpy.asarray(cost)
if scale:
cost = numpy.log(4*cost*scale+1)+2
ax.plot(x,cost, style, linewidth=2, markersize=4)
#XXX: need to 'correct' the z-axis (or provide easy conversion)
return figure
def draw_trajectory(x, y, cost=None, scale=True, shift=False, style=None, figure=None):
"""draw a solution trajectory (for overlay on a contour plot)
x is a sequence of values for one parameter (i.e. a parameter trajectory)
y is a sequence of values for one parameter (i.e. a parameter trajectory)
cost is the solution trajectory (i.e. costs); if provided, plot a 3D contour
if scale is provided, scale the intensity as 'z = log(4*z*scale+1)+2'
if shift is provided, shift the intensity as 'z = z+shift' (useful for -z's)
if style is provided, set the line style (e.g. 'w-o', 'k-', 'ro')
if figure is provided, plot to an existing figure
"""
if not figure: figure = plt.figure()
if cost: kwds = {'projection':'3d'} # 3D
else: kwds = {} # 2D
ax = figure.gca(**kwds)
if style in [None, False]:
style = 'w-o' #if not scale else 'k-o'
if cost: # is 3D, cost is needed
import numpy
if shift:
if shift is True: #NOTE: MAY NOT be the exact minimum
shift = max(-numpy.min(cost), 0.0) + 0.5 # a good guess
cost = numpy.asarray(cost)+shift
if scale:
cost = numpy.asarray(cost)
cost = numpy.log(4*cost*scale+1)+2
ax.plot(x,y,cost, style, linewidth=2, markersize=4)
#XXX: need to 'correct' the z-axis (or provide easy conversion)
else: # is 2D, cost not needed
ax.plot(x,y, style, linewidth=2, markersize=4)
return figure
def draw_slice(f, x, y=None, scale=True, shift=False):
"""plot a slice of a 2D function 'f' in 1D
x is an array used to set up the axis
y is a fixed value for the 2nd axis
if scale is provided, scale the intensity as 'z = log(4*z*scale+1)+2'
if shift is provided, shift the intensity as 'z = z+shift' (useful for -z's)
NOTE: when plotting the 'y-axis' at fixed 'x',
pass the array to 'y' and the fixed value to 'x'
"""
import numpy
if y is None:
y = 0.0
x, y = numpy.meshgrid(x, y)
plotx = True if numpy.all(y == y[0,0]) else False
z = 0*x
s,t = x.shape
for i in range(s):
for j in range(t):
xx,yy = x[i,j], y[i,j]
z[i,j] = f([xx,yy])
if shift:
if shift is True: shift = max(-numpy.min(z), 0.0) + 0.5 # exact minimum
z = z+shift
if scale: z = numpy.log(4*z*scale+1)+2
#XXX: need to 'correct' the z-axis (or provide easy conversion)
fig = plt.figure()
ax = fig.gca()
ax.autoscale(tight=True)
if plotx:
ax.plot(x.reshape(-1), z.reshape(-1))
else:
ax.plot(y.reshape(-1), z.reshape(-1))
return fig
def draw_contour(f, x, y=None, surface=False, fill=True, scale=True, shift=False, density=5):
"""draw a contour plot for a given 2D function 'f'
x and y are arrays used to set up a 2D mesh grid
if fill is True, color fill the contours
if surface is True, plot the contours as a 3D projection
if scale is provided, scale the intensity as 'z = log(4*z*scale+1)+2'
if shift is provided, shift the intensity as 'z = z+shift' (useful for -z's)
use density to adjust the number of contour lines
"""
import numpy
if y is None:
y = x
x, y = numpy.meshgrid(x, y)
z = 0*x
s,t = x.shape
for i in range(s):
for j in range(t):
xx,yy = x[i,j], y[i,j]
z[i,j] = f([xx,yy])
if shift:
if shift is True: shift = max(-numpy.min(z), 0.0) + 0.5 # exact minimum
z = z+shift
if scale: z = numpy.log(4*z*scale+1)+2
#XXX: need to 'correct' the z-axis (or provide easy conversion)
fig = plt.figure()
if surface and fill is None: # 'hidden' option; full 3D surface plot
ax = fig.gca(projection='3d')
d = max(11 - density, 1) # or 1/density ?
kwds = {'rstride':d,'cstride':d,'cmap':cm.jet,'linewidth':0}
ax.plot_surface(x, y, z, **kwds)
else:
if surface: kwds = {'projection':'3d'} # 3D
elif surface is None: # 1D
raise NotImplementedError('need to add an option string parser')
else: kwds = {} # 2D
ax = fig.gca(**kwds)
density = 10*density
if fill: plotf = ax.contourf # filled contours
else: plotf = ax.contour # wire contours
plotf(x, y, z, density, cmap=cm.jet)
return fig
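The `z = log(4*z*scale+1)+2` / `z = z+shift` intensity transform shared by `draw_slice` and `draw_contour` above can be exercised standalone (a minimal sketch; `transform_z` is an illustrative name, not a mystic function):

```python
import numpy as np

def transform_z(z, scale=True, shift=False):
    # mirror the logic above: optionally shift z up (useful for negative
    # minima), then compress the dynamic range with a log transform
    z = np.asarray(z, dtype=float)
    if shift:
        if shift is True:  # minimum plus an epsilon, as in the plotters above
            shift = max(-z.min(), 0.0) + 0.5
        z = z + shift
    if scale:
        z = np.log(4*z*scale + 1) + 2
    return z

z = np.array([-0.25, 0.0, 1.0, 10.0])
out = transform_z(z, scale=True, shift=True)
# shift=True lifts the minimum to +0.5 before the log, so the log argument
# stays positive and the transform is monotone increasing
assert np.all(np.diff(out) > 0)
```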
if __name__ == '__main__':
#FIXME: should be able to:
# - apply a constraint as a region of NaN -- apply when 'xx,yy=x[ij],y[ij]'
# - apply a penalty by shifting the surface (plot w/alpha?) -- as above
# - build an appropriately-sized default grid (from logfile info)
# - move all multi-id param/cost reading into read_history
#FIXME: current issues:
# - 1D slice and projection work for 2D function, but aren't "pretty"
# - 1D slice and projection for 1D function, is it meaningful and correct?
# - should be able to plot from solver.genealogy (multi-monitor?) [1D,2D,3D?]
# - should be able to scale 'z-axis' instead of scaling 'z' itself
# (see https://github.com/matplotlib/matplotlib/issues/209)
# - if trajectory outside contour grid, will increase bounds
# (see support_hypercube.py for how to fix bounds)
#XXX: note that 'argparse' is new as of python2.7
from optparse import OptionParser
parser = OptionParser(usage=__doc__)
parser.add_option("-b","--bounds",action="store",dest="bounds",\
metavar="STR",default="-5:5:.1, -5:5:.1",
help="indicator string to set plot bounds and density")
parser.add_option("-l","--label",action="store",dest="label",\
metavar="STR",default=",,",
help="string to assign label to axis")
parser.add_option("-n","--nid",action="store",dest="id",\
metavar="INT",default=None,
help="id # of the nth simultaneous points to plot")
parser.add_option("-i","--iter",action="store",dest="stop",\
metavar="STR",default=":",
help="string for smallest:largest iterations to plot")
parser.add_option("-r","--reduce",action="store",dest="reducer",\
metavar="STR",default="None",
help="import path of output reducer function")
parser.add_option("-x","--scale",action="store",dest="zscale",\
metavar="INT",default=0.0,
help="scale plotted cost by z=log(4*z*scale+1)+2")
parser.add_option("-z","--shift",action="store",dest="zshift",\
metavar="INT",default=0.0,
help="shift plotted cost by z=z+shift")
parser.add_option("-f","--fill",action="store_true",dest="fill",\
default=False,help="plot using filled contours")
parser.add_option("-d","--depth",action="store_true",dest="surface",\
default=False,help="plot contours showing depth in 3D")
parser.add_option("-o","--dots",action="store_true",dest="dots",\
default=False,help="show trajectory points in plot")
parser.add_option("-j","--join",action="store_true",dest="line",\
default=False,help="connect trajectory points in plot")
parsed_opts, parsed_args = parser.parse_args()
# get the import path for the model
model = parsed_args[0] # e.g. 'mystic.models.rosen'
if "None" == model: model = None #XXX: 'required'... allow this?
try: # get the name of the parameter log file
source = parsed_args[1] # e.g. 'log.txt'
except:
source = None
try: # select the bounds
options = parsed_opts.bounds # format is "-1:10:.1, -1:10:.1, 1.0"
except:
options = "-5:5:.1, -5:5:.1"
try: # plot using filled contours
fill = parsed_opts.fill
except:
fill = False
try: # plot contours showing depth in 3D
surface = parsed_opts.surface
except:
surface = False
#XXX: can't do '-x' with no argument given (use T/F instead?)
try: # scale plotted cost by z=log(4*z*scale+1)+2
scale = float(parsed_opts.zscale)
if not scale: scale = False
except:
scale = False
#XXX: can't do '-z' with no argument given
try: # shift plotted cost by z=z+shift
shift = float(parsed_opts.zshift)
if not shift: shift = False
except:
shift = False
try: # import path of output reducer function
reducer = parsed_opts.reducer # e.g. 'numpy.add'
if "None" == reducer: reducer = None
except:
reducer = None
style = '-' # default linestyle
if parsed_opts.dots:
mark = 'o' # marker=mark
# when using 'dots', also can turn off 'line'
if not parsed_opts.line:
style = '' # linestyle='None'
else:
mark = ''
color = 'w' if fill else 'k'
style = color + style + mark
try: # select labels for the axes
label = parsed_opts.label.split(',') # format is "x, y, z"
except:
label = ['','','']
try: # select which 'id' to plot results for
ids = (int(parsed_opts.id),) #XXX: allow selecting more than one id ?
except:
ids = None # i.e. 'all'
try: # select which iteration to stop plotting at
stop = parsed_opts.stop # format is "1:10:1"
stop = stop if ":" in stop else ":"+stop
except:
stop = ":"
#################################################
solver = None # set to 'mystic.solvers.fmin' (or similar) for 'live' fits
#NOTE: 'live' runs constrain params explicitly in the solver, then reduce
# dimensions appropriately so results can be 2D contour plotted.
# When working with legacy results that have more than 2 params,
# the trajectory WILL NOT follow the masked surface generated
# because the masked params were NOT fixed when the solver was run.
#################################################
from mystic.tools import reduced, masked, partial
# process inputs
select, spec, mask = parse_input(options)
x,y = parse_axes(spec, grid=True) # grid=False for 1D plots
#FIXME: does grid=False still make sense here...?
if reducer: reducer = get_instance(reducer)
if solver and (not source or not model):
raise RuntimeError('a model and results filename are required')
elif not source and not model:
raise RuntimeError('a model or a results file is required')
if model:
model = get_instance(model)
# need a reducer if model returns an array
if reducer: model = reduced(reducer, arraylike=False)(model)
if solver:
# if 'live'... pick a solver
solver = 'mystic.solvers.fmin'
solver = get_instance(solver)
xlen = len(select)+len(mask)
if solver.__name__.startswith('diffev'):
initial = [(-1,1)]*xlen
else:
initial = [0]*xlen
from mystic.monitors import VerboseLoggingMonitor
itermon = VerboseLoggingMonitor(filename=source, new=True)
# explicitly constrain parameters
model = partial(mask)(model)
# solve
sol = solver(model, x0=initial, itermon=itermon)
#-OVERRIDE-INPUTS-#
import numpy
# read trajectories from monitor (comment out to use logfile)
source = itermon
# if negative minimum, shift by the 'solved minimum' plus an epsilon
shift = max(-numpy.min(itermon.y), 0.0) + 0.5 # a good guess
#-----------------#
if model: # for plotting, implicitly constrain by reduction
model = masked(mask)(model)
## plot the surface in 1D
#if solver: v=sol[-1]
#elif source: v=cost[-1]
#else: v=None
#fig0 = draw_slice(model, x=x, y=v, scale=scale, shift=shift)
# plot the surface in 2D or 3D
fig = draw_contour(model, x, y, surface=surface, fill=fill, scale=scale, shift=shift)
else:
#fig0 = None
fig = None
if source:
# params are the parameter trajectories
# cost is the solution trajectory
params, cost = get_history(source, ids)
if len(cost) > 1: style = style[1:] # 'auto-color' #XXX: or grayscale?
for p,c in zip(params, cost):
## project trajectory on a 1D slice of model surface #XXX: useful?
#s = select[0] if len(select) else 0
#px = p[int(s)] # draw_projection requires one parameter
## ignore everything after 'stop'
#_c = eval('c[%s]' % stop)
#_x = eval('px[%s]' % stop)
#fig0 = draw_projection(_x,_c, style=style, scale=scale, shift=shift, figure=fig0)
# plot the trajectory on the model surface (2D or 3D)
# get two selected params #XXX: what if len(select)<2? or len(p)<2?
p = [p[int(i)] for i in select[:2]]
px,py = p # draw_trajectory requires two parameters
# ignore everything after 'stop'
_x = eval('px[%s]' % stop)
_y = eval('py[%s]' % stop)
_c = eval('c[%s]' % stop) if surface else None
fig = draw_trajectory(_x,_y,_c, style=style, scale=scale, shift=shift, figure=fig)
# add labels to the axes
if surface: kwds = {'projection':'3d'} # 3D
else: kwds = {} # 2D
ax = fig.gca(**kwds)
ax.set_xlabel(label[0])
ax.set_ylabel(label[1])
if surface: ax.set_zlabel(label[2])
plt.show()
# EOF
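The `__main__` block above trims trajectories with `eval('px[%s]' % stop)`; a hypothetical eval-free equivalent builds a `slice` object from the same `start:stop:step` string (`parse_slice` is not part of mystic):

```python
def parse_slice(stop):
    # turn a "start:stop:step" string (the format of the -i/--iter option
    # above) into a slice object, avoiding eval()
    parts = [int(p) if p else None for p in stop.split(':')]
    return slice(*parts)

px = list(range(10))
assert px[parse_slice(":")] == px
assert px[parse_slice("2:8")] == [2, 3, 4, 5, 6, 7]
assert px[parse_slice("1:9:2")] == [1, 3, 5, 7]
```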
| bsd-3-clause |
RPGOne/Skynet | scikit-learn-0.18.1/sklearn/utils/random.py | 46 | 10523 | # Author: Hamzeh Alsalhi <ha258@cornell.edu>
#
# License: BSD 3 clause
from __future__ import division
import numpy as np
import scipy.sparse as sp
import operator
import array
from sklearn.utils import check_random_state
from sklearn.utils.fixes import astype
from ._random import sample_without_replacement
__all__ = ['sample_without_replacement', 'choice']
# This is a backport of np.random.choice from numpy 1.7
# The function can be removed when we bump the requirements to >=1.7
def choice(a, size=None, replace=True, p=None, random_state=None):
"""
choice(a, size=None, replace=True, p=None)
Generates a random sample from a given 1-D array
.. versionadded:: 1.7.0
Parameters
-----------
a : 1-D array-like or int
If an ndarray, a random sample is generated from its elements.
If an int, the random sample is generated as if a were np.arange(n)
size : int or tuple of ints, optional
Output shape. Default is None, in which case a single value is
returned.
replace : boolean, optional
Whether the sample is with or without replacement.
p : 1-D array-like, optional
The probabilities associated with each entry in a.
If not given the sample assumes a uniform distribution over all
entries in a.
random_state : int, RandomState instance or None, optional (default=None)
If int, random_state is the seed used by the random number generator;
If RandomState instance, random_state is the random number generator;
If None, the random number generator is the RandomState instance used
by `np.random`.
Returns
--------
samples : 1-D ndarray, shape (size,)
The generated random samples
Raises
-------
ValueError
If a is an int and less than zero, if a or p are not 1-dimensional,
if a is an array-like of size 0, if p is not a vector of
probabilities, if a and p have different lengths, or if
replace=False and the sample size is greater than the population
size
See Also
---------
randint, shuffle, permutation
Examples
---------
Generate a uniform random sample from np.arange(5) of size 3:
>>> np.random.choice(5, 3) # doctest: +SKIP
array([0, 3, 4])
>>> #This is equivalent to np.random.randint(0,5,3)
Generate a non-uniform random sample from np.arange(5) of size 3:
>>> np.random.choice(5, 3, p=[0.1, 0, 0.3, 0.6, 0]) # doctest: +SKIP
array([3, 3, 0])
Generate a uniform random sample from np.arange(5) of size 3 without
replacement:
>>> np.random.choice(5, 3, replace=False) # doctest: +SKIP
array([3,1,0])
>>> #This is equivalent to np.random.shuffle(np.arange(5))[:3]
Generate a non-uniform random sample from np.arange(5) of size
3 without replacement:
>>> np.random.choice(5, 3, replace=False, p=[0.1, 0, 0.3, 0.6, 0])
... # doctest: +SKIP
array([2, 3, 0])
Any of the above can be repeated with an arbitrary array-like
instead of just integers. For instance:
>>> aa_milne_arr = ['pooh', 'rabbit', 'piglet', 'Christopher']
>>> np.random.choice(aa_milne_arr, 5, p=[0.5, 0.1, 0.1, 0.3])
... # doctest: +SKIP
array(['pooh', 'pooh', 'pooh', 'Christopher', 'piglet'],
dtype='|S11')
"""
random_state = check_random_state(random_state)
# Format and Verify input
a = np.array(a, copy=False)
if a.ndim == 0:
try:
# __index__ must return an integer by python rules.
pop_size = operator.index(a.item())
except TypeError:
raise ValueError("a must be 1-dimensional or an integer")
if pop_size <= 0:
raise ValueError("a must be greater than 0")
elif a.ndim != 1:
raise ValueError("a must be 1-dimensional")
else:
pop_size = a.shape[0]
if pop_size == 0:
raise ValueError("a must be non-empty")
if p is not None:
p = np.array(p, dtype=np.double, ndmin=1, copy=False)
if p.ndim != 1:
raise ValueError("p must be 1-dimensional")
if p.size != pop_size:
raise ValueError("a and p must have same size")
if np.any(p < 0):
raise ValueError("probabilities are not non-negative")
if not np.allclose(p.sum(), 1):
raise ValueError("probabilities do not sum to 1")
shape = size
if shape is not None:
size = np.prod(shape, dtype=np.intp)
else:
size = 1
# Actual sampling
if replace:
if p is not None:
cdf = p.cumsum()
cdf /= cdf[-1]
uniform_samples = random_state.random_sample(shape)
idx = cdf.searchsorted(uniform_samples, side='right')
# searchsorted returns a scalar
idx = np.array(idx, copy=False)
else:
idx = random_state.randint(0, pop_size, size=shape)
else:
if size > pop_size:
raise ValueError("Cannot take a larger sample than "
"population when 'replace=False'")
if p is not None:
if np.sum(p > 0) < size:
raise ValueError("Fewer non-zero entries in p than size")
n_uniq = 0
p = p.copy()
found = np.zeros(shape, dtype=np.int)
flat_found = found.ravel()
while n_uniq < size:
x = random_state.rand(size - n_uniq)
if n_uniq > 0:
p[flat_found[0:n_uniq]] = 0
cdf = np.cumsum(p)
cdf /= cdf[-1]
new = cdf.searchsorted(x, side='right')
_, unique_indices = np.unique(new, return_index=True)
unique_indices.sort()
new = new.take(unique_indices)
flat_found[n_uniq:n_uniq + new.size] = new
n_uniq += new.size
idx = found
else:
idx = random_state.permutation(pop_size)[:size]
if shape is not None:
idx.shape = shape
if shape is None and isinstance(idx, np.ndarray):
# In most cases a scalar will have been made an array
idx = idx.item(0)
# Use samples as indices for a if a is array-like
if a.ndim == 0:
return idx
if shape is not None and idx.ndim == 0:
# If size == () then the user requested a 0-d array as opposed to
# a scalar object when size is None. However a[idx] is always a
# scalar and not an array. So this makes sure the result is an
# array, taking into account that np.array(item) may not work
# for object arrays.
res = np.empty((), dtype=a.dtype)
res[()] = a[idx]
return res
return a[idx]
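The weighted with-replacement branch above is classic inverse-CDF sampling (`cumsum` + `searchsorted`); a self-contained sketch of just that trick:

```python
import numpy as np

rng = np.random.RandomState(0)
p = np.array([0.1, 0.0, 0.3, 0.6])
cdf = p.cumsum()
cdf /= cdf[-1]                           # guard against tiny rounding in p.sum()
u = rng.random_sample(10000)
idx = cdf.searchsorted(u, side='right')  # invert the CDF at each uniform draw

assert idx.min() >= 0 and idx.max() < len(p)
assert not np.any(idx == 1)              # zero-probability entry is never drawn
# empirical frequencies track p to within a few percent at this sample size
freq = np.bincount(idx, minlength=len(p)) / 10000.0
assert np.allclose(freq, p, atol=0.03)
```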
def random_choice_csc(n_samples, classes, class_probability=None,
random_state=None):
"""Generate a sparse random matrix given column class distributions
Parameters
----------
n_samples : int,
Number of samples to draw in each column.
classes : list of size n_outputs of arrays of size (n_classes,)
List of classes for each column.
class_probability : list of size n_outputs of arrays of size (n_classes,)
Optional (default=None). Class distribution of each column. If None the
uniform distribution is assumed.
random_state : int, RandomState instance or None, optional (default=None)
If int, random_state is the seed used by the random number generator;
If RandomState instance, random_state is the random number generator;
If None, the random number generator is the RandomState instance used
by `np.random`.
Returns
-------
random_matrix : sparse csc matrix of size (n_samples, n_outputs)
"""
data = array.array('i')
indices = array.array('i')
indptr = array.array('i', [0])
for j in range(len(classes)):
classes[j] = np.asarray(classes[j])
if classes[j].dtype.kind != 'i':
raise ValueError("class dtype %s is not supported" %
classes[j].dtype)
classes[j] = astype(classes[j], np.int64, copy=False)
# use uniform distribution if no class_probability is given
if class_probability is None:
class_prob_j = np.empty(shape=classes[j].shape[0])
class_prob_j.fill(1 / classes[j].shape[0])
else:
class_prob_j = np.asarray(class_probability[j])
if np.sum(class_prob_j) != 1.0:
raise ValueError("Probability array at index {0} does not sum to "
"one".format(j))
if class_prob_j.shape[0] != classes[j].shape[0]:
raise ValueError("classes[{0}] (length {1}) and "
"class_probability[{0}] (length {2}) have "
"different lengths.".format(j,
classes[j].shape[0],
class_prob_j.shape[0]))
# If 0 is not present in the classes insert it with a probability 0.0
if 0 not in classes[j]:
classes[j] = np.insert(classes[j], 0, 0)
class_prob_j = np.insert(class_prob_j, 0, 0.0)
# If there are nonzero classes choose randomly using class_probability
rng = check_random_state(random_state)
if classes[j].shape[0] > 1:
p_nonzero = 1 - class_prob_j[classes[j] == 0]
nnz = int(n_samples * p_nonzero)
ind_sample = sample_without_replacement(n_population=n_samples,
n_samples=nnz,
random_state=random_state)
indices.extend(ind_sample)
# Normalize probabilities for the nonzero elements
classes_j_nonzero = classes[j] != 0
class_probability_nz = class_prob_j[classes_j_nonzero]
class_probability_nz_norm = (class_probability_nz /
np.sum(class_probability_nz))
classes_ind = np.searchsorted(class_probability_nz_norm.cumsum(),
rng.rand(nnz))
data.extend(classes[j][classes_j_nonzero][classes_ind])
indptr.append(len(indices))
return sp.csc_matrix((data, indices, indptr),
(n_samples, len(classes)),
dtype=int)
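`random_choice_csc` builds its result directly from `(data, indices, indptr)` triplets; a minimal sketch of that CSC layout with hand-written arrays (values chosen for illustration only):

```python
import numpy as np
import scipy.sparse as sp

# column 0 has nonzeros 5 (row 1) and 7 (row 3); column 1 has nonzero 9 (row 0)
data    = np.array([5, 7, 9])
indices = np.array([1, 3, 0])   # row index of each nonzero
indptr  = np.array([0, 2, 3])   # column j's nonzeros live in data[indptr[j]:indptr[j+1]]
m = sp.csc_matrix((data, indices, indptr), shape=(4, 2), dtype=int)

dense = m.toarray()
assert dense[1, 0] == 5 and dense[3, 0] == 7 and dense[0, 1] == 9
assert dense.sum() == 21
assert m.nnz == 3
```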
| bsd-3-clause |
elkingtonmcb/bcbio-nextgen | bcbio/variation/coverage_experimental.py | 1 | 7319 | import os
import pandas as pd
import subprocess
from collections import Counter
import numpy as np
import math
import pysam
import pybedtools
from bcbio.utils import (file_exists, tmpfile, chdir, splitext_plus,
max_command_length, robust_partition_all)
from bcbio.provenance import do
from bcbio.distributed.transaction import file_transaction
from bcbio.log import logger
from bcbio.pipeline import datadict as dd
from bcbio import broad
from bcbio.pipeline import config_utils
class cov_class:
def __init__(self, size, name, sample):
self.size = int(size)
self.name = name
self.position = ""
self.sample = sample
self.cov = {'4': 0, '10': 0, '20': 0, '50': 0}
self.total = Counter()
self.raw = 0
def update(self, size):
self.size += size
def save(self, cov, pt):
self.raw += cov
self.total[cov] = pt
for cut in [4, 10, 20, 50]:
if cov > cut:
self.cov[str(cut)] += pt
def save_coverage(self, cov, nt):
if cov > 100:
cov = 100
elif cov > 10:
cov = int(math.ceil(cov / 10.0)) * 10
# self.size += size
self.total[cov] += nt
def write_coverage(self, out_file):
# names = ["region", "size", "sample", "10", "25", "50"]
df = pd.DataFrame({'depth': self.total.keys(), 'nt': self.total.values()})
df["size"] = self.size
df["sample"] = self.sample
df.to_csv(out_file, mode='a', header=False, index=False, sep="\t")
def _noise(self):
m = np.average(map(int, self.total.keys()), weights=self.total.values())
x = []
[x.extend([k] * int(float(v) * self.size)) for k, v in self.total.items()]
sd = np.std(x)
return m, sd
def write_regions(self, out_file):
m, sd = self._noise()
with open(out_file, 'a') as out_handle:
print >>out_handle, "\t".join(map(str, [self.position, self.name, self.raw,
"+", self.size, self.sample, m, sd] + self.cov.values()))
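The depth bucketing in `save_coverage` (cap at 100x, round depths above 10x up to the nearest 10) can be sketched as a standalone helper (`bin_depth` is an illustrative name):

```python
import math

def bin_depth(cov):
    # mirror cov_class.save_coverage: cap at 100x, and round anything
    # above 10x up to the nearest multiple of 10
    if cov > 100:
        return 100
    elif cov > 10:
        return int(math.ceil(cov / 10.0)) * 10
    return cov

assert bin_depth(7) == 7       # low depths kept exact
assert bin_depth(11) == 20     # ceil(1.1) * 10
assert bin_depth(15) == 20
assert bin_depth(20) == 20
assert bin_depth(101) == 100   # capped
```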
def _get_exome_coverage_stats(fn, sample, out_file, total_cov):
tmp_region = ""
stats = ""
with open(fn) as in_handle:
for line in in_handle:
if line.startswith("all"):
continue
cols = line.strip().split()
cur_region = "_".join(cols[0:3]) if not isinstance(cols[3], str) else "_".join(cols[0:4])
if cur_region != tmp_region:
if tmp_region != "":
stats.write_regions(out_file)
stats = cov_class(cols[-2], cur_region, sample)
stats.position = "\t".join(cols[0:3])
stats.save(int(cols[-4]), float(cols[-1]))
total_cov.save_coverage(int(cols[-4]), int(cols[-3]))
tmp_region = cur_region
total_cov.update(int(cols[-2]))
stats.write_regions(out_file)
return total_cov
def _silence_run(cmd):
do._do_run(cmd, False)
def coverage(data):
AVERAGE_REGION_STRING_LENGTH = 100
bed_file = dd.get_coverage_experimental(data)
if not bed_file:
return data
work_dir = os.path.join(dd.get_work_dir(data), "report", "coverage")
batch_size = max_command_length() / AVERAGE_REGION_STRING_LENGTH
with chdir(work_dir):
in_bam = data['work_bam']
sample = dd.get_sample_name(data)
logger.debug("doing coverage for %s" % sample)
region_bed = pybedtools.BedTool(bed_file)
parse_file = os.path.join(sample + "_coverage.bed")
parse_total_file = os.path.join(sample + "_cov_total.tsv")
if not file_exists(parse_file):
total_cov = cov_class(0, None, sample)
with file_transaction(parse_file) as out_tx:
with open(out_tx, 'w') as out_handle:
HEADER = ["#chrom", "start", "end", "region", "reads",
"strand", "size", "sample", "mean", "sd", "cutoff10",
"cutoff20", "cutoff4", "cutoff50"]
out_handle.write("\t".join(HEADER) + "\n")
with tmpfile() as tx_tmp_file:
lcount = 0
for chunk in robust_partition_all(batch_size, region_bed):
coord_batch = []
line_batch = ""
for line in chunk:
lcount += 1
chrom = line.chrom
start = max(line.start, 0)
end = line.end
coords = "%s:%s-%s" % (chrom, start, end)
coord_batch.append(coords)
line_batch += str(line)
if not coord_batch:
continue
region_file = pybedtools.BedTool(line_batch,
from_string=True).saveas().fn
coord_string = " ".join(coord_batch)
cmd = ("samtools view -b {in_bam} {coord_string} | "
"bedtools coverage -a {region_file} -b - "
"-hist > {tx_tmp_file}")
_silence_run(cmd.format(**locals()))
total_cov = _get_exome_coverage_stats(os.path.abspath(tx_tmp_file), sample, out_tx, total_cov)
logger.debug("Processed %d regions." % lcount)
total_cov.write_coverage(parse_total_file)
data['coverage'] = os.path.abspath(parse_file)
return data
def variants(data):
if "vrn_file" not in data:
return data
in_vcf = data['vrn_file']
work_dir = os.path.join(dd.get_work_dir(data), "report", "variants")
with chdir(work_dir):
in_bam = data['work_bam']
ref_file = dd.get_ref_file(data)
assert ref_file, "Need the reference genome fasta file."
jvm_opts = broad.get_gatk_framework_opts(data['config'])
gatk_jar = config_utils.get_program("gatk", data['config'], "dir")
bed_file = dd.get_variant_regions(data)
sample = dd.get_sample_name(data)
in_bam = data["work_bam"]
cg_file = os.path.join(sample + "_with-gc.vcf.gz")
parse_file = os.path.join(sample + "_gc-depth-parse.tsv")
if not file_exists(cg_file):
with file_transaction(cg_file) as tx_out:
cmd = ("java -jar {gatk_jar}/GenomeAnalysisTK.jar -T VariantAnnotator -R {ref_file} "
"-L {bed_file} -I {in_bam} "
"-A GCContent --variant {in_vcf} --out {tx_out}")
do.run(cmd.format(**locals()), " GC bias for %s" % in_vcf)
if not file_exists(parse_file):
with file_transaction(parse_file) as out_tx:
with open(out_tx, 'w') as out_handle:
print >>out_handle, "CG\tdepth\tsample"
cmd = ("bcftools query -f '[%GC][\\t%DP][\\t%SAMPLE]\\n' -R {bed_file} {cg_file} >> {out_tx}")
do.run(cmd.format(**locals()), " query for %s" % in_vcf)
logger.debug('parsing coverage: %s' % sample)
# return df
return data
| mit |
marcsans/cnn-physics-perception | phy/lib/python2.7/site-packages/sklearn/neighbors/nearest_centroid.py | 37 | 7348 | # -*- coding: utf-8 -*-
"""
Nearest Centroid Classification
"""
# Author: Robert Layton <robertlayton@gmail.com>
# Olivier Grisel <olivier.grisel@ensta.org>
#
# License: BSD 3 clause
import warnings
import numpy as np
from scipy import sparse as sp
from ..base import BaseEstimator, ClassifierMixin
from ..metrics.pairwise import pairwise_distances
from ..preprocessing import LabelEncoder
from ..utils.validation import check_array, check_X_y, check_is_fitted
from ..utils.sparsefuncs import csc_median_axis_0
from ..utils.multiclass import check_classification_targets
class NearestCentroid(BaseEstimator, ClassifierMixin):
"""Nearest centroid classifier.
Each class is represented by its centroid, with test samples classified to
the class with the nearest centroid.
Read more in the :ref:`User Guide <nearest_centroid_classifier>`.
Parameters
----------
metric : string, or callable
The metric to use when calculating distance between instances in a
feature array. If metric is a string or callable, it must be one of
the options allowed by metrics.pairwise.pairwise_distances for its
metric parameter.
The centroids for the samples corresponding to each class is the point
from which the sum of the distances (according to the metric) of all
samples that belong to that particular class are minimized.
If the "manhattan" metric is provided, this centroid is the median and
for all other metrics, the centroid is set to be the mean.
shrink_threshold : float, optional (default = None)
Threshold for shrinking centroids to remove features.
Attributes
----------
centroids_ : array-like, shape = [n_classes, n_features]
Centroid of each class
Examples
--------
>>> from sklearn.neighbors.nearest_centroid import NearestCentroid
>>> import numpy as np
>>> X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
>>> y = np.array([1, 1, 1, 2, 2, 2])
>>> clf = NearestCentroid()
>>> clf.fit(X, y)
NearestCentroid(metric='euclidean', shrink_threshold=None)
>>> print(clf.predict([[-0.8, -1]]))
[1]
See also
--------
sklearn.neighbors.KNeighborsClassifier: nearest neighbors classifier
Notes
-----
When used for text classification with tf-idf vectors, this classifier is
also known as the Rocchio classifier.
References
----------
Tibshirani, R., Hastie, T., Narasimhan, B., & Chu, G. (2002). Diagnosis of
multiple cancer types by shrunken centroids of gene expression. Proceedings
of the National Academy of Sciences of the United States of America,
99(10), 6567-6572. The National Academy of Sciences.
"""
def __init__(self, metric='euclidean', shrink_threshold=None):
self.metric = metric
self.shrink_threshold = shrink_threshold
def fit(self, X, y):
"""
Fit the NearestCentroid model according to the given training data.
Parameters
----------
X : {array-like, sparse matrix}, shape = [n_samples, n_features]
Training vector, where n_samples is the number of samples and
n_features is the number of features.
Note that centroid shrinking cannot be used with sparse matrices.
y : array, shape = [n_samples]
Target values (integers)
"""
# If X is sparse and the metric is "manhattan", store it in csc
# format, since that makes it easier to calculate the median.
if self.metric == 'manhattan':
X, y = check_X_y(X, y, ['csc'])
else:
X, y = check_X_y(X, y, ['csr', 'csc'])
is_X_sparse = sp.issparse(X)
if is_X_sparse and self.shrink_threshold:
raise ValueError("threshold shrinking not supported"
" for sparse input")
check_classification_targets(y)
n_samples, n_features = X.shape
le = LabelEncoder()
y_ind = le.fit_transform(y)
self.classes_ = classes = le.classes_
n_classes = classes.size
if n_classes < 2:
raise ValueError('y has less than 2 classes')
# Mask mapping each class to its members.
self.centroids_ = np.empty((n_classes, n_features), dtype=np.float64)
# Number of clusters in each class.
nk = np.zeros(n_classes)
for cur_class in range(n_classes):
center_mask = y_ind == cur_class
nk[cur_class] = np.sum(center_mask)
if is_X_sparse:
center_mask = np.where(center_mask)[0]
# XXX: Update other averaging methods according to the metrics.
if self.metric == "manhattan":
# NumPy does not calculate median of sparse matrices.
if not is_X_sparse:
self.centroids_[cur_class] = np.median(X[center_mask], axis=0)
else:
self.centroids_[cur_class] = csc_median_axis_0(X[center_mask])
else:
if self.metric != 'euclidean':
warnings.warn("Averaging for metrics other than "
"euclidean and manhattan not supported. "
"The average is set to be the mean."
)
self.centroids_[cur_class] = X[center_mask].mean(axis=0)
if self.shrink_threshold:
dataset_centroid_ = np.mean(X, axis=0)
# m parameter for determining deviation
m = np.sqrt((1. / nk) + (1. / n_samples))
# Calculate deviation using the standard deviation of centroids.
variance = (X - self.centroids_[y_ind]) ** 2
variance = variance.sum(axis=0)
s = np.sqrt(variance / (n_samples - n_classes))
s += np.median(s) # To deter outliers from affecting the results.
mm = m.reshape(len(m), 1) # Reshape to allow broadcasting.
ms = mm * s
deviation = ((self.centroids_ - dataset_centroid_) / ms)
# Soft thresholding: if the deviation crosses 0 during shrinking,
# it becomes zero.
signs = np.sign(deviation)
deviation = (np.abs(deviation) - self.shrink_threshold)
deviation[deviation < 0] = 0
deviation *= signs
# Now adjust the centroids using the deviation
msd = ms * deviation
self.centroids_ = dataset_centroid_[np.newaxis, :] + msd
return self
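The shrinkage step at the end of `fit` is soft thresholding: magnitudes shrink by the threshold and anything that would cross zero is clamped there. A minimal numpy sketch (`soft_threshold` is an illustrative name):

```python
import numpy as np

def soft_threshold(deviation, t):
    # same sign/clamp trick as in fit(): magnitudes shrink by t, and any
    # value whose magnitude was below t becomes exactly zero
    signs = np.sign(deviation)
    shrunk = np.abs(deviation) - t
    shrunk[shrunk < 0] = 0
    return shrunk * signs

d = np.array([-2.0, -0.3, 0.0, 0.4, 1.5])
out = soft_threshold(d, 0.5)
assert np.allclose(out, [-1.5, 0.0, 0.0, 0.0, 1.0])
```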
def predict(self, X):
"""Perform classification on an array of test vectors X.
The predicted class C for each sample in X is returned.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
Returns
-------
C : array, shape = [n_samples]
Notes
-----
If the metric constructor parameter is "precomputed", X is assumed to
be the distance matrix between the data to be predicted and
``self.centroids_``.
"""
check_is_fitted(self, 'centroids_')
X = check_array(X, accept_sparse='csr')
return self.classes_[pairwise_distances(
X, self.centroids_, metric=self.metric).argmin(axis=1)]
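For the euclidean case, `predict` reduces to an argmin over sample-to-centroid distances; a hypothetical pure-numpy equivalent (toy data mirroring the docstring example):

```python
import numpy as np

centroids = np.array([[-2.0, -1.0], [2.0, 1.0]])  # one row per class
classes = np.array([1, 2])
X = np.array([[-0.8, -1.0], [3.0, 2.0]])

# squared euclidean distance of every sample to every centroid, via broadcasting
d2 = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
pred = classes[d2.argmin(axis=1)]
assert list(pred) == [1, 2]
```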
| mit |
Sentient07/scikit-learn | benchmarks/bench_plot_omp_lars.py | 72 | 4514 | """Benchmarks of orthogonal matching pursuit (:ref:`OMP`) versus least angle
regression (:ref:`least_angle_regression`)
The input data is mostly low rank but is a fat infinite tail.
"""
from __future__ import print_function
import gc
import sys
from time import time
import six
import numpy as np
from sklearn.linear_model import lars_path, orthogonal_mp
from sklearn.datasets.samples_generator import make_sparse_coded_signal
def compute_bench(samples_range, features_range):
it = 0
results = dict()
lars = np.empty((len(features_range), len(samples_range)))
lars_gram = lars.copy()
omp = lars.copy()
omp_gram = lars.copy()
max_it = len(samples_range) * len(features_range)
for i_s, n_samples in enumerate(samples_range):
for i_f, n_features in enumerate(features_range):
it += 1
n_informative = n_features / 10
print('====================')
print('Iteration %03d of %03d' % (it, max_it))
print('====================')
# dataset_kwargs = {
# 'n_train_samples': n_samples,
# 'n_test_samples': 2,
# 'n_features': n_features,
# 'n_informative': n_informative,
# 'effective_rank': min(n_samples, n_features) / 10,
# #'effective_rank': None,
# 'bias': 0.0,
# }
dataset_kwargs = {
'n_samples': 1,
'n_components': n_features,
'n_features': n_samples,
'n_nonzero_coefs': n_informative,
'random_state': 0
}
print("n_samples: %d" % n_samples)
print("n_features: %d" % n_features)
y, X, _ = make_sparse_coded_signal(**dataset_kwargs)
X = np.asfortranarray(X)
gc.collect()
print("benchmarking lars_path (with Gram):", end='')
sys.stdout.flush()
tstart = time()
G = np.dot(X.T, X) # precomputed Gram matrix
Xy = np.dot(X.T, y)
lars_path(X, y, Xy=Xy, Gram=G, max_iter=n_informative)
delta = time() - tstart
print("%0.3fs" % delta)
lars_gram[i_f, i_s] = delta
gc.collect()
print("benchmarking lars_path (without Gram):", end='')
sys.stdout.flush()
tstart = time()
lars_path(X, y, Gram=None, max_iter=n_informative)
delta = time() - tstart
print("%0.3fs" % delta)
lars[i_f, i_s] = delta
gc.collect()
print("benchmarking orthogonal_mp (with Gram):", end='')
sys.stdout.flush()
tstart = time()
orthogonal_mp(X, y, precompute=True,
n_nonzero_coefs=n_informative)
delta = time() - tstart
print("%0.3fs" % delta)
omp_gram[i_f, i_s] = delta
gc.collect()
print("benchmarking orthogonal_mp (without Gram):", end='')
sys.stdout.flush()
tstart = time()
orthogonal_mp(X, y, precompute=False,
n_nonzero_coefs=n_informative)
delta = time() - tstart
print("%0.3fs" % delta)
omp[i_f, i_s] = delta
results['time(LARS) / time(OMP)\n (w/ Gram)'] = (lars_gram / omp_gram)
results['time(LARS) / time(OMP)\n (w/o Gram)'] = (lars / omp)
return results
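Both `Gram=G`/`precompute=True` variants above cache `G = X.T X` and `Xy = X.T y`; the payoff is that feature/residual correlations can then be formed without touching `X` again. A scikit-learn-independent sketch of that identity:

```python
import numpy as np

rng = np.random.RandomState(0)
X = rng.randn(50, 20)
y = rng.randn(50)
coef = rng.randn(20)

G = np.dot(X.T, X)    # (n_features, n_features), computed once
Xy = np.dot(X.T, y)   # (n_features,), computed once

# correlation of each feature with the current residual, two ways:
direct = np.dot(X.T, y - np.dot(X, coef))   # touches X every iteration
cached = Xy - np.dot(G, coef)               # only uses the cached products
assert np.allclose(direct, cached)
```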
if __name__ == '__main__':
samples_range = np.linspace(1000, 5000, 5).astype(np.int)
features_range = np.linspace(1000, 5000, 5).astype(np.int)
results = compute_bench(samples_range, features_range)
max_time = max(np.max(t) for t in results.values())
import matplotlib.pyplot as plt
fig = plt.figure('scikit-learn OMP vs. LARS benchmark results')
for i, (label, timings) in enumerate(sorted(six.iteritems(results))):
ax = fig.add_subplot(1, 2, i+1)
vmax = max(1 - timings.min(), -1 + timings.max())
plt.matshow(timings, fignum=False, vmin=1 - vmax, vmax=1 + vmax)
ax.set_xticklabels([''] + [str(each) for each in samples_range])
ax.set_yticklabels([''] + [str(each) for each in features_range])
plt.xlabel('n_samples')
plt.ylabel('n_features')
plt.title(label)
plt.subplots_adjust(0.1, 0.08, 0.96, 0.98, 0.4, 0.63)
ax = plt.axes([0.1, 0.08, 0.8, 0.06])
plt.colorbar(cax=ax, orientation='horizontal')
plt.show()
| bsd-3-clause |
airazabal/smartemail | bin/python_parser_remove_tokens.py | 1 | 2463 | import csv
import re
import codecs
input_files = []
# input_files.append("v2_COI_Set_1_modified.csv")
# input_files.append("v2COI_Set_2_modified.csv")
# input_files.append("v2COI_Set_3_modified.csv")
# input_files.append("v2_Updated_Additional_COI.csv")
#input_files.append("v2COI_TrainingSet4_4_18_modified.csv")
input_files.append("additional_100_emails.csv")
## the below file is encoded in "ISO-8859-1" - must take care when opening it
#input_files.append("Additional_COI_50_modified_edited.csv")
pattern1 = re.compile(r"\.\.\.")  # "..."
pattern2 = re.compile(r"\.\.")    # ".."
pattern3 = re.compile(r"\.\s")    # ". "
pattern4 = re.compile(r"\?")      # "?"
pattern5 = re.compile(r"\?\s")    # "? " (dead: pattern4 already removed every "?")
pattern6 = re.compile(r"\.\,")    # ".,"
pattern7 = re.compile(r"\r|\n")   # line breaks
pattern8 = re.compile(r"\n")      # redundant once pattern7 has run
## extra work for the 100 emails file. starts as a weirdly encoded xlsx
# excelFileString = "IBM_EntityExtraction 1";
# import pandas as pd
# xls_file = pd.read_excel("email_data/input_files/"+excelFileString+".xlsx", sheetname="GroundTruth_Training")
# xls_file.to_csv("email_data/input_files/Training_Data/100_emails_intermediate_output.csv", index = False, encoding='utf-8')
# input_files.append("100_emails_intermediate_output.csv")
for input_file in input_files:
with codecs.open("email_data/input_files/Training_Data/output_transformed/transformed_" + input_file, "w", "utf8") as output:
writer = csv.writer(output, delimiter=',', quotechar='"', quoting=csv.QUOTE_ALL) # define the output
print("beginning with " + input_file)
with codecs.open("email_data/input_files/Training_Data/"+input_file, "r", "utf-8") as f:
reader = csv.reader(f, delimiter=',', quotechar='"')
for row in reader:
#print("******* TESTING ********\n\n\n")
#print("input string")
#print(row[1])
outString = pattern1.sub(" ", row[1])
outString = pattern2.sub(" ", outString)
outString = pattern3.sub(" ", outString)
outString = pattern4.sub(" ", outString)
outString = pattern5.sub(" ", outString)
outString = pattern6.sub(",", outString)
outString = pattern7.sub(" ", outString)
outString = pattern8.sub(" ", outString)
#print ("******* output string *********")
#print(outString)
writer.writerow([row[0], outString])
print("finished with " + input_file)
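Applied in order, the substitutions above collapse ellipses, sentence-final periods, question marks, and line breaks into single spaces. A minimal sketch of the same pipeline on a sample string (note the `\?\s` rule can never fire once every `?` is already gone):

```python
import re

# same substitutions, in the same order as the script above
pipeline = [
    (re.compile(r"\.\.\."), " "),
    (re.compile(r"\.\."), " "),
    (re.compile(r"\.\s"), " "),
    (re.compile(r"\?"), " "),
    (re.compile(r"\?\s"), " "),   # dead: the previous rule removed every "?"
    (re.compile(r"\.\,"), ","),
    (re.compile(r"\r|\n"), " "),
    (re.compile(r"\n"), " "),     # redundant after the previous rule
]

def clean(text):
    for pattern, repl in pipeline:
        text = pattern.sub(repl, text)
    return text

sample = "Hello... are you there?\nYes. I am."
```

A trailing period with no whitespace after it survives the `\.\s` rule, so end-of-string punctuation is left intact.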
| apache-2.0 |
hainm/statsmodels | statsmodels/base/model.py | 25 | 76781 | from __future__ import print_function
from statsmodels.compat.python import iterkeys, lzip, range, reduce
import numpy as np
from scipy import stats
from statsmodels.base.data import handle_data
from statsmodels.tools.tools import recipr, nan_dot
from statsmodels.stats.contrast import ContrastResults, WaldTestResults
from statsmodels.tools.decorators import resettable_cache, cache_readonly
import statsmodels.base.wrapper as wrap
from statsmodels.tools.numdiff import approx_fprime
from statsmodels.formula import handle_formula_data
from statsmodels.compat.numpy import np_matrix_rank
from statsmodels.base.optimizer import Optimizer
_model_params_doc = """
Parameters
----------
endog : array-like
1-d endogenous response variable. The dependent variable.
exog : array-like
A nobs x k array where `nobs` is the number of observations and `k`
is the number of regressors. An intercept is not included by default
and should be added by the user. See
:func:`statsmodels.tools.add_constant`."""
_missing_param_doc = """\
missing : str
Available options are 'none', 'drop', and 'raise'. If 'none', no nan
checking is done. If 'drop', any observations with nans are dropped.
If 'raise', an error is raised. Default is 'none.'"""
_extra_param_doc = """
hasconst : None or bool
Indicates whether the RHS includes a user-supplied constant. If True,
a constant is not checked for and k_constant is set to 1 and all
result statistics are calculated as if a constant is present. If
False, a constant is not checked for and k_constant is set to 0.
"""
class Model(object):
__doc__ = """
A (predictive) statistical model. Intended to be subclassed not used.
%(params_doc)s
%(extra_params_doc)s
Notes
-----
`endog` and `exog` are references to any data provided. So if the data is
already stored in numpy arrays and it is changed then `endog` and `exog`
will change as well.
""" % {'params_doc' : _model_params_doc,
'extra_params_doc' : _missing_param_doc + _extra_param_doc}
def __init__(self, endog, exog=None, **kwargs):
missing = kwargs.pop('missing', 'none')
hasconst = kwargs.pop('hasconst', None)
self.data = self._handle_data(endog, exog, missing, hasconst,
**kwargs)
self.k_constant = self.data.k_constant
self.exog = self.data.exog
self.endog = self.data.endog
self._data_attr = []
self._data_attr.extend(['exog', 'endog', 'data.exog', 'data.endog'])
if 'formula' not in kwargs: # won't be able to unpickle without these
self._data_attr.extend(['data.orig_endog', 'data.orig_exog'])
# store keys for extras if we need to recreate model instance
# we don't need 'missing', maybe we need 'hasconst'
self._init_keys = list(kwargs.keys())
if hasconst is not None:
self._init_keys.append('hasconst')
def _get_init_kwds(self):
"""return dictionary with extra keys used in model.__init__
"""
kwds = dict(((key, getattr(self, key, None))
for key in self._init_keys))
return kwds
def _handle_data(self, endog, exog, missing, hasconst, **kwargs):
data = handle_data(endog, exog, missing, hasconst, **kwargs)
# kwargs arrays could have changed, easier to just attach here
for key in kwargs:
if key in ['design_info', 'formula']: # leave attached to data
continue
# pop so we don't start keeping all these twice or references
try:
setattr(self, key, data.__dict__.pop(key))
except KeyError: # panel already pops keys in data handling
pass
return data
@classmethod
def from_formula(cls, formula, data, subset=None, *args, **kwargs):
"""
Create a Model from a formula and dataframe.
Parameters
----------
formula : str or generic Formula object
The formula specifying the model
data : array-like
The data for the model. See Notes.
subset : array-like
An array-like object of booleans, integers, or index values that
indicate the subset of df to use in the model. Assumes df is a
`pandas.DataFrame`
args : extra arguments
These are passed to the model
kwargs : extra keyword arguments
These are passed to the model with one exception. The
``eval_env`` keyword is passed to patsy. It can be either a
:class:`patsy:patsy.EvalEnvironment` object or an integer
indicating the depth of the namespace to use. For example, the
default ``eval_env=0`` uses the calling namespace. If you wish
to use a "clean" environment set ``eval_env=-1``.
Returns
-------
model : Model instance
Notes
-----
data must define __getitem__ with the keys in the formula terms
args and kwargs are passed on to the model instantiation. E.g.,
a numpy structured or rec array, a dictionary, or a pandas DataFrame.
"""
#TODO: provide a docs template for args/kwargs from child models
#TODO: subset could use syntax. issue #469.
if subset is not None:
data = data.loc[subset]
eval_env = kwargs.pop('eval_env', None)
if eval_env is None:
eval_env = 2
elif eval_env == -1:
from patsy import EvalEnvironment
eval_env = EvalEnvironment({})
else:
eval_env += 1 # we're going down the stack again
missing = kwargs.get('missing', 'drop')
if missing == 'none':  # with patsy it's drop or raise. let's raise.
missing = 'raise'
tmp = handle_formula_data(data, None, formula, depth=eval_env,
missing=missing)
((endog, exog), missing_idx, design_info) = tmp
kwargs.update({'missing_idx': missing_idx,
'missing': missing,
'formula': formula,  # attach formula for unpickling
'design_info': design_info})
mod = cls(endog, exog, *args, **kwargs)
mod.formula = formula
# since we got a dataframe, attach the original
mod.data.frame = data
return mod
@property
def endog_names(self):
return self.data.ynames
@property
def exog_names(self):
return self.data.xnames
def fit(self):
"""
Fit a model to data.
"""
raise NotImplementedError
def predict(self, params, exog=None, *args, **kwargs):
"""
After a model has been fit predict returns the fitted values.
This is a placeholder intended to be overwritten by individual models.
"""
raise NotImplementedError
class LikelihoodModel(Model):
"""
Likelihood model is a subclass of Model.
"""
def __init__(self, endog, exog=None, **kwargs):
super(LikelihoodModel, self).__init__(endog, exog, **kwargs)
self.initialize()
def initialize(self):
"""
Initialize (possibly re-initialize) a Model instance. For
instance, the design matrix of a linear model may change
and some things must be recomputed.
"""
pass
# TODO: if the intent is to re-initialize the model with new data then this
# method needs to take inputs...
def loglike(self, params):
"""
Log-likelihood of model.
"""
raise NotImplementedError
def score(self, params):
"""
Score vector of model.
The gradient of logL with respect to each parameter.
"""
raise NotImplementedError
def information(self, params):
"""
Fisher information matrix of model
Returns -Hessian of loglike evaluated at params.
"""
raise NotImplementedError
def hessian(self, params):
"""
The Hessian matrix of the model
"""
raise NotImplementedError
def fit(self, start_params=None, method='newton', maxiter=100,
full_output=True, disp=True, fargs=(), callback=None, retall=False,
skip_hessian=False, **kwargs):
"""
Fit method for likelihood based models
Parameters
----------
start_params : array-like, optional
Initial guess of the solution for the loglikelihood maximization.
The default is an array of zeros.
method : str, optional
The `method` determines which solver from `scipy.optimize`
is used, and it can be chosen from among the following strings:
- 'newton' for Newton-Raphson, 'nm' for Nelder-Mead
- 'bfgs' for Broyden-Fletcher-Goldfarb-Shanno (BFGS)
- 'lbfgs' for limited-memory BFGS with optional box constraints
- 'powell' for modified Powell's method
- 'cg' for conjugate gradient
- 'ncg' for Newton-conjugate gradient
- 'basinhopping' for global basin-hopping solver
The explicit arguments in `fit` are passed to the solver,
with the exception of the basin-hopping solver. Each
solver has several optional arguments that are not the same across
solvers. See the notes section below (or scipy.optimize) for the
available arguments and for the list of explicit arguments that the
basin-hopping solver supports.
maxiter : int, optional
The maximum number of iterations to perform.
full_output : bool, optional
Set to True to have all available output in the Results object's
mle_retvals attribute. The output is dependent on the solver.
See LikelihoodModelResults notes section for more information.
disp : bool, optional
Set to True to print convergence messages.
fargs : tuple, optional
Extra arguments passed to the likelihood function, i.e.,
loglike(x,*args)
callback : callable callback(xk), optional
Called after each iteration, as callback(xk), where xk is the
current parameter vector.
retall : bool, optional
Set to True to return list of solutions at each iteration.
Available in Results object's mle_retvals attribute.
skip_hessian : bool, optional
If False (default), then the negative inverse hessian is calculated
after the optimization. If True, then the hessian will not be
calculated. However, it will be available in methods that use the
hessian in the optimization (currently only with `"newton"`).
kwargs : keywords
All kwargs are passed to the chosen solver with one exception. The
following keyword controls what happens after the fit::
warn_convergence : bool, optional
If True, checks the model for the converged flag. If the
converged flag is False, a ConvergenceWarning is issued.
Notes
-----
The 'basinhopping' solver ignores `maxiter`, `retall`, `full_output`
explicit arguments.
Optional arguments for solvers (see returned Results.mle_settings)::
'newton'
tol : float
Relative error in params acceptable for convergence.
'nm' -- Nelder Mead
xtol : float
Relative error in params acceptable for convergence
ftol : float
Relative error in loglike(params) acceptable for
convergence
maxfun : int
Maximum number of function evaluations to make.
'bfgs'
gtol : float
Stop when norm of gradient is less than gtol.
norm : float
Order of norm (np.Inf is max, -np.Inf is min)
epsilon
If fprime is approximated, use this value for the step
size. Only relevant if LikelihoodModel.score is None.
'lbfgs'
m : int
This many terms are used for the Hessian approximation.
factr : float
A stop condition that is a variant of relative error.
pgtol : float
A stop condition that uses the projected gradient.
epsilon
If fprime is approximated, use this value for the step
size. Only relevant if LikelihoodModel.score is None.
maxfun : int
Maximum number of function evaluations to make.
bounds : sequence
(min, max) pairs for each element in x,
defining the bounds on that parameter.
Use None for one of min or max when there is no bound
in that direction.
'cg'
gtol : float
Stop when norm of gradient is less than gtol.
norm : float
Order of norm (np.Inf is max, -np.Inf is min)
epsilon : float
If fprime is approximated, use this value for the step
size. Can be scalar or vector. Only relevant if
Likelihoodmodel.score is None.
'ncg'
fhess_p : callable f'(x,*args)
Function which computes the Hessian of f times an arbitrary
vector, p. Should only be supplied if
LikelihoodModel.hessian is None.
avextol : float
Stop when the average relative error in the minimizer
falls below this amount.
epsilon : float or ndarray
If fhess is approximated, use this value for the step size.
Only relevant if Likelihoodmodel.hessian is None.
'powell'
xtol : float
Line-search error tolerance
ftol : float
Relative error in loglike(params) acceptable for
convergence.
maxfun : int
Maximum number of function evaluations to make.
start_direc : ndarray
Initial direction set.
'basinhopping'
niter : integer
The number of basin hopping iterations.
niter_success : integer
Stop the run if the global minimum candidate remains the
same for this number of iterations.
T : float
The "temperature" parameter for the accept or reject
criterion. Higher "temperatures" mean that larger jumps
in function value will be accepted. For best results
`T` should be comparable to the separation (in function
value) between local minima.
stepsize : float
Initial step size for use in the random displacement.
interval : integer
The interval for how often to update the `stepsize`.
minimizer : dict
Extra keyword arguments to be passed to the minimizer
`scipy.optimize.minimize()`, for example 'method' - the
minimization method (e.g. 'L-BFGS-B'), or 'tol' - the
tolerance for termination. Other arguments are mapped from
explicit argument of `fit`:
- `args` <- `fargs`
- `jac` <- `score`
- `hess` <- `hess`
"""
Hinv = None # JP error if full_output=0, Hinv not defined
if start_params is None:
if hasattr(self, 'start_params'):
start_params = self.start_params
elif self.exog is not None:
# fails for shape (K,)?
start_params = [0] * self.exog.shape[1]
else:
raise ValueError("If exog is None, then start_params should "
"be specified")
# TODO: separate args from nonarg taking score and hessian, ie.,
# user-supplied and numerically evaluated estimate frprime doesn't take
# args in most (any?) of the optimize function
nobs = self.endog.shape[0]
f = lambda params, *args: -self.loglike(params, *args) / nobs
score = lambda params, *args: -self.score(params, *args) / nobs
try:
hess = lambda params, *args: -self.hessian(params, *args) / nobs
except Exception:
hess = None
if method == 'newton':
score = lambda params, *args: self.score(params, *args) / nobs
hess = lambda params, *args: self.hessian(params, *args) / nobs
#TODO: why are score and hess positive?
warn_convergence = kwargs.pop('warn_convergence', True)
optimizer = Optimizer()
xopt, retvals, optim_settings = optimizer._fit(f, score, start_params,
fargs, kwargs,
hessian=hess,
method=method,
disp=disp,
maxiter=maxiter,
callback=callback,
retall=retall,
full_output=full_output)
#NOTE: this is for fit_regularized and should be generalized
cov_params_func = kwargs.setdefault('cov_params_func', None)
if cov_params_func:
Hinv = cov_params_func(self, xopt, retvals)
elif method == 'newton' and full_output:
Hinv = np.linalg.inv(-retvals['Hessian']) / nobs
elif not skip_hessian:
try:
Hinv = np.linalg.inv(-1 * self.hessian(xopt))
except Exception:
#might want custom warning ResultsWarning? NumericalWarning?
from warnings import warn
warndoc = ('Inverting hessian failed, no bse or '
'cov_params available')
warn(warndoc, RuntimeWarning)
Hinv = None
if 'cov_type' in kwargs:
cov_kwds = kwargs.get('cov_kwds', {})
kwds = {'cov_type':kwargs['cov_type'], 'cov_kwds':cov_kwds}
else:
kwds = {}
if 'use_t' in kwargs:
kwds['use_t'] = kwargs['use_t']
#prints for debugging
#print('kwargs inLikelihoodModel.fit', kwargs)
#print('kwds inLikelihoodModel.fit', kwds)
#TODO: add Hessian approximation and change the above if needed
mlefit = LikelihoodModelResults(self, xopt, Hinv, scale=1., **kwds)
#TODO: hardcode scale?
if isinstance(retvals, dict):
mlefit.mle_retvals = retvals
if warn_convergence and not retvals['converged']:
from warnings import warn
from statsmodels.tools.sm_exceptions import ConvergenceWarning
warn("Maximum Likelihood optimization failed to converge. "
"Check mle_retvals", ConvergenceWarning)
mlefit.mle_settings = optim_settings
return mlefit
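`fit` hands the optimizer the negative average log-likelihood (`-loglike / nobs`) together with a matching scaled score and Hessian. The same pattern, stripped to stdlib Python for a Bernoulli likelihood maximized by Newton-Raphson (a sketch of the scaling convention, not the statsmodels optimizer):

```python
import math

data = [1, 0, 1, 1, 0, 1, 1, 1]  # toy Bernoulli observations
nobs = len(data)

def neg_avg_loglike(p):
    # -loglike(p) / nobs, the objective as LikelihoodModel.fit scales it
    return -sum(y * math.log(p) + (1 - y) * math.log(1 - p)
                for y in data) / nobs

def score(p):
    # first derivative of the average log-likelihood
    return sum(y / p - (1 - y) / (1 - p) for y in data) / nobs

def hessian(p):
    # second derivative of the average log-likelihood
    return sum(-y / p ** 2 - (1 - y) / (1 - p) ** 2 for y in data) / nobs

p = 0.5  # start_params
for _ in range(25):
    p = p - score(p) / hessian(p)  # Newton-Raphson update

# the MLE of a Bernoulli probability is the sample mean (6/8 here)
```

Dividing all three callables by `nobs` keeps the objective on a per-observation scale, so optimizer tolerances behave consistently across sample sizes.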
#TODO: the below is unfinished
class GenericLikelihoodModel(LikelihoodModel):
"""
Allows the fitting of any likelihood function via maximum likelihood.
A subclass needs to specify at least the log-likelihood
If the log-likelihood is specified for each observation, then results that
require the Jacobian will be available. (The other case is not tested yet.)
Notes
-----
Optimization methods that require only a likelihood function are 'nm' and
'powell'
Optimization methods that require a likelihood function and a
score/gradient are 'bfgs', 'cg', and 'ncg'. A function to compute the
Hessian is optional for 'ncg'.
Optimization method that require a likelihood function, a score/gradient,
and a Hessian is 'newton'
If they are not overwritten by a subclass, then numerical gradient,
Jacobian and Hessian of the log-likelihood are calculated by numerical
forward differentiation. This might, in some cases, result in precision
problems, and the Hessian might not be positive definite. Even if the
Hessian is not positive definite the covariance matrix of the parameter
estimates based on the outer product of the Jacobian might still be valid.
Examples
--------
see also subclasses in directory miscmodels
import statsmodels.api as sm
data = sm.datasets.spector.load()
data.exog = sm.add_constant(data.exog)
# in this dir
from model import GenericLikelihoodModel
probit_mod = sm.Probit(data.endog, data.exog)
probit_res = probit_mod.fit()
loglike = probit_mod.loglike
score = probit_mod.score
mod = GenericLikelihoodModel(data.endog, data.exog, loglike, score)
res = mod.fit(method="nm", maxiter = 500)
import numpy as np
np.allclose(res.params, probit_res.params)
"""
def __init__(self, endog, exog=None, loglike=None, score=None,
hessian=None, missing='none', extra_params_names=None,
**kwds):
# let them be none in case user wants to use inheritance
if loglike is not None:
self.loglike = loglike
if score is not None:
self.score = score
if hessian is not None:
self.hessian = hessian
self.__dict__.update(kwds)
# TODO: data structures?
#TODO temporary solution, force approx normal
#self.df_model = 9999
#somewhere: CacheWriteWarning: 'df_model' cannot be overwritten
super(GenericLikelihoodModel, self).__init__(endog, exog,
missing=missing)
# this won't work for ru2nmnl, maybe np.ndim of a dict?
if exog is not None:
#try:
self.nparams = (exog.shape[1] if np.ndim(exog) == 2 else 1)
if extra_params_names is not None:
self._set_extra_params_names(extra_params_names)
def _set_extra_params_names(self, extra_params_names):
# check param_names
if extra_params_names is not None:
if self.exog is not None:
self.exog_names.extend(extra_params_names)
else:
self.data.xnames = extra_params_names
self.nparams = len(self.exog_names)
#this is redundant and not used when subclassing
def initialize(self):
if not self.score: # right now score is not optional
self.score = approx_fprime
if not self.hessian:
pass
else: # can use approx_hess_p if we have a gradient
if not self.hessian:
pass
#Initialize is called by
#statsmodels.model.LikelihoodModel.__init__
#and should contain any preprocessing that needs to be done for a model
from statsmodels.tools import tools
if self.exog is not None:
# assume constant
self.df_model = float(np_matrix_rank(self.exog) - 1)
self.df_resid = (float(self.exog.shape[0] -
np_matrix_rank(self.exog)))
else:
self.df_model = np.nan
self.df_resid = np.nan
super(GenericLikelihoodModel, self).initialize()
def expandparams(self, params):
'''
expand to full parameter array when some parameters are fixed
Parameters
----------
params : array
reduced parameter array
Returns
-------
paramsfull : array
expanded parameter array where fixed parameters are included
Notes
-----
Calling this requires that self.fixed_params and self.fixed_paramsmask
are defined.
*developer notes:*
This can be used in the log-likelihood to ...
this could also be replaced by a more general parameter
transformation.
'''
paramsfull = self.fixed_params.copy()
paramsfull[self.fixed_paramsmask] = params
return paramsfull
def reduceparams(self, params):
return params[self.fixed_paramsmask]
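`expandparams` and `reduceparams` are inverse maps between the full parameter vector and the free subset selected by `fixed_paramsmask`. A stdlib sketch of the same round trip on plain lists (numpy boolean indexing does this in the real code; the mask convention here, True = free, matches `paramsfull[mask] = params` above):

```python
fixed_params = [0.0, 1.5, 0.0, -2.0]           # full vector, fixed values filled in
fixed_paramsmask = [True, False, True, False]  # True = free, False = held fixed

def expandparams(params):
    """Scatter the free params back into a copy of the full vector."""
    full = list(fixed_params)
    free_values = iter(params)
    for i, free in enumerate(fixed_paramsmask):
        if free:
            full[i] = next(free_values)
    return full

def reduceparams(params):
    """Keep only the free entries of a full parameter vector."""
    return [p for p, free in zip(params, fixed_paramsmask) if free]

full = expandparams([3.0, 4.0])
```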
def loglike(self, params):
return self.loglikeobs(params).sum(0)
def nloglike(self, params):
return -self.loglikeobs(params).sum(0)
def loglikeobs(self, params):
return -self.nloglikeobs(params)
def score(self, params):
'''
Gradient of log-likelihood evaluated at params
'''
kwds = {}
kwds.setdefault('centered', True)
return approx_fprime(params, self.loglike, **kwds).ravel()
def score_obs(self, params, **kwds):
'''
Jacobian/Gradient of log-likelihood evaluated at params for each
observation.
'''
#kwds.setdefault('epsilon', 1e-4)
kwds.setdefault('centered', True)
return approx_fprime(params, self.loglikeobs, **kwds)
jac = np.deprecate(score_obs, 'jac', 'score_obs', "Use score_obs method."
" jac will be removed in 0.7.")
def hessian(self, params):
'''
Hessian of log-likelihood evaluated at params
'''
from statsmodels.tools.numdiff import approx_hess
# need options for hess (epsilon)
return approx_hess(params, self.loglike)
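When no analytic `score` is supplied, the class falls back to a centered finite-difference gradient of the log-likelihood (`approx_fprime` with `centered=True`). A stdlib sketch of that centered scheme, (f(x + h) - f(x - h)) / 2h per coordinate:

```python
def approx_gradient(f, x, eps=1e-6):
    """Centered finite-difference gradient of f at point x (list of floats)."""
    grad = []
    for i in range(len(x)):
        up = list(x)
        dn = list(x)
        up[i] += eps
        dn[i] -= eps
        grad.append((f(up) - f(dn)) / (2 * eps))
    return grad

# quadratic with known gradient (2*x0, 4*x1)
f = lambda x: x[0] ** 2 + 2 * x[1] ** 2
g = approx_gradient(f, [3.0, -1.0])
```

The centered difference has O(eps^2) truncation error, versus O(eps) for the forward difference, which is why it is the default here.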
def fit(self, start_params=None, method='nm', maxiter=500, full_output=1,
disp=1, callback=None, retall=0, **kwargs):
"""
Fit the model using maximum likelihood.
The rest of the docstring is from
statsmodels.LikelihoodModel.fit
"""
if start_params is None:
if hasattr(self, 'start_params'):
start_params = self.start_params
else:
start_params = 0.1 * np.ones(self.nparams)
fit_method = super(GenericLikelihoodModel, self).fit
mlefit = fit_method(start_params=start_params,
method=method, maxiter=maxiter,
full_output=full_output,
disp=disp, callback=callback, **kwargs)
genericmlefit = GenericLikelihoodModelResults(self, mlefit)
#amend param names
exog_names = [] if (self.exog_names is None) else self.exog_names
k_miss = len(exog_names) - len(mlefit.params)
if not k_miss == 0:
if k_miss < 0:
self._set_extra_params_names(
['par%d' % i for i in range(-k_miss)])
else:
# I don't want to raise after we have already fit()
import warnings
warnings.warn('more exog_names than parameters', UserWarning)
return genericmlefit
#fit.__doc__ += LikelihoodModel.fit.__doc__
class Results(object):
"""
Class to contain model results
Parameters
----------
model : class instance
the previously specified model instance
params : array
parameter estimates from the fit model
"""
def __init__(self, model, params, **kwd):
self.__dict__.update(kwd)
self.initialize(model, params, **kwd)
self._data_attr = []
def initialize(self, model, params, **kwd):
self.params = params
self.model = model
if hasattr(model, 'k_constant'):
self.k_constant = model.k_constant
def predict(self, exog=None, transform=True, *args, **kwargs):
"""
Call self.model.predict with self.params as the first argument.
Parameters
----------
exog : array-like, optional
The values for which you want to predict.
transform : bool, optional
If the model was fit via a formula, do you want to pass
exog through the formula. Default is True. E.g., if you fit
a model y ~ log(x1) + log(x2), and transform is True, then
you can pass a data structure that contains x1 and x2 in
their original form. Otherwise, you'd need to log the data
first.
args, kwargs :
Some models can take additional arguments or keywords, see the
predict method of the model for the details.
Returns
-------
prediction : ndarray or pandas.Series
See self.model.predict
"""
if transform and hasattr(self.model, 'formula') and exog is not None:
from patsy import dmatrix
exog = dmatrix(self.model.data.design_info.builder,
exog)
if exog is not None:
exog = np.asarray(exog)
if exog.ndim == 1 and (self.model.exog.ndim == 1 or
self.model.exog.shape[1] == 1):
exog = exog[:, None]
exog = np.atleast_2d(exog) # needed in count model shape[1]
return self.model.predict(self.params, exog, *args, **kwargs)
#TODO: public method?
class LikelihoodModelResults(Results):
"""
Class to contain results from likelihood models
Parameters
----------
model : LikelihoodModel instance or subclass instance
LikelihoodModelResults holds a reference to the model that is fit.
params : 1d array_like
parameter estimates from estimated model
normalized_cov_params : 2d array
Normalized (before scaling) covariance of params. (dot(X.T,X))**-1
scale : float
For (some subset of models) scale will typically be the
mean square error from the estimated model (sigma^2)
Returns
-------
**Attributes**
mle_retvals : dict
Contains the values returned from the chosen optimization method if
full_output is True during the fit. Available only if the model
is fit by maximum likelihood. See notes below for the output from
the different methods.
mle_settings : dict
Contains the arguments passed to the chosen optimization method.
Available if the model is fit by maximum likelihood. See
LikelihoodModel.fit for more information.
model : model instance
LikelihoodResults contains a reference to the model that is fit.
params : ndarray
The parameters estimated for the model.
scale : float
The scaling factor of the model given during instantiation.
tvalues : array
The t-values of the standard errors.
Notes
-----
The covariance of params is given by scale times normalized_cov_params.
Return values by solver if full_output is True during fit:
'newton'
fopt : float
The value of the (negative) loglikelihood at its
minimum.
iterations : int
Number of iterations performed.
score : ndarray
The score vector at the optimum.
Hessian : ndarray
The Hessian at the optimum.
warnflag : int
1 if maxiter is exceeded. 0 if successful convergence.
converged : bool
True: converged. False: did not converge.
allvecs : list
List of solutions at each iteration.
'nm'
fopt : float
The value of the (negative) loglikelihood at its
minimum.
iterations : int
Number of iterations performed.
warnflag : int
1: Maximum number of function evaluations made.
2: Maximum number of iterations reached.
converged : bool
True: converged. False: did not converge.
allvecs : list
List of solutions at each iteration.
'bfgs'
fopt : float
Value of the (negative) loglikelihood at its minimum.
gopt : float
Value of gradient at minimum, which should be near 0.
Hinv : ndarray
value of the inverse Hessian matrix at minimum. Note
that this is just an approximation and will often be
different from the value of the analytic Hessian.
fcalls : int
Number of calls to loglike.
gcalls : int
Number of calls to gradient/score.
warnflag : int
1: Maximum number of iterations exceeded. 2: Gradient
and/or function calls are not changing.
converged : bool
True: converged. False: did not converge.
allvecs : list
Results at each iteration.
'lbfgs'
fopt : float
Value of the (negative) loglikelihood at its minimum.
gopt : float
Value of gradient at minimum, which should be near 0.
fcalls : int
Number of calls to loglike.
warnflag : int
Warning flag:
- 0 if converged
- 1 if too many function evaluations or too many iterations
- 2 if stopped for another reason
converged : bool
True: converged. False: did not converge.
'powell'
fopt : float
Value of the (negative) loglikelihood at its minimum.
direc : ndarray
Current direction set.
iterations : int
Number of iterations performed.
fcalls : int
Number of calls to loglike.
warnflag : int
1: Maximum number of function evaluations. 2: Maximum number
of iterations.
converged : bool
True : converged. False: did not converge.
allvecs : list
Results at each iteration.
'cg'
fopt : float
Value of the (negative) loglikelihood at its minimum.
fcalls : int
Number of calls to loglike.
gcalls : int
Number of calls to gradient/score.
warnflag : int
1: Maximum number of iterations exceeded. 2: Gradient and/
or function calls not changing.
converged : bool
True: converged. False: did not converge.
allvecs : list
Results at each iteration.
'ncg'
fopt : float
Value of the (negative) loglikelihood at its minimum.
fcalls : int
Number of calls to loglike.
gcalls : int
Number of calls to gradient/score.
hcalls : int
Number of calls to hessian.
warnflag : int
1: Maximum number of iterations exceeded.
converged : bool
True: converged. False: did not converge.
allvecs : list
Results at each iteration.
"""
# by default we use normal distribution
# can be overwritten by instances or subclasses
use_t = False
def __init__(self, model, params, normalized_cov_params=None, scale=1.,
**kwargs):
super(LikelihoodModelResults, self).__init__(model, params)
self.normalized_cov_params = normalized_cov_params
self.scale = scale
# robust covariance
# We put cov_type in kwargs so subclasses can decide in fit whether to
# use this generic implementation
if 'use_t' in kwargs:
use_t = kwargs['use_t']
if use_t is not None:
self.use_t = use_t
if 'cov_type' in kwargs:
cov_type = kwargs.get('cov_type', 'nonrobust')
cov_kwds = kwargs.get('cov_kwds', {})
if cov_type == 'nonrobust':
self.cov_type = 'nonrobust'
self.cov_kwds = {'description' : 'Standard Errors assume that the ' +
'covariance matrix of the errors is correctly ' +
'specified.'}
else:
from statsmodels.base.covtype import get_robustcov_results
if cov_kwds is None:
cov_kwds = {}
use_t = self.use_t
# TODO: we shouldn't need use_t in get_robustcov_results
get_robustcov_results(self, cov_type=cov_type, use_self=True,
use_t=use_t, **cov_kwds)
def normalized_cov_params(self):
raise NotImplementedError
def _get_robustcov_results(self, cov_type='nonrobust', use_self=True,
use_t=None, **cov_kwds):
from statsmodels.base.covtype import get_robustcov_results
if cov_kwds is None:
cov_kwds = {}
if cov_type == 'nonrobust':
self.cov_type = 'nonrobust'
self.cov_kwds = {'description' : 'Standard Errors assume that the ' +
'covariance matrix of the errors is correctly ' +
'specified.'}
else:
# TODO: we shouldn't need use_t in get_robustcov_results
get_robustcov_results(self, cov_type=cov_type, use_self=True,
use_t=use_t, **cov_kwds)
@cache_readonly
def llf(self):
return self.model.loglike(self.params)
@cache_readonly
def bse(self):
return np.sqrt(np.diag(self.cov_params()))
@cache_readonly
def tvalues(self):
"""
Return the t-statistic for a given parameter estimate.
"""
return self.params / self.bse
@cache_readonly
def pvalues(self):
if self.use_t:
df_resid = getattr(self, 'df_resid_inference', self.df_resid)
return stats.t.sf(np.abs(self.tvalues), df_resid)*2
else:
return stats.norm.sf(np.abs(self.tvalues))*2
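`pvalues` doubles the survival function evaluated at |t|: Student t when `use_t` is set, standard normal otherwise. The normal branch needs nothing beyond the standard library, since sf(z) = erfc(z / sqrt(2)) / 2:

```python
import math

def norm_sf(z):
    """Survival function of the standard normal: P(Z > z)."""
    return 0.5 * math.erfc(z / math.sqrt(2))

def two_sided_pvalue(tvalue):
    # same as stats.norm.sf(np.abs(tvalue)) * 2 in the property above
    return 2 * norm_sf(abs(tvalue))

p = two_sided_pvalue(1.96)  # close to the conventional 0.05 cutoff
```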
def cov_params(self, r_matrix=None, column=None, scale=None, cov_p=None,
other=None):
"""
Returns the variance/covariance matrix.
The variance/covariance matrix can be of a linear contrast
of the estimates of params or all params multiplied by scale which
will usually be an estimate of sigma^2. Scale is assumed to be
a scalar.
Parameters
----------
r_matrix : array-like
Can be 1d, or 2d. Can be used alone or with other.
column : array-like, optional
Must be used on its own. Can be 0d or 1d see below.
scale : float, optional
Can be specified or not. Default is None, which means that
the scale argument is taken from the model.
other : array-like, optional
Can be used when r_matrix is specified.
Returns
-------
cov : ndarray
covariance matrix of the parameter estimates or of linear
combination of parameter estimates. See Notes.
Notes
-----
(The below are assumed to be in matrix notation.)
If no argument is specified returns the covariance matrix of a model
``(scale)*(X.T X)^(-1)``
If contrast is specified it pre and post-multiplies as follows
``(scale) * r_matrix (X.T X)^(-1) r_matrix.T``
If contrast and other are specified returns
``(scale) * r_matrix (X.T X)^(-1) other.T``
If column is specified returns
``(scale) * (X.T X)^(-1)[column,column]`` if column is 0d
OR
``(scale) * (X.T X)^(-1)[column][:,column]`` if column is 1d
"""
if (hasattr(self, 'mle_settings') and
self.mle_settings['optimizer'] in ['l1', 'l1_cvxopt_cp']):
dot_fun = nan_dot
else:
dot_fun = np.dot
if (cov_p is None and self.normalized_cov_params is None and
not hasattr(self, 'cov_params_default')):
raise ValueError('need covariance of parameters for computing '
'(unnormalized) covariances')
if column is not None and (r_matrix is not None or other is not None):
raise ValueError('Column should be specified without other '
'arguments.')
if other is not None and r_matrix is None:
raise ValueError('other can only be specified with r_matrix')
if cov_p is None:
if hasattr(self, 'cov_params_default'):
cov_p = self.cov_params_default
else:
if scale is None:
scale = self.scale
cov_p = self.normalized_cov_params * scale
if column is not None:
column = np.asarray(column)
if column.shape == ():
return cov_p[column, column]
else:
#return cov_p[column][:, column]
return cov_p[column[:, None], column]
elif r_matrix is not None:
r_matrix = np.asarray(r_matrix)
if r_matrix.shape == ():
raise ValueError("r_matrix should be 1d or 2d")
if other is None:
other = r_matrix
else:
other = np.asarray(other)
tmp = dot_fun(r_matrix, dot_fun(cov_p, np.transpose(other)))
return tmp
else: # if r_matrix is None and column is None:
return cov_p
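The contrast algebra in the Notes section above, sketched standalone with made-up numbers (not statsmodels code): given a parameter covariance V and a contrast matrix R, the covariance of R b is R V R'.

```python
import numpy as np

# hypothetical scale-adjusted parameter covariance V and a contrast R
V = np.array([[2.0, 0.5],
              [0.5, 1.0]])
R = np.array([[1.0, -1.0]])   # contrast: difference of the two parameters

# (scale) * r_matrix (X.T X)^(-1) r_matrix.T
cov_contrast = R @ V @ R.T
# Var(b1 - b2) = V11 + V22 - 2*V12
```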
#TODO: make sure this works as needed for GLMs
def t_test(self, r_matrix, cov_p=None, scale=None,
use_t=None):
"""
Compute a t-test for each linear hypothesis of the form Rb = q
Parameters
----------
r_matrix : array-like, str, tuple
- array : If an array is given, a p x k 2d array or length k 1d
array specifying the linear restrictions. It is assumed
that the linear combination is equal to zero.
- str : The full hypotheses to test can be given as a string.
See the examples.
- tuple : A tuple of arrays in the form (R, q). If q is given,
can be either a scalar or a length p row vector.
cov_p : array-like, optional
An alternative estimate for the parameter covariance matrix.
If None is given, self.normalized_cov_params is used.
scale : float, optional
An optional `scale` to use. Default is the scale specified
by the model fit.
use_t : bool, optional
If use_t is None, then the default of the model is used.
If use_t is True, then the p-values are based on the t
distribution.
If use_t is False, then the p-values are based on the normal
distribution.
Returns
-------
res : ContrastResults instance
The results for the test are attributes of this results instance.
The available results have the same elements as the parameter table
in `summary()`.
Examples
--------
>>> import numpy as np
>>> import statsmodels.api as sm
>>> data = sm.datasets.longley.load()
>>> data.exog = sm.add_constant(data.exog)
>>> results = sm.OLS(data.endog, data.exog).fit()
>>> r = np.zeros_like(results.params)
>>> r[5:] = [1,-1]
>>> print(r)
[ 0. 0. 0. 0. 0. 1. -1.]
r tests that the coefficients on the 5th and 6th independent
variables are the same.
>>> T_test = results.t_test(r)
>>> print(T_test)
<T contrast: effect=-1829.2025687192481, sd=455.39079425193762,
t=-4.0167754636411717, p=0.0015163772380899498, df_denom=9>
>>> T_test.effect
-1829.2025687192481
>>> T_test.sd
455.39079425193762
>>> T_test.tvalue
-4.0167754636411717
>>> T_test.pvalue
0.0015163772380899498
Alternatively, you can specify the hypothesis tests using a string
>>> from statsmodels.formula.api import ols
>>> dta = sm.datasets.longley.load_pandas().data
>>> formula = 'TOTEMP ~ GNPDEFL + GNP + UNEMP + ARMED + POP + YEAR'
>>> results = ols(formula, dta).fit()
>>> hypotheses = 'GNPDEFL = GNP, UNEMP = 2, YEAR/1829 = 1'
>>> t_test = results.t_test(hypotheses)
>>> print(t_test)
See Also
---------
tvalues : individual t statistics
f_test : for F tests
patsy.DesignInfo.linear_constraint
"""
from patsy import DesignInfo
names = self.model.data.param_names
LC = DesignInfo(names).linear_constraint(r_matrix)
r_matrix, q_matrix = LC.coefs, LC.constants
num_ttests = r_matrix.shape[0]
num_params = r_matrix.shape[1]
if (cov_p is None and self.normalized_cov_params is None and
not hasattr(self, 'cov_params_default')):
raise ValueError('Need covariance of parameters for computing '
'T statistics')
if num_params != self.params.shape[0]:
raise ValueError('r_matrix and params are not aligned')
if q_matrix is None:
q_matrix = np.zeros(num_ttests)
else:
q_matrix = np.asarray(q_matrix)
q_matrix = q_matrix.squeeze()
if q_matrix.size > 1:
if q_matrix.shape[0] != num_ttests:
raise ValueError("r_matrix and q_matrix must have the same "
"number of rows")
if use_t is None:
#switch to use_t false if undefined
use_t = (hasattr(self, 'use_t') and self.use_t)
_t = _sd = None
_effect = np.dot(r_matrix, self.params)
# nan_dot multiplies with the convention nan * 0 = 0
# Perform the test
if num_ttests > 1:
_sd = np.sqrt(np.diag(self.cov_params(
r_matrix=r_matrix, cov_p=cov_p)))
else:
_sd = np.sqrt(self.cov_params(r_matrix=r_matrix, cov_p=cov_p))
_t = (_effect - q_matrix) * recipr(_sd)
df_resid = getattr(self, 'df_resid_inference', self.df_resid)
if use_t:
return ContrastResults(effect=_effect, t=_t, sd=_sd,
df_denom=df_resid)
else:
return ContrastResults(effect=_effect, statistic=_t, sd=_sd,
df_denom=df_resid,
distribution='norm')
def f_test(self, r_matrix, cov_p=None, scale=1.0, invcov=None):
"""
Compute the F-test for a joint linear hypothesis.
This is a special case of `wald_test` that always uses the F
distribution.
Parameters
----------
r_matrix : array-like, str, or tuple
- array : An r x k array where r is the number of restrictions to
test and k is the number of regressors. It is assumed
that the linear combination is equal to zero.
- str : The full hypotheses to test can be given as a string.
See the examples.
- tuple : A tuple of arrays in the form (R, q), ``q`` can be
either a scalar or a length k row vector.
cov_p : array-like, optional
An alternative estimate for the parameter covariance matrix.
If None is given, self.normalized_cov_params is used.
scale : float, optional
Default is 1.0 for no scaling.
invcov : array-like, optional
A q x q array to specify an inverse covariance matrix based on a
restrictions matrix.
Returns
-------
res : ContrastResults instance
The results for the test are attributes of this results instance.
Examples
--------
>>> import numpy as np
>>> import statsmodels.api as sm
>>> data = sm.datasets.longley.load()
>>> data.exog = sm.add_constant(data.exog)
>>> results = sm.OLS(data.endog, data.exog).fit()
>>> A = np.identity(len(results.params))
>>> A = A[1:,:]
This tests that each coefficient is jointly statistically
significantly different from zero.
>>> print(results.f_test(A))
<F contrast: F=330.28533923463488, p=4.98403052872e-10,
df_denom=9, df_num=6>
Compare this to
>>> results.fvalue
330.2853392346658
>>> results.f_pvalue
4.98403096572e-10
>>> B = np.array(([0,0,1,-1,0,0,0],[0,0,0,0,0,1,-1]))
This tests that the coefficient on the 2nd and 3rd regressors are
equal and jointly that the coefficient on the 5th and 6th regressors
are equal.
>>> print(results.f_test(B))
<F contrast: F=9.740461873303655, p=0.00560528853174, df_denom=9,
df_num=2>
Alternatively, you can specify the hypothesis tests using a string
>>> from statsmodels.datasets import longley
>>> from statsmodels.formula.api import ols
>>> dta = longley.load_pandas().data
>>> formula = 'TOTEMP ~ GNPDEFL + GNP + UNEMP + ARMED + POP + YEAR'
>>> results = ols(formula, dta).fit()
>>> hypotheses = '(GNPDEFL = GNP), (UNEMP = 2), (YEAR/1829 = 1)'
>>> f_test = results.f_test(hypotheses)
>>> print(f_test)
See Also
--------
statsmodels.stats.contrast.ContrastResults
wald_test
t_test
patsy.DesignInfo.linear_constraint
Notes
-----
The matrix `r_matrix` is assumed to be non-singular. More precisely,
r_matrix (pX pX.T) r_matrix.T
is assumed invertible. Here, pX is the generalized inverse of the
design matrix of the model. There can be problems in non-OLS models
where the rank of the covariance of the noise is not full.
"""
res = self.wald_test(r_matrix, cov_p=cov_p, scale=scale,
invcov=invcov, use_f=True)
return res
#TODO: untested for GLMs?
def wald_test(self, r_matrix, cov_p=None, scale=1.0, invcov=None,
use_f=None):
"""
Compute a Wald-test for a joint linear hypothesis.
Parameters
----------
r_matrix : array-like, str, or tuple
- array : An r x k array where r is the number of restrictions to
test and k is the number of regressors. It is assumed that the
linear combination is equal to zero.
- str : The full hypotheses to test can be given as a string.
See the examples.
- tuple : A tuple of arrays in the form (R, q), ``q`` can be
either a scalar or a length p row vector.
cov_p : array-like, optional
An alternative estimate for the parameter covariance matrix.
If None is given, self.normalized_cov_params is used.
scale : float, optional
Default is 1.0 for no scaling.
invcov : array-like, optional
A q x q array to specify an inverse covariance matrix based on a
restrictions matrix.
use_f : bool
If True, then the F-distribution is used. If False, then the
asymptotic distribution, chisquare is used. If use_f is None, then
the F distribution is used if the model specifies that use_t is True.
The test statistic is proportionally adjusted for the distribution
by the number of constraints in the hypothesis.
Returns
-------
res : ContrastResults instance
The results for the test are attributes of this results instance.
See also
--------
statsmodels.stats.contrast.ContrastResults
f_test
t_test
patsy.DesignInfo.linear_constraint
Notes
-----
The matrix `r_matrix` is assumed to be non-singular. More precisely,
r_matrix (pX pX.T) r_matrix.T
is assumed invertible. Here, pX is the generalized inverse of the
design matrix of the model. There can be problems in non-OLS models
where the rank of the covariance of the noise is not full.
"""
if use_f is None:
#use the F distribution if the model's default is t-based inference
use_f = (hasattr(self, 'use_t') and self.use_t)
from patsy import DesignInfo
names = self.model.data.param_names
LC = DesignInfo(names).linear_constraint(r_matrix)
r_matrix, q_matrix = LC.coefs, LC.constants
if (self.normalized_cov_params is None and cov_p is None and
invcov is None and not hasattr(self, 'cov_params_default')):
raise ValueError('need covariance of parameters for computing '
'F statistics')
cparams = np.dot(r_matrix, self.params[:, None])
J = float(r_matrix.shape[0]) # number of restrictions
if q_matrix is None:
q_matrix = np.zeros(J)
else:
q_matrix = np.asarray(q_matrix)
if q_matrix.ndim == 1:
q_matrix = q_matrix[:, None]
if q_matrix.shape[0] != J:
raise ValueError("r_matrix and q_matrix must have the same "
"number of rows")
Rbq = cparams - q_matrix
if invcov is None:
cov_p = self.cov_params(r_matrix=r_matrix, cov_p=cov_p)
if np.isnan(cov_p).max():
raise ValueError("r_matrix performs f_test using "
"dimensions that are asymptotically "
"non-normal")
invcov = np.linalg.inv(cov_p)
if (hasattr(self, 'mle_settings') and
self.mle_settings['optimizer'] in ['l1', 'l1_cvxopt_cp']):
F = nan_dot(nan_dot(Rbq.T, invcov), Rbq)
else:
F = np.dot(np.dot(Rbq.T, invcov), Rbq)
df_resid = getattr(self, 'df_resid_inference', self.df_resid)
if use_f:
F /= J
return ContrastResults(F=F, df_denom=df_resid,
df_num=invcov.shape[0])
else:
return ContrastResults(chi2=F, df_denom=J, statistic=F,
distribution='chi2', distargs=(J,))
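The quadratic form computed above (the asymptotic `use_f=False` branch) can be sketched standalone with made-up estimates, not statsmodels code:

```python
import numpy as np
from scipy import stats

# hypothetical estimates; test H0: b2 - b3 = 0
params = np.array([1.0, 2.0, 3.0])
cov_p = np.diag([0.1, 0.2, 0.3])
R = np.array([[0.0, 1.0, -1.0]])
q = np.zeros((1, 1))

Rbq = R @ params[:, None] - q                        # restriction residual R b - q
chi2 = (Rbq.T @ np.linalg.inv(R @ cov_p @ R.T) @ Rbq).item()
pvalue = stats.chi2.sf(chi2, df=R.shape[0])          # chi-square with J restrictions
```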
def wald_test_terms(self, skip_single=False, extra_constraints=None,
combine_terms=None):
"""
Compute a sequence of Wald tests for terms over multiple columns
This computes joined Wald tests for the hypothesis that all
coefficients corresponding to a `term` are zero.
`Terms` are defined by the underlying formula or by string matching.
Parameters
----------
skip_single : boolean
If true, then terms that consist only of a single column and,
therefore, refer only to a single parameter are skipped.
If false, then all terms are included.
extra_constraints : ndarray
not tested yet
combine_terms : None or list of strings
Each string in this list is matched to the name of the terms or
the name of the exogenous variables. All columns whose name
includes that string are combined in one joint test.
Returns
-------
test_result : result instance
The result instance contains `table` which is a pandas DataFrame
with the test results: test statistic, degrees of freedom and
pvalues.
Examples
--------
>>> res_ols = ols("np.log(Days+1) ~ C(Duration, Sum)*C(Weight, Sum)",
data).fit()
>>> res_ols.wald_test_terms()
<class 'statsmodels.stats.contrast.WaldTestResults'>
F P>F df constraint df denom
Intercept 279.754525 2.37985521351e-22 1 51
C(Duration, Sum) 5.367071 0.0245738436636 1 51
C(Weight, Sum) 12.432445 3.99943118767e-05 2 51
C(Duration, Sum):C(Weight, Sum) 0.176002 0.83912310946 2 51
>>> res_poi = Poisson.from_formula("Days ~ C(Weight) * C(Duration)",
data).fit(cov_type='HC0')
>>> wt = res_poi.wald_test_terms(skip_single=False,
combine_terms=['Duration', 'Weight'])
>>> print(wt)
chi2 P>chi2 df constraint
Intercept 15.695625 7.43960374424e-05 1
C(Weight) 16.132616 0.000313940174705 2
C(Duration) 1.009147 0.315107378931 1
C(Weight):C(Duration) 0.216694 0.897315972824 2
Duration 11.187849 0.010752286833 3
Weight 30.263368 4.32586407145e-06 4
"""
# lazy import
from collections import defaultdict
result = self
if extra_constraints is None:
extra_constraints = []
if combine_terms is None:
combine_terms = []
design_info = getattr(result.model.data.orig_exog, 'design_info', None)
if design_info is None and not extra_constraints:
raise ValueError('no constraints, nothing to do')
identity = np.eye(len(result.params))
constraints = []
combined = defaultdict(list)
if design_info is not None:
for term in design_info.terms:
cols = design_info.slice(term)
name = term.name()
constraint_matrix = identity[cols]
# check if in combined
for cname in combine_terms:
if cname in name:
combined[cname].append(constraint_matrix)
k_constraint = constraint_matrix.shape[0]
if skip_single:
if k_constraint == 1:
continue
constraints.append((name, constraint_matrix))
combined_constraints = []
for cname in combine_terms:
combined_constraints.append((cname, np.vstack(combined[cname])))
else:
# check by exog/params names if there is no formula info
for col, name in enumerate(result.model.exog_names):
constraint_matrix = identity[col]
# check if in combined
for cname in combine_terms:
if cname in name:
combined[cname].append(constraint_matrix)
if skip_single:
continue
constraints.append((name, constraint_matrix))
combined_constraints = []
for cname in combine_terms:
combined_constraints.append((cname, np.vstack(combined[cname])))
use_t = result.use_t
distribution = ['chi2', 'F'][use_t]
res_wald = []
index = []
for name, constraint in constraints + combined_constraints + extra_constraints:
wt = result.wald_test(constraint)
row = [wt.statistic.item(), wt.pvalue, constraint.shape[0]]
if use_t:
row.append(wt.df_denom)
res_wald.append(row)
index.append(name)
# distribution-neutral names
col_names = ['statistic', 'pvalue', 'df_constraint']
if use_t:
col_names.append('df_denom')
# TODO: maybe move DataFrame creation to results class
from pandas import DataFrame
table = DataFrame(res_wald, index=index, columns=col_names)
res = WaldTestResults(None, distribution, None, table=table)
# TODO: remove temp again, added for testing
res.temp = constraints + combined_constraints + extra_constraints
return res
def conf_int(self, alpha=.05, cols=None, method='default'):
"""
Returns the confidence interval of the fitted parameters.
Parameters
----------
alpha : float, optional
The significance level for the confidence interval.
ie., The default `alpha` = .05 returns a 95% confidence interval.
cols : array-like, optional
`cols` specifies which confidence intervals to return
method : string
Not Implemented Yet
Method to estimate the confidence_interval.
"Default" : uses self.bse which is based on inverse Hessian for MLE
"hjjh" :
"jac" :
"boot-bse"
"boot_quant"
"profile"
Returns
--------
conf_int : array
Each row contains [lower, upper] limits of the confidence interval
for the corresponding parameter. The first column contains all
lower, the second column contains all upper limits.
Examples
--------
>>> import statsmodels.api as sm
>>> data = sm.datasets.longley.load()
>>> data.exog = sm.add_constant(data.exog)
>>> results = sm.OLS(data.endog, data.exog).fit()
>>> results.conf_int()
array([[-5496529.48322745, -1467987.78596704],
[ -177.02903529, 207.15277984],
[ -0.1115811 , 0.03994274],
[ -3.12506664, -0.91539297],
[ -1.5179487 , -0.54850503],
[ -0.56251721, 0.460309 ],
[ 798.7875153 , 2859.51541392]])
>>> results.conf_int(cols=(2,3))
array([[-0.1115811 , 0.03994274],
[-3.12506664, -0.91539297]])
Notes
-----
The confidence interval is based on the standard normal distribution.
Models wishing to use a different distribution should override this
method.
"""
bse = self.bse
if self.use_t:
dist = stats.t
df_resid = getattr(self, 'df_resid_inference', self.df_resid)
q = dist.ppf(1 - alpha / 2, df_resid)
else:
dist = stats.norm
q = dist.ppf(1 - alpha / 2)
if cols is None:
lower = self.params - q * bse
upper = self.params + q * bse
else:
cols = np.asarray(cols)
lower = self.params[cols] - q * bse[cols]
upper = self.params[cols] + q * bse[cols]
return np.asarray(lzip(lower, upper))
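The interval construction above, as a standalone sketch with made-up estimates (not statsmodels code):

```python
import numpy as np
from scipy import stats

# hypothetical parameter estimates and standard errors
params = np.array([1.5, -0.3])
bse = np.array([0.2, 0.1])
alpha, df_resid = 0.05, 30

q = stats.t.ppf(1 - alpha / 2, df_resid)        # critical value (use_t branch)
conf_int = np.column_stack([params - q * bse,   # lower limits
                            params + q * bse])  # upper limits
```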
def save(self, fname, remove_data=False):
'''
save a pickle of this instance
Parameters
----------
fname : string or filehandle
fname can be a string to a file path or filename, or a filehandle.
remove_data : bool
If False (default), then the instance is pickled without changes.
If True, then all arrays with length nobs are set to None before
pickling. See the remove_data method.
In some cases not all arrays will be set to None.
Notes
-----
If remove_data is true and the model result does not implement a
remove_data method then this will raise an exception.
'''
from statsmodels.iolib.smpickle import save_pickle
if remove_data:
self.remove_data()
save_pickle(self, fname)
@classmethod
def load(cls, fname):
'''
load a pickle, (class method)
Parameters
----------
fname : string or filehandle
fname can be a string to a file path or filename, or a filehandle.
Returns
-------
unpickled instance
'''
from statsmodels.iolib.smpickle import load_pickle
return load_pickle(fname)
def remove_data(self):
'''remove data arrays, all nobs arrays from result and model
This reduces the size of the instance, so it can be pickled with less
memory. Currently tested for use with predict from an unpickled
results and model instance.
.. warning:: Since data and some intermediate results have been removed
calculating new statistics that require them will raise exceptions.
The exception will occur the first time an attribute is accessed
that has been set to None.
Not fully tested for time series models, tsa, and might delete too much
for prediction or not all that would be possible.
The list of arrays to delete is maintained as an attribute of the
result and model instance, except for cached values. These lists could
be changed before calling remove_data.
'''
def wipe(obj, att):
#get to last element in attribute path
p = att.split('.')
att_ = p.pop(-1)
try:
obj_ = reduce(getattr, [obj] + p)
#print(repr(obj), repr(att))
#print(hasattr(obj_, att_))
if hasattr(obj_, att_):
#print('removing3', att_)
setattr(obj_, att_, None)
except AttributeError:
pass
model_attr = ['model.' + i for i in self.model._data_attr]
for att in self._data_attr + model_attr:
#print('removing', att)
wipe(self, att)
data_in_cache = getattr(self, 'data_in_cache', [])
data_in_cache += ['fittedvalues', 'resid', 'wresid']
for key in data_in_cache:
try:
self._cache[key] = None
except (AttributeError, KeyError):
pass
class LikelihoodResultsWrapper(wrap.ResultsWrapper):
_attrs = {
'params': 'columns',
'bse': 'columns',
'pvalues': 'columns',
'tvalues': 'columns',
'resid': 'rows',
'fittedvalues': 'rows',
'normalized_cov_params': 'cov',
}
_wrap_attrs = _attrs
_wrap_methods = {
'cov_params': 'cov',
'conf_int': 'columns'
}
wrap.populate_wrapper(LikelihoodResultsWrapper,
LikelihoodModelResults)
class ResultMixin(object):
@cache_readonly
def df_modelwc(self):
# collect different ways of defining the number of parameters, used for
# aic, bic
if hasattr(self, 'df_model'):
if hasattr(self, 'hasconst'):
hasconst = self.hasconst
else:
# default assumption
hasconst = 1
return self.df_model + hasconst
else:
return self.params.size
@cache_readonly
def aic(self):
return -2 * self.llf + 2 * (self.df_modelwc)
@cache_readonly
def bic(self):
return -2 * self.llf + np.log(self.nobs) * (self.df_modelwc)
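The aic/bic properties above reduce to simple arithmetic on the log-likelihood; a standalone sketch with made-up values:

```python
import numpy as np

# hypothetical log-likelihood, sample size, and parameter count (df_modelwc)
llf, nobs, df_modelwc = -50.0, 100, 3

aic = -2 * llf + 2 * df_modelwc             # Akaike information criterion
bic = -2 * llf + np.log(nobs) * df_modelwc  # Bayesian information criterion
```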
@cache_readonly
def score_obsv(self):
'''cached Jacobian of log-likelihood
'''
return self.model.score_obs(self.params)
jacv = np.deprecate(score_obsv, 'jacv', 'score_obsv',
"Use score_obsv attribute."
" jacv will be removed in 0.7.")
@cache_readonly
def hessv(self):
'''cached Hessian of log-likelihood
'''
return self.model.hessian(self.params)
@cache_readonly
def covjac(self):
'''
covariance of parameters based on outer product of jacobian of
log-likelihood
'''
## if not hasattr(self, '_results'):
## raise ValueError('need to call fit first')
## #self.fit()
## self.jacv = jacv = self.jac(self._results.params)
jacv = self.score_obsv
return np.linalg.inv(np.dot(jacv.T, jacv))
@cache_readonly
def covjhj(self):
'''covariance of parameters based on HJJH
dot product of Hessian, Jacobian, Jacobian, Hessian of likelihood
name should be covhjh
'''
jacv = self.score_obsv
hessv = self.hessv
hessinv = np.linalg.inv(hessv)
## self.hessinv = hessin = self.cov_params()
return np.dot(hessinv, np.dot(np.dot(jacv.T, jacv), hessinv))
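The HJJH sandwich above, sketched standalone with made-up score and Hessian arrays (not statsmodels code):

```python
import numpy as np

# hypothetical per-observation scores (n_obs x k_params) and Hessian
rng = np.random.default_rng(1)
jacv = rng.normal(size=(50, 2))
hessv = -np.eye(2) * 50.0

hessinv = np.linalg.inv(hessv)
# sandwich: H^{-1} (J'J) H^{-1}
covjhj = hessinv @ (jacv.T @ jacv) @ hessinv
```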
@cache_readonly
def bsejhj(self):
'''standard deviation of parameter estimates based on covHJH
'''
return np.sqrt(np.diag(self.covjhj))
@cache_readonly
def bsejac(self):
'''standard deviation of parameter estimates based on covjac
'''
return np.sqrt(np.diag(self.covjac))
def bootstrap(self, nrep=100, method='nm', disp=0, store=1):
"""simple bootstrap to get mean and variance of estimator
see notes
Parameters
----------
nrep : int
number of bootstrap replications
method : str
optimization method to use
disp : bool
If true, then optimization prints results
store : bool
If true, then parameter estimates for all bootstrap iterations
are attached in self.bootstrap_results
Returns
-------
mean : array
mean of parameter estimates over bootstrap replications
std : array
standard deviation of parameter estimates over bootstrap
replications
Notes
-----
This was mainly written to compare estimators of the standard errors of
the parameter estimates. It uses independent random sampling from the
original endog and exog, and therefore is only correct if observations
are independently distributed.
This will be moved to apply only to models with independently
distributed observations.
"""
results = []
print(self.model.__class__)
hascloneattr = True if hasattr(self, 'cloneattr') else False
for i in range(nrep):
rvsind = np.random.randint(self.nobs, size=self.nobs)
#this needs to set startparam and get other defining attributes
#need a clone method on model
fitmod = self.model.__class__(self.endog[rvsind],
self.exog[rvsind, :])
if hascloneattr:
for attr in self.model.cloneattr:
setattr(fitmod, attr, getattr(self.model, attr))
fitres = fitmod.fit(method=method, disp=disp)
results.append(fitres.params)
results = np.array(results)
if store:
self.bootstrap_results = results
return results.mean(0), results.std(0), results
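The resampling loop above, reduced to a standalone sketch that estimates the sampling variability of a mean (hypothetical data, not statsmodels code):

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=5.0, scale=2.0, size=500)

nrep = 200
estimates = np.empty(nrep)
for i in range(nrep):
    # independent random sampling of observations, with replacement
    idx = rng.integers(0, data.size, size=data.size)
    estimates[i] = data[idx].mean()

boot_mean, boot_std = estimates.mean(), estimates.std()
```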
def get_nlfun(self, fun):
#I think this is supposed to get the delta method that is currently
#in miscmodels count (as part of Poisson example)
pass
class GenericLikelihoodModelResults(LikelihoodModelResults, ResultMixin):
"""
A results class for the discrete dependent variable models.
.. warning::
The following description has not been updated to this version/class.
Where are AIC, BIC, ....? docstring looks like copy from discretemod
Parameters
----------
model : A DiscreteModel instance
mlefit : instance of LikelihoodResults
This contains the numerical optimization results as returned by
LikelihoodModel.fit(), in a superclass of GenericLikelihoodModel
Returns
-------
*Attributes*
Warning most of these are not available yet
aic : float
Akaike information criterion. -2*(`llf` - p) where p is the number
of regressors including the intercept.
bic : float
Bayesian information criterion. -2*`llf` + ln(`nobs`)*p where p is the
number of regressors including the intercept.
bse : array
The standard errors of the coefficients.
df_resid : float
See model definition.
df_model : float
See model definition.
fitted_values : array
Linear predictor XB.
llf : float
Value of the loglikelihood
llnull : float
Value of the constant-only loglikelihood
llr : float
Likelihood ratio chi-squared statistic; -2*(`llnull` - `llf`)
llr_pvalue : float
The chi-squared probability of getting a log-likelihood ratio
statistic greater than llr. llr has a chi-squared distribution
with degrees of freedom `df_model`.
prsquared : float
McFadden's pseudo-R-squared. 1 - (`llf`/`llnull`)
"""
def __init__(self, model, mlefit):
self.model = model
self.endog = model.endog
self.exog = model.exog
self.nobs = model.endog.shape[0]
# TODO: possibly move to model.fit()
# and outsource together with patching names
if hasattr(model, 'df_model'):
self.df_model = model.df_model
else:
self.df_model = len(mlefit.params)
# retrofitting the model, used in t_test TODO: check design
self.model.df_model = self.df_model
if hasattr(model, 'df_resid'):
self.df_resid = model.df_resid
else:
self.df_resid = self.endog.shape[0] - self.df_model
# retrofitting the model, used in t_test TODO: check design
self.model.df_resid = self.df_resid
self._cache = resettable_cache()
self.__dict__.update(mlefit.__dict__)
def summary(self, yname=None, xname=None, title=None, alpha=.05):
"""Summarize the Regression Results
Parameters
-----------
yname : string, optional
Default is `y`
xname : list of strings, optional
Default is `var_##` for ## in 0..p-1, where p is the number of regressors
title : string, optional
Title for the top table. If not None, then this replaces the
default title
alpha : float
significance level for the confidence intervals
Returns
-------
smry : Summary instance
this holds the summary tables and text, which can be printed or
converted to various output formats.
See Also
--------
statsmodels.iolib.summary.Summary : class to hold summary
results
"""
top_left = [('Dep. Variable:', None),
('Model:', None),
('Method:', ['Maximum Likelihood']),
('Date:', None),
('Time:', None),
('No. Observations:', None),
('Df Residuals:', None), # [self.df_resid]),
('Df Model:', None), # [self.df_model])
]
top_right = [ # ('R-squared:', ["%#8.3f" % self.rsquared]),
# ('Adj. R-squared:', ["%#8.3f" % self.rsquared_adj]),
# ('F-statistic:', ["%#8.4g" % self.fvalue] ),
# ('Prob (F-statistic):', ["%#6.3g" % self.f_pvalue]),
('Log-Likelihood:', None), # ["%#6.4g" % self.llf]),
('AIC:', ["%#8.4g" % self.aic]),
('BIC:', ["%#8.4g" % self.bic])
]
if title is None:
title = self.model.__class__.__name__ + ' ' + "Results"
#create summary table instance
from statsmodels.iolib.summary import Summary
smry = Summary()
smry.add_table_2cols(self, gleft=top_left, gright=top_right,
yname=yname, xname=xname, title=title)
smry.add_table_params(self, yname=yname, xname=xname, alpha=alpha,
use_t=False)
return smry