# Zipline Pipeline

### Introduction

On any given trading day, the entire universe of stocks consists of thousands of securities. Usually, you will not be interested in investing in all the stocks in the entire universe, but rather, you will likely select only a subset of these to invest in. For example, you may only want to invest in stocks that have a 10-day average closing price of \$10.00 or less. Or you may only want to invest in the top 500 securities ranked by some factor.

In order to avoid spending a lot of time doing data wrangling to select only the securities you are interested in, people often use **pipelines**. In general, a pipeline is a placeholder for a series of data operations used to filter and rank data according to some factor or factors.

In this notebook, you will learn how to work with the **Zipline Pipeline**. Zipline is an open-source algorithmic trading simulator developed by *Quantopian*. We will learn how to use the Zipline Pipeline to filter stock data according to factors.

### Install Packages

```
conda install -c Quantopian zipline
import sys
!{sys.executable} -m pip install -r requirements.txt
```

# Loading Data with Zipline

Before we build our pipeline with Zipline, we will first see how we can load the stock data we are going to use into Zipline. Zipline uses **Data Bundles** to make it easy to use different data sources. A data bundle is a collection of pricing data, adjustment data, and an asset database. Zipline employs data bundles to preload data used to run backtests and to store data for future runs. Zipline comes with a few data bundles by default, but it also has the ability to ingest new bundles. The first step to using a data bundle is to ingest the data. Zipline's ingestion process will start by downloading the data or by loading data files from your local machine.
It will then pass the data to a set of writer objects that convert the original data to Zipline's internal format (`bcolz` for pricing data, and `SQLite` for split/merger/dividend data), which has been optimized for speed. This new data is written to a standard location that Zipline can find. By default, the new data is written to a subdirectory of `ZIPLINE_ROOT/data/<bundle>`, where `<bundle>` is the name given to the bundle ingested and the subdirectory is named with the current date. This allows Zipline to look at older data and run backtests on older copies of the data. Running a backtest with an old ingestion makes it easier to reproduce backtest results later.

In this notebook, we will be using stock data from **Quotemedia**. In the Udacity Workspace you will find that the stock data from Quotemedia has already been ingested into Zipline. Therefore, in the code below we will use Zipline's `bundles.load()` function to load our previously ingested stock data from Quotemedia. In order to use the `bundles.load()` function we first need to do a couple of things. First, we need to specify the name of the bundle previously ingested. In this case, the name of the Quotemedia data bundle is `eod-quotemedia`:

```
# Specify the bundle name
bundle_name = 'eod-quotemedia'
```

Second, we need to register the data bundle and its ingest function with Zipline, using the `bundles.register()` function. The ingest function is responsible for loading the data into memory and passing it to a set of writer objects provided by Zipline to convert the data to Zipline's internal format. Since the original Quotemedia data was contained in `.csv` files, we will use the `csvdir_equities()` function to generate the ingest function for our Quotemedia data bundle. In addition, since Quotemedia's `.csv` files contained daily stock data, we will set the time frame for our ingest function to `daily`.
```
from zipline.data import bundles
from zipline.data.bundles.csvdir import csvdir_equities

# Create an ingest function
ingest_func = csvdir_equities(['daily'], bundle_name)

# Register the data bundle and its ingest function
bundles.register(bundle_name, ingest_func);
```

Once our data bundle and ingest function are registered, we can load our data using the `bundles.load()` function. Since this function loads our previously ingested data, we need to set `ZIPLINE_ROOT` to the path of the most recent ingested data. The most recent data is located in the `cwd/../../data/project_4_eod/` directory, where `cwd` is the current working directory. We will specify this location using the `os.environ[]` command.

```
import os

# Set environment variable 'ZIPLINE_ROOT' to the path where the most recent data is located
os.environ['ZIPLINE_ROOT'] = os.path.join(os.getcwd(), 'project_4_eod')

# Load the data bundle
bundle_data = bundles.load(bundle_name)
```

# Building an Empty Pipeline

Once we have loaded our data, we can start building our Zipline pipeline. We begin by creating an empty Pipeline object using Zipline's `Pipeline` class. A Pipeline object represents a collection of named expressions to be compiled and executed by a Pipeline Engine. The `Pipeline(columns=None, screen=None)` class takes two optional parameters, `columns` and `screen`. The `columns` parameter is a dictionary used to indicate the initial columns to use, and the `screen` parameter is used to set up a screen to exclude unwanted data.

In the code below we will create a `screen` for our pipeline using Zipline's built-in `.AverageDollarVolume()` class. We will use the `.AverageDollarVolume()` class to produce a 60-day Average Dollar Volume of closing prices for every stock in our universe. We then use the `.top(10)` method to specify that we want to filter down our universe each day to just the top 10 assets. Therefore, this screen will act as a filter to exclude data from our stock universe each day.
The average dollar volume is a good first-pass filter to avoid illiquid assets.

```
from zipline.pipeline import Pipeline
from zipline.pipeline.factors import AverageDollarVolume

# Create a screen for our Pipeline
universe = AverageDollarVolume(window_length = 60).top(10)

# Create an empty Pipeline with the given screen
pipeline = Pipeline(screen = universe)
```

In the code above we have named our Pipeline object `pipeline` so that we can identify it later when we make computations. Remember, a Pipeline is an object that represents computations we would like to perform every day. A freshly-constructed pipeline, like the one we just created, is empty. This means it doesn't yet know how to compute anything, and it won't produce any values if we ask for its outputs. In the sections below, we will see how to provide our Pipeline with expressions to compute.

# Factors and Filters

The `.AverageDollarVolume()` class used above is an example of a factor. In this section we will take a look at two types of computations that can be expressed in a pipeline: **Factors** and **Filters**. In general, factors and filters represent functions that produce a value from an asset at a moment in time, but they are distinguished by the types of values they produce. Let's start by looking at factors.

### Factors

In general, a **Factor** is a function from an asset at a particular moment in time to a numerical value. A simple example of a factor is the most recent price of a security. Given a security and a specific moment in time, the most recent price is a number. Another example is the 10-day average trading volume of a security. Factors are most commonly used to assign values to securities, which can then be combined with filters or other factors. The fact that you can combine multiple factors makes it easy for you to form new custom factors that can be as complex as you like.
For example, constructing a Factor that computes the average of two other Factors can be simply illustrated using the pseudocode below:

```python
f1 = factor1(...)
f2 = factor2(...)
average = (f1 + f2) / 2.0
```

### Filters

In general, a **Filter** is a function from an asset at a particular moment in time to a boolean value (True or False). An example of a filter is a function indicating whether a security's price is below \$5. Given a security and a specific moment in time, this evaluates to either **True** or **False**. Filters are most commonly used for selecting sets of securities to include in or exclude from your stock universe. Filters are usually created by applying comparison operators, such as <, <=, !=, ==, >, >=.

# Viewing the Pipeline as a Diagram

Zipline's Pipeline class comes with the method `.show_graph()` that allows you to render the Pipeline as a Directed Acyclic Graph (DAG). This graph is specified using the DOT language, and consequently we need a DOT graph layout program to view the rendered image. In the code below, we will use the Graphviz package to render the graph produced by the `.show_graph()` method. Graphviz is an open-source package for drawing graphs specified in DOT language scripts.

```
import graphviz

# Render the pipeline as a DAG
pipeline.show_graph()
```

Right now, our pipeline is empty and only contains a screen. Therefore, when we render our `pipeline`, we only see the diagram of our `screen`:

```python
AverageDollarVolume(window_length = 60).top(10)
```

By default, the `.AverageDollarVolume()` class uses the `USEquityPricing` dataset, containing daily trading prices and volumes, to compute the average dollar volume:

```python
average_dollar_volume = np.nansum(close_price * volume, axis=0) / len(close_price)
```

The top of the diagram reflects the fact that the `.AverageDollarVolume()` class gets its inputs (closing price and volume) from the `USEquityPricing` dataset.
The bottom of the diagram shows that the output is determined by the expression `x_0 <= 10`. This expression reflects the fact that we used `.top(10)` as a filter in our `screen`. We refer to each box in the diagram as a Term.

# Datasets and Dataloaders

One of the features of Zipline's Pipeline is that it separates the actual source of the stock data from the abstract description of that dataset. Therefore, Zipline employs **DataSets** and **Loaders** for those datasets. `DataSets` are just abstract collections of sentinel values describing the columns/types for a particular dataset, while a `loader` is an object which, given a request for a particular chunk of a dataset, can actually get the requested data. For example, the loader used for the `USEquityPricing` dataset is the `USEquityPricingLoader` class. The `USEquityPricingLoader` class will delegate the loading of baselines and adjustments to lower-level subsystems that know how to get the pricing data in the default formats used by Zipline (`bcolz` for pricing data, and `SQLite` for split/merger/dividend data). As we saw in the beginning of this notebook, data bundles automatically convert the stock data into `bcolz` and `SQLite` formats. It is important to note that the `USEquityPricingLoader` class can also be used to load daily OHLCV data from other datasets, not just from the `USEquityPricing` dataset. Similarly, it is also possible to write different loaders for the same dataset and use those instead of the default loader. Zipline contains lots of other loaders to allow you to load data from different datasets.

In the code below, we will use `USEquityPricingLoader(BcolzDailyBarReader, SQLiteAdjustmentReader)` to create a loader from a `bcolz` equity pricing directory and a `SQLite` adjustments path. Both the `BcolzDailyBarReader` and the `SQLiteAdjustmentReader` determine the path of the pricing and adjustment data.
Since we will be using the Quotemedia data bundle, we will use `bundle_data.equity_daily_bar_reader` and `bundle_data.adjustment_reader` as our `BcolzDailyBarReader` and `SQLiteAdjustmentReader`, respectively.

```
from zipline.pipeline.loaders import USEquityPricingLoader

# Set the dataloader
pricing_loader = USEquityPricingLoader(bundle_data.equity_daily_bar_reader, bundle_data.adjustment_reader)
```

# Pipeline Engine

Zipline employs computation engines for executing Pipelines. In the code below we will use Zipline's `SimplePipelineEngine()` class as the engine to execute our pipeline. The `SimplePipelineEngine(get_loader, calendar, asset_finder)` class associates the chosen data loader with the corresponding dataset and a trading calendar. The `get_loader` parameter must be a callable function that is given a loadable term and returns a `PipelineLoader` to use to retrieve the raw data for that term in the pipeline. Since we will be using the `pricing_loader` defined above, we create a function called `choose_loader` that returns our `pricing_loader`. The function also checks that the data being requested corresponds to OHLCV data; otherwise it raises an error. The `calendar` parameter must be a `DatetimeIndex` array of dates to consider as trading days when computing a range between a fixed `start_date` and `end_date`. In our case, we will be using the same trading days as those used by the NYSE. We will use Zipline's `get_calendar('NYSE')` function to retrieve the trading days used by the NYSE. We then use the `.all_sessions` attribute to get the `DatetimeIndex` from our `trading_calendar` and pass it to the `calendar` parameter. Finally, the `asset_finder` parameter determines which assets are in the top-level universe of our stock data at any point in time. Since we are using the Quotemedia data bundle, we set this parameter to `bundle_data.asset_finder`.
```
from zipline.utils.calendars import get_calendar
from zipline.pipeline.data import USEquityPricing
from zipline.pipeline.engine import SimplePipelineEngine

# Define the function for the get_loader parameter
def choose_loader(column):
    if column not in USEquityPricing.columns:
        raise Exception('Column not in USEquityPricing')
    return pricing_loader

# Set the trading calendar
trading_calendar = get_calendar('NYSE')

# Create a Pipeline engine
engine = SimplePipelineEngine(get_loader = choose_loader,
                              calendar = trading_calendar.all_sessions,
                              asset_finder = bundle_data.asset_finder)
```

# Running a Pipeline

Once we have chosen our engine we are ready to run, or execute, our pipeline. We can run our pipeline by using the `.run_pipeline()` method of the `SimplePipelineEngine` class. In particular, `SimplePipelineEngine.run_pipeline(pipeline, start_date, end_date)` implements the following algorithm for executing pipelines:

1. Build a dependency graph of all terms in the `pipeline`. In this step, the graph is sorted topologically to determine the order in which we can compute the terms.

2. Ask our AssetFinder for a "lifetimes matrix", which should contain, for each date between `start_date` and `end_date`, a boolean value for each known asset indicating whether the asset existed on that date.

3. Compute each term in the dependency order determined in step 1, caching the results in a dictionary so that they can be fed into future terms.

4. For each date, determine the number of assets passing the `pipeline` screen. The sum, $N$, of all these values is the total number of rows in our output Pandas DataFrame, so we pre-allocate an output array of length $N$ for each factor in terms.

5. Fill in the arrays allocated in step 4 by copying computed values from our output cache into the corresponding rows.

6. Stick the values computed in step 5 into a Pandas DataFrame and return it.
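The dependency-ordering in step 1 can be sketched with Python's standard library. The toy graph below uses illustrative term names for the pipeline we built earlier; it is not Zipline's actual internal representation:

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Toy dependency graph of pipeline terms: the screen depends on an
# AverageDollarVolume factor, which depends on the close and volume
# columns of the pricing dataset. Keys map each term to its inputs.
term_deps = {
    'USEquityPricing.close': set(),
    'USEquityPricing.volume': set(),
    'AverageDollarVolume': {'USEquityPricing.close', 'USEquityPricing.volume'},
    'screen': {'AverageDollarVolume'},
}

# A topological order places every term after all of its inputs, so each
# term can be computed from results already sitting in the cache.
order = list(TopologicalSorter(term_deps).static_order())
print(order)
```

Any valid topological order works; the only guarantee is that each term appears after its inputs, which is exactly what step 3 relies on.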
In the code below, we run our pipeline for a single day, so our `start_date` and `end_date` will be the same. We then print some information about our `pipeline_output`.

```
import pandas as pd

# Set the start and end dates
start_date = pd.Timestamp('2016-01-05', tz = 'utc')
end_date = pd.Timestamp('2016-01-05', tz = 'utc')

# Run our pipeline for the given start and end dates
pipeline_output = engine.run_pipeline(pipeline, start_date, end_date)

# We print information about the pipeline output
print('The pipeline output has type:', type(pipeline_output), '\n')

# We print whether the pipeline output is a MultiIndex DataFrame
print('Is the pipeline output a MultiIndex Dataframe:', isinstance(pipeline_output.index, pd.MultiIndex), '\n')

# If the pipeline output is a MultiIndex DataFrame we print the two levels of the index
if isinstance(pipeline_output.index, pd.MultiIndex):

    # We print the index level 0
    print('Index Level 0:\n\n', pipeline_output.index.get_level_values(0), '\n')

    # We print the index level 1
    print('Index Level 1:\n\n', pipeline_output.index.get_level_values(1), '\n')
```

We can see above that the return value of `.run_pipeline()` is a `MultiIndex` Pandas DataFrame containing a row for each asset that passed our pipeline's screen. We can also see that the 0th level of the index contains the date and the 1st level of the index contains the tickers. In general, the returned Pandas DataFrame will also contain a column for each factor and filter we add to the pipeline using `Pipeline.add()`. At this point we haven't added any factors or filters to our pipeline; consequently, the DataFrame will have no columns. In the following sections we will see how to add factors and filters to our pipeline.

# Get Tickers

We saw in the previous section that the tickers of the stocks that passed our pipeline's screen are contained in the 1st level of the index.
Therefore, we can use the Pandas `.get_level_values(1).values.tolist()` method to get the tickers of those stocks and save them to a list.

```
# Get the values in index level 1 and save them to a list
universe_tickers = pipeline_output.index.get_level_values(1).values.tolist()

# Display the tickers
universe_tickers
```

# Get Data

Now that we have the tickers for the stocks that passed our pipeline's screen, we can get the historical stock data for those tickers from our data bundle. In order to get the historical data we need to use Zipline's `DataPortal` class. A `DataPortal` is an interface to all of the data that a Zipline simulation needs. In the code below, we will create a `DataPortal` and a `get_pricing` function to get historical stock prices for our tickers.

We have already seen most of the parameters used below to create the `DataPortal`, so we won't explain them again here. The only new parameter is `first_trading_day`. The `first_trading_day` parameter is a `pd.Timestamp` indicating the first trading day for the simulation. We will set the first trading day to the first trading day in the data bundle. For more information on the `DataPortal` class see the [Zipline documentation](https://www.zipline.io/appendix.html?highlight=dataportal#zipline.data.data_portal.DataPortal).

```
from zipline.data.data_portal import DataPortal

# Create a data portal
data_portal = DataPortal(bundle_data.asset_finder,
                         trading_calendar = trading_calendar,
                         first_trading_day = bundle_data.equity_daily_bar_reader.first_trading_day,
                         equity_daily_reader = bundle_data.equity_daily_bar_reader,
                         adjustment_reader = bundle_data.adjustment_reader)
```

Now that we have created a `data_portal` we will create a helper function, `get_pricing`, that gets the historical data from the `data_portal` for a given `start_date` and `end_date`.
The `get_pricing` function takes various parameters:

```python
def get_pricing(data_portal, trading_calendar, assets, start_date, end_date, field='close')
```

The first two parameters, `data_portal` and `trading_calendar`, have already been defined above. The third parameter, `assets`, is a list of tickers. In our case we will use the tickers from the output of our pipeline, namely, `universe_tickers`. The fourth and fifth parameters are strings specifying the `start_date` and `end_date`. The function converts these two strings into Timestamps with a Custom Business Day frequency. The last parameter, `field`, is a string used to indicate which field to return. In our case we want to get the closing price, so we set `field='close'`.

The function returns the historical stock price data using the `.get_history_window()` method of the `DataPortal` class. This method returns a Pandas DataFrame containing the requested history window with the data fully adjusted. The `bar_count` parameter is an integer indicating the number of days to return. The number of days determines the number of rows of the returned dataframe. Both the `frequency` and `data_frequency` parameters are strings that indicate the frequency of the data to query, *i.e.* whether the data is in `daily` or `minute` intervals.

```
def get_pricing(data_portal, trading_calendar, assets, start_date, end_date, field='close'):

    # Set the given start and end dates to Timestamps. The frequency string C is used to
    # indicate that a CustomBusinessDay DateOffset is used
    end_dt = pd.Timestamp(end_date, tz='UTC', freq='C')
    start_dt = pd.Timestamp(start_date, tz='UTC', freq='C')

    # Get the locations of the start and end dates
    end_loc = trading_calendar.closes.index.get_loc(end_dt)
    start_loc = trading_calendar.closes.index.get_loc(start_dt)

    # Return the historical data for the given window
    return data_portal.get_history_window(assets=assets,
                                          end_dt=end_dt,
                                          bar_count=end_loc - start_loc,
                                          frequency='1d',
                                          field=field,
                                          data_frequency='daily')

# Get the historical data for the given window
historical_data = get_pricing(data_portal, trading_calendar, universe_tickers,
                              start_date='2011-01-05', end_date='2016-01-05')

# Display the historical data
historical_data
```

# Date Alignment

When the pipeline returns with a date of, e.g., `2016-01-07`, this includes data that would be known as of before the **market open** on `2016-01-07`. As such, if you ask for the latest known values on each day, it will return the closing price from the day before and label it with the date `2016-01-07`. All factor values are assumed to be computed prior to the open on the labeled day, with data known before that point in time.

# Adding Factors and Filters

Now that you know how to build a pipeline and execute it, in this section we will see how we can add factors and filters to our pipeline. These factors and filters will determine the computations we want our pipeline to compute each day. We can add both factors and filters to our pipeline using the `.add(column, name)` method of the `Pipeline` class. The `column` parameter represents the factor or filter to add to the pipeline. The `name` parameter is a string that determines the name of the column in the output Pandas DataFrame for that factor or filter. As mentioned earlier, each factor and filter will appear as a column in the output dataframe of our pipeline. Let's start by adding a factor to our pipeline.
### Factors

In the code below, we will use Zipline's built-in `SimpleMovingAverage` factor to create a factor that computes the 15-day mean closing price of securities. We will then add this factor to our pipeline and use `.show_graph()` to see a diagram of our pipeline with the factor added.

```
from zipline.pipeline.factors import SimpleMovingAverage

# Create a factor that computes the 15-day mean closing price of securities
mean_close_15 = SimpleMovingAverage(inputs = [USEquityPricing.close], window_length = 15)

# Add the factor to our pipeline
pipeline.add(mean_close_15, '15 Day MCP')

# Render the pipeline as a DAG
pipeline.show_graph()
```

In the diagram above we can clearly see the factor we have added. Now, we can run our pipeline again and see its output. The pipeline is run in exactly the same way as before.

```
# Set starting and end dates
start_date = pd.Timestamp('2014-01-06', tz='utc')
end_date = pd.Timestamp('2016-01-05', tz='utc')

# Run our pipeline for the given start and end dates
output = engine.run_pipeline(pipeline, start_date, end_date)

# Display the pipeline output
output.head()
```

We can see that now our output dataframe contains a column with the name `15 Day MCP`, which is the name we gave to our factor before. This output dataframe from our pipeline gives us the 15-day mean closing price of the securities that passed our `screen`.

### Filters

Filters are created and added to the pipeline in the same way as factors. In the code below, we create a filter that returns `True` whenever the 15-day average closing price is above \$100. Remember, a filter produces a `True` or `False` value for each security every day. We will then add this filter to our pipeline and use `.show_graph()` to see a diagram of our pipeline with the filter added.
```
# Create a Filter that returns True whenever the 15-day average closing price is above $100
high_mean = mean_close_15 > 100

# Add the filter to our pipeline
pipeline.add(high_mean, 'High Mean')

# Render the pipeline as a DAG
pipeline.show_graph()
```

In the diagram above we can clearly see the filter we have added. Now, we can run our pipeline again and see its output. The pipeline is run in exactly the same way as before.

```
# Set starting and end dates
start_date = pd.Timestamp('2014-01-06', tz='utc')
end_date = pd.Timestamp('2016-01-05', tz='utc')

# Run our pipeline for the given start and end dates
output = engine.run_pipeline(pipeline, start_date, end_date)

# Display the pipeline output
output.head()
```

We can see that now our output dataframe contains two columns, one for the filter and one for the factor. The new column has the name `High Mean`, which is the name we gave to our filter before. Notice that the filter column only contains Boolean values, where only the securities with a 15-day average closing price above \$100 have `True` values.
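To build intuition for what the `AverageDollarVolume(window_length = 60).top(10)` screen computes, the same arithmetic can be reproduced on toy data with plain pandas. This is only an illustration with made-up prices and volumes, not Zipline's implementation:

```python
import numpy as np
import pandas as pd

# Made-up daily close prices and volumes for four tickers over five days
close = pd.DataFrame({'A': [10., 11, 12, 11, 10],
                      'B': [20., 21, 19, 20, 22],
                      'C': [5., 5, 6, 5, 5],
                      'D': [50., 49, 51, 50, 52]})
volume = pd.DataFrame({'A': [1000, 1100, 900, 1000, 1050],
                       'B': [500, 450, 550, 500, 480],
                       'C': [3000, 2800, 3100, 2900, 3000],
                       'D': [200, 210, 190, 205, 200]})

# Average dollar volume over the window, mirroring the formula shown earlier:
# np.nansum(close_price * volume, axis=0) / len(close_price)
adv = pd.Series(np.nansum(close * volume, axis=0) / len(close), index=close.columns)

# A .top(2)-style screen: keep only the 2 assets with the highest ADV
screen = adv.nlargest(2).index.tolist()
print(screen)  # -> ['C', 'A']
```

Note how a cheap, heavily-traded stock (`C`) can out-rank an expensive, thinly-traded one (`D`): dollar volume measures liquidity, not price.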
**0. Code for Colab Debugging**

```
from google.colab import drive
drive.mount('/content/gdrive')
%cd /content/gdrive/My Drive/lxmert/src/

!pip install transformers

import torch
print(torch.cuda.is_available())
```

**1. Import packages & Set basic configs**

```
# Base packages
import logging
import math
import os
from dataclasses import dataclass, field
from glob import glob
from typing import Optional

from torch.utils.data import ConcatDataset

# Own implementation
from utils.parameters import parser
from utils.dataset import get_dataset
from utils.data_collator import NodeMasking_DataCollator, NodeClassification_DataCollator, LiteralRegression_DataCollator
from model import LxmertForPreTraining, LxmertForKGTokPredAndMaskedLM

# From Huggingface transformers package
from transformers import (
    CONFIG_MAPPING,
    MODEL_WITH_LM_HEAD_MAPPING,
    LxmertConfig,
    LxmertTokenizer,
    PreTrainedTokenizer,
    HfArgumentParser,
    TrainingArguments,
    Trainer,
    set_seed,
)

train_args = TrainingArguments(output_dir='test',
                               do_train=True,
                               do_eval=False,
                               local_rank=-1,
                               per_device_train_batch_size=4,
                               learning_rate=1e-3,
                               num_train_epochs=1)

import easydict

PATH = '/content/gdrive/My Drive/lxmert/'
args = easydict.EasyDict({
    "model_type": "lxmert",
    "model_name_or_path": None,
    "cache_dir": None,
    "config_name": PATH + "config/config.json",
    "tokenizer_name": "bert-base-uncased",
    "train_data_file": PATH + "data/masked_literal_prediction/train",
    "train_data_files": None,
    "eval_data_file": PATH + "data/masked_literal_prediction/valid",
    "output_dir": PATH + "pretrained_models/test",
    "mlm": True,
    "mlm_probability": 0.15,
    "block_size": 512,
})

logger = logging.getLogger(__name__)

MODEL_CONFIG_CLASSES = list(MODEL_WITH_LM_HEAD_MAPPING.keys())
MODEL_TYPES = tuple(conf.model_type for conf in MODEL_CONFIG_CLASSES)

#args, args, args = parser.parse_args_into_dataclasses()

if args.eval_data_file is None and args.do_eval:
    raise ValueError(
        "Cannot do evaluation without an evaluation data file. Either supply a file to --eval_data_file "
        "or remove the --do_eval argument."
    )
if (
    os.path.exists(args.output_dir)
    and os.listdir(args.output_dir)
    and args.do_train
    and not args.overwrite_output_dir
):
    raise ValueError(
        f"Output directory ({args.output_dir}) already exists and is not empty. Use --overwrite_output_dir to overcome."
    )

# Setup logging
logging.basicConfig(
    format="%(asctime)s - %(message)s",
    datefmt="%m/%d %H:%M",
    level=logging.INFO if train_args.local_rank in [-1, 0] else logging.WARN,
)
logger.warning(
    "Process rank: %s, device: %s, n_gpu: %s, distributed training: %s, 16-bits training: %s",
    train_args.local_rank,
    train_args.device,
    train_args.n_gpu,
    bool(train_args.local_rank != -1),
    train_args.fp16,
)
logger.info("Training/evaluation parameters %s", args)

# Set seed
set_seed(train_args.seed)
```

**2. Load model configuration**

```
if args.config_name:
    config = LxmertConfig.from_pretrained(args.config_name, cache_dir=args.cache_dir)
elif args.model_name_or_path:
    config = LxmertConfig.from_pretrained(args.model_name_or_path, cache_dir=args.cache_dir)
else:
    config = CONFIG_MAPPING[args.model_type]()
    logger.warning("You are instantiating a new config instance from scratch.")
```

**3. Define tokenizer (or load pretrained one)**

```
if args.tokenizer_name:
    tokenizer = LxmertTokenizer.from_pretrained(args.tokenizer_name, cache_dir=args.cache_dir)
elif args.model_name_or_path:
    tokenizer = LxmertTokenizer.from_pretrained(args.model_name_or_path, cache_dir=args.cache_dir)
else:
    raise ValueError(
        "You are instantiating a new tokenizer from scratch. This is not supported, but you can do it from another script, save it, "
        "and load it from here, using --tokenizer_name"
    )
```

**4. Define model (or load pretrained one)**

```
if args.model_name_or_path:
    model = LxmertForKGTokPredAndMaskedLM.from_pretrained(
        args.model_name_or_path,
        from_tf=bool(".ckpt" in args.model_name_or_path),
        config=config,
        cache_dir=args.cache_dir,
    )
else:
    logger.info("Training new model from scratch")
    model = LxmertForKGTokPredAndMaskedLM(config)

if config.model_type in ["bert", "roberta", "distilbert", "camembert"] and not args.mlm:
    raise ValueError(
        "BERT and RoBERTa-like models do not have LM heads but masked LM heads. They must be run using the "
        "--mlm flag (masked language modeling)."
    )
```

**5. Build dataset & data loader**

```
if args.block_size <= 0:
    args.block_size = tokenizer.max_len  # Our input block size will be the max possible for the model
else:
    args.block_size = min(args.block_size, tokenizer.max_len)

# Get datasets
train_dataset = (
    get_dataset(args, tokenizer=tokenizer, kg_pad=config.kg_special_token_ids["PAD"]) if train_args.do_train else None
)
eval_dataset = (
    get_dataset(args, tokenizer=tokenizer, kg_pad=config.kg_special_token_ids["PAD"], evaluate=True)
    if train_args.do_eval
    else None
)
data_collator = NodeClassification_DataCollator(tokenizer=tokenizer,
                                                kg_special_token_ids=config.kg_special_token_ids,
                                                kg_size=config.vocab_size['kg'])
```

**6. Initialize trainer & Run training**

> Use Huggingface [trainer.py](https://github.com/huggingface/transformers/blob/master/src/transformers/trainer.py)

```
# Initialize our Trainer
print(train_args)
print(data_collator)
print(train_dataset)
trainer = Trainer(
    model=model,
    args=train_args,
    data_collator=data_collator,
    train_dataset=train_dataset,
    prediction_loss_only=True,
)

# Training
if train_args.do_train:
    model_path = (
        args.model_name_or_path
        if args.model_name_or_path is not None and os.path.isdir(args.model_name_or_path)
        else None
    )
    trainer.train(model_path=model_path)
    trainer.save_model()
    # For convenience, we also re-save the tokenizer to the same directory,
    # so that you can share your model easily on huggingface.co/models =)
    if trainer.is_world_master():
        tokenizer.save_pretrained(args.output_dir)
```
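The `mlm_probability=0.15` setting above controls how aggressively input tokens are corrupted for the masked-LM objective. A minimal stdlib sketch of that corruption step is shown below; the real Huggingface data collators (and the custom `DataCollator` classes used here) are more involved, since they also substitute random tokens and leave some selected tokens unchanged:

```python
import random

def mask_tokens(token_ids, mask_token_id, mlm_probability=0.15, seed=0):
    """Toy masked-LM corruption: each token is independently selected with
    probability mlm_probability; selected positions are replaced by the
    mask token and their original ids are kept as prediction labels."""
    rng = random.Random(seed)
    inputs, labels = [], []
    for tid in token_ids:
        if rng.random() < mlm_probability:
            inputs.append(mask_token_id)
            labels.append(tid)       # predict the original token here
        else:
            inputs.append(tid)
            labels.append(-100)      # -100 is ignored by the loss
    return inputs, labels

# 103 is BERT's [MASK] id; the token ids here are illustrative only
inputs, labels = mask_tokens(list(range(20)), mask_token_id=103)
print(sum(i == 103 for i in inputs), 'positions masked')
```

With roughly 15% of positions masked, the model gets a dense training signal while most of each sequence still provides context.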
```
%load_ext cypher
import json
import random

geotweets = %cypher match (n:tweet) where n.coordinates is not null return n.tid, n.lang, n.country, n.name, n.coordinates, n.created_at
geotweets = geotweets.get_dataframe()
geotweets.head()

json.loads(geotweets.ix[1]["n.coordinates"])[0][0]

def get_random_coords(df):
    lats = []
    lons = []
    for row in df.iterrows():
        row = row[1]
        coords = json.loads(row["n.coordinates"])[0]
        lat1 = coords[0][0]
        lat2 = coords[2][0]
        lon1 = coords[0][1]
        lon2 = coords[1][1]
        ran_lat = random.uniform(lat1, lat2)
        ran_lon = random.uniform(lon1, lon2)
        lats.append(ran_lat)
        lons.append(ran_lon)
    df["lat"] = lats
    df["lon"] = lons
    return df

df = get_random_coords(geotweets)
geotweets.columns = ["Id", "Lang", "Country", "City", "Coords", "Time", "Lon", "Lat"]
geotweets["Label"] = "tweet"
geotweets.head()
geotweets.to_csv("data/geotweets.csv")

edges_query = """match (t:tweet)-[:USES]->(h:hashtag)
where t.coordinates is not null
with h.tagid as hashtag, t.tid as tweet
return hashtag, tweet
"""

geotweet_edges = %cypher match (t:tweet)-[:USES]->(h:hashtag) where t.coordinates is not null with h.tagid as hashtag, t.tid as tweet return tweet, hashtag
geotweet_edges = geotweet_edges.get_dataframe()
geotweet_edges.head()
geotweet_edges.columns = ["Source", "Target"]
geotweet_edges.to_csv("data/geoedges.csv")

geoedges_nohash = %cypher match (t:tweet)--(n:tweet) where t.coordinates is not null and n.coordinates is not null return t.tid as Source, n.tid as Target
geoedges_nohash = geoedges_nohash.get_dataframe()
len(geoedges_nohash)
geoedges_nohash.to_csv("data/geoedges_nohash.csv")

geohash = %cypher match (t:tweet)-[r:USES]->(h:hashtag) where t.coordinates is not null with distinct h.tagid as Id, h.hashtag as Label, count(r) as deg return Id, Label order by deg desc limit 10
geohash = geohash.get_dataframe()
geohash.head()

labels = geohash["Label"].map(lambda x: "#" + x)
geohash["Label"] = labels
geohash.head()
geohash.to_csv("data/geotags.csv")

edges = %cypher match (t:tweet)-[:USES]-(h:hashtag {hashtag: "paris"}) where t.coordinates is not null return h.hashtag, collect(t.tid)

import itertools
import networkx as nx

edges = edges.get_dataframe()
edges["collect(t.tid)"] = edges["collect(t.tid)"].map(lambda x: list(itertools.combinations(x, 2)))
edges.head()

el = list(itertools.chain.from_iterable(edges["collect(t.tid)"]))
len(el)
el[1]
g = nx.Graph(el)
len(geotweet_edges)
```
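The last cell builds a hashtag co-occurrence graph: each hashtag's list of tweet ids is expanded into tweet–tweet pairs with `itertools.combinations`, and the flattened pair list becomes the edge list of a `networkx` graph. A minimal sketch of that expansion step, using made-up tweet ids instead of a Neo4j result:

```python
import itertools

# Toy stand-in for the Cypher result above: one row per hashtag,
# holding the list of tweet ids that use it.
tweets_by_hashtag = {
    "paris":  [101, 102, 103],
    "louvre": [102, 104],
}

# Every pair of tweets that share a hashtag becomes an edge, mirroring
# edges["collect(t.tid)"].map(lambda x: list(itertools.combinations(x, 2))).
pair_lists = [list(itertools.combinations(tids, 2)) for tids in tweets_by_hashtag.values()]
el = list(itertools.chain.from_iterable(pair_lists))
print(el)  # [(101, 102), (101, 103), (102, 103), (102, 104)]
```

The resulting `el` can be passed straight to `nx.Graph(el)`, as the cell does; duplicate pairs contributed by different hashtags collapse into a single edge there.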
# Network waterfall generation

```
import pandas as pd
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
%matplotlib inline
from math import sqrt
import re, bisect
from colorama import Fore
```

## Select input file and experiment ID (~10 experiments per file)

- ./startup : Application startup
- ./startup_and_click : Application startup + click (single user interaction)
- ./multiclick : Application startup + clicks (multiple user interactions)

### (*critical flows* and performance metrics available for *startup* and *startup_and_click* datasets)

```
##example (YouTube)
FNAME = "./startup/com.google.android.youtube_bursts.txt"
EXPID = 1
```

## Load experiment data and plot waterfall

```
##load experiment data
d_exps = load_experiments(FNAME)
df = d_exps[EXPID]
print_head(FNAME)

##plot waterfall
plot_waterfall(df, fvolume=None, title=FNAME, fname_png="output_waterfall.png")
```

## A small library for plotting waterfalls, based on matplotlib

```
def load_experiments(fname):
    df = pd.read_csv(fname, sep=' ', low_memory=False)
    ## split the single file in multiple dataframes based on experiment id
    d = {}
    for expid in df['expId'].unique():
        df_tmp = df[df['expId'] == expid].copy()
        df_tmp = df_tmp.sort_values(by='t_start')
        cat = pd.Categorical(df_tmp['flow'], ordered=False)
        cat = cat.reorder_categories(df_tmp['flow'].unique())
        df_tmp.loc[:, 'flowid'] = cat.codes
        d[expid] = df_tmp
    return d

def _get_reference_times(df):
    tdt = df['TDT'].values[0]
    aft = df['AFT'].values[0]
    x_max = 0.5 + max(df['t_end'].max(), aft, tdt)
    return {'tdt': tdt, 'aft': aft, 'x_max': x_max}

def _get_max_time(df):
    x_max = 0.5 + df['t_end'].max()
    return {'x_max': x_max}

def _get_lines_burst(df, x_lim=None):
    lines_burst = []
    lines_burst_widths = []
    for flowid, x_start, x_end, burst_bytes in df[['flowid', 't_start', 't_end', 'KB']].values:
        if x_lim is None:
            lines_burst.append([(x_start, flowid), (x_end, flowid)])
            width = min(13, 2*sqrt(burst_bytes))
            width = max(1, width)
            lines_burst_widths.append(width)
        else:
            el = [(x_lim[0], flowid), (x_lim[1], flowid)]
            if el not in lines_burst:
                lines_burst.append(el)
    return lines_burst, lines_burst_widths

def _plot_aft_tdt_reference(ax, tdt, aft, no_legend=False):
    tdt_label = "TDT = " + str(tdt)[0:5]
    aft_label = "AFT = " + str(aft)[0:5]
    if no_legend:
        tdt_label = None
        aft_label = None
    ax.axvline(x=tdt, color="green", label=tdt_label, linewidth=2)
    ax.axvline(x=aft, color="purple", label=aft_label, linewidth=2)
    lgd = ax.legend(bbox_to_anchor=[1, 1])

def _plot_bursts(ax, df, lines_flow, lines_burst=None, lines_burst_critical=None,
                 flow_kwargs={}, burst_kwargs={}, burst_critical_kwargs={}, title=None):
    ## flow lines
    ax.add_collection(mpl.collections.LineCollection(lines_flow, **flow_kwargs))
    ## burst lines
    if lines_burst is not None:
        ax.add_collection(mpl.collections.LineCollection(lines_burst, **burst_kwargs))
    if lines_burst_critical is not None:
        ax.add_collection(mpl.collections.LineCollection(lines_burst_critical, **burst_critical_kwargs))
    if 'AFT' in df and 'TDT' in df:
        d_times = _get_reference_times(df)
        ## vertical reference lines
        _plot_aft_tdt_reference(ax, tdt=d_times['tdt'], aft=d_times['aft'])
    else:
        d_times = _get_max_time(df)
    ## axis lim
    x_max = d_times['x_max']
    y_max = len(lines_flow) + 1
    ax.set_ylim((-1, y_max))
    ax.set_xlim((0, x_max))
    chess_lines = [[(0, y), (x_max, y)] for y in range(0, y_max, 2)]
    ax.add_collection(mpl.collections.LineCollection(chess_lines, linewidths=10, color='gray', alpha=0.1))
    ## ticks
    ax.yaxis.set_major_locator(mpl.ticker.MultipleLocator(1))
    ax.tick_params(axis='y', length=0)
    ## y-labels (clipping the long ones)
    labels = df[['flow', 'flowid']].sort_values(by='flowid').drop_duplicates()['flow'].values
    ax.set_yticklabels(['', ''] + list(labels))
    ## remove borders
    ax.spines['top'].set_visible(False)
    ax.spines['right'].set_visible(False)
    ax.spines['bottom'].set_visible(False)
    ax.spines['left'].set_visible(False)
    ## grid
    ax.grid(axis='x', alpha=0.3)
    ax.legend().remove()

def _plot_volume(ax, df, title=None, fvolume=None):
    ## get times
    if 'AFT' in df and 'TDT' in df:
        d_times = _get_reference_times(df)
    else:
        d_times = _get_max_time(df)
    if fvolume != None:
        x = []
        y = []
        for line in open(fvolume):
            x.append(float(line[0:-1].split(' ')[0]))
            y.append(float(line[0:-1].split(' ')[1]))
        ax.step(x, y, color='gray', where='post', label='')
    else:
        # get volume cumulate
        df_tmp = df.copy()
        df_tmp = df_tmp.sort_values(by='t_end')
        df_tmp.loc[:, 'KB_cdf'] = df_tmp['KB'].cumsum() / df_tmp['KB'].sum()
        ax.step(x=df_tmp['t_end'], y=df_tmp['KB_cdf'], color='gray', where='post', label='')
    ## remove border
    ax.spines['top'].set_visible(False)
    ax.spines['right'].set_visible(False)
    ax.spines['bottom'].set_visible(False)
    ax.spines['left'].set_visible(False)
    if 'AFT' in df and 'TDT' in df:
        _plot_aft_tdt_reference(ax, tdt=d_times['tdt'], aft=d_times['aft'], no_legend=False)
    ax.tick_params(labeltop=True, labelbottom=False, length=0.1, axis='x', direction='out')
    ax.set_xlim((0, d_times['x_max']))
    ## grid
    ax.grid(axis='x', alpha=0.3)
    ax.yaxis.set_major_locator(mpl.ticker.MultipleLocator(0.5))
    ax.set_ylabel('CDF Volume')
    ## title
    if title is not None:
        ax.set_title(title, pad=20)

def print_head(fname):
    print("TDT = Transport Delivery Time\nAFT = Above-the-Fold Time")
    if 'multiclick' not in fname:
        print(Fore.RED + 'Critical flows')
        print(Fore.BLUE + 'Non-Critical flows')

def plot_waterfall(df, fvolume=None, title=None, fname_png=None):
    ## first start and end of each flow
    df_tmp = df.groupby('flowid').agg({'t_start': 'min', 't_end': 'max'})
    ## ..and create lines
    lines_flow = [[(x_start, y), (x_end, y)]
                  for y, (x_start, x_end) in zip(df_tmp.index, df_tmp.values)]
    ## lines for each burst
    lines_burst, lines_burst_widths = _get_lines_burst(df[df['KB'] > 0])
    ## lines for each critical burst (if any info on critical domains in the input file)
    if 'critical' in df:
        lines_burst_critical, lines_burst_widths_critical = \
            _get_lines_burst(df[(df['critical']) & (df['KB'] > 0)])
    else:
        lines_burst_critical, lines_burst_widths_critical = [], []

    ######################
    fig_height = max(5, 0.25*len(lines_flow))
    fig = plt.figure(figsize=(8, fig_height))
    gs = mpl.gridspec.GridSpec(nrows=2, ncols=1, hspace=0.1, height_ratios=[1, 3])
    ax0 = plt.subplot(gs[0])
    ax1 = plt.subplot(gs[1])
    _plot_volume(ax0, df, title, fvolume)
    _plot_bursts(ax1, df, lines_flow, lines_burst, lines_burst_critical,
                 flow_kwargs={'linewidths': 2, 'color': 'gray',
                              'linestyle': (0, (1, 1)), 'alpha': 0.7},
                 burst_kwargs={'linewidths': lines_burst_widths, 'color': 'blue'},
                 burst_critical_kwargs={'linewidths': lines_burst_widths_critical, 'color': 'red'})
    ## add click timestamps (if any)
    if 'clicks' in df:
        for click_t in df['clicks'].values[0][1:-1].split(', '):
            if float(click_t) < 40 and float(click_t) > 35:
                continue
            plt.axvline(x=float(click_t), color="grey", linestyle="--")
    if fname_png is not None:
        plt.savefig(fname_png, bbox_inches='tight')
```
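At its core, `plot_waterfall` reduces the burst table to one horizontal segment per flow (first start to last end) and hands those segments to matplotlib's `LineCollection`. That geometry step can be sketched without pandas or matplotlib, using toy burst tuples in place of the input file:

```python
# Toy bursts: (flowid, t_start, t_end), a stand-in for the rows of df.
bursts = [(0, 0.0, 1.2), (0, 2.0, 2.5), (1, 0.5, 3.0)]

# First start and last end per flow, as plot_waterfall's
# df.groupby('flowid').agg({'t_start': 'min', 't_end': 'max'}) computes.
span = {}
for flowid, t0, t1 in bursts:
    lo, hi = span.get(flowid, (t0, t1))
    span[flowid] = (min(lo, t0), max(hi, t1))

# One horizontal segment per flow, in the [(x_start, y), (x_end, y)]
# shape that LineCollection expects (y is the flow's row in the waterfall).
lines_flow = [[(lo, fid), (hi, fid)] for fid, (lo, hi) in sorted(span.items())]
print(lines_flow)  # [[(0.0, 0), (2.5, 0)], [(0.5, 1), (3.0, 1)]]
```

Each inner list is one dotted flow line in the waterfall; the thicker blue/red burst segments are built the same way, one segment per burst instead of per flow.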
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error
import seaborn as sns
import statsmodels.api as sm
%matplotlib inline
```

# 1. BUSINESS UNDERSTANDING

In this project, I use the Seattle Airbnb dataset and focus on answering three business questions using exploratory data analysis, data visualization, and a machine learning algorithm.

1 - How are properties distributed among neighborhood groups? This helps stakeholders understand what percentage of the properties is located in each neighborhood of Seattle.

2 - How many times is each room type reviewed? What is the average review score rating? Is there a relationship between room type price, number of reviews, and review score rating? This will show whether there is any correlation between the variables.

3 - Implement a linear regression model, a standard ML algorithm, to forecast price. This helps stakeholders predict price from the independent predictors.

# 2. DATA UNDERSTANDING

```
#In this project, I will be using the Seattle Airbnb dataset
#Load Seattle listing data
df_listings = pd.read_csv('listings.csv')

# Check the structure of the data after it's loaded.
df_listings.shape

#Check for anomalies among the numerical columns. license does not have any data, so it can be dropped
df_listings.describe()

#Check the data type of each column and whether any string column needs to be converted to float.
#host_response_rate and price, which are used in the following analysis, should be float
df_listings.dtypes
```

# 3. DATA PREPARATION: CLEANING THE DATASET

```
#Redundant columns are dropped. They will not be used in the following data analysis process.
df_listings.drop(['security_deposit','weekly_price','summary','square_feet','monthly_price','space','scrape_id','notes',
                  'neighborhood_overview','transit','last_scraped','experiences_offered','thumbnail_url','medium_url',
                  'picture_url','xl_picture_url','host_id','host_thumbnail_url','host_picture_url','host_has_profile_pic',
                  'host_identity_verified','license','cleaning_fee'], axis=1, inplace=True)

#There was a formatting issue in these two columns. Convert the string columns to float values
df_listings['host_response_rate'] = df_listings['host_response_rate'].apply(lambda x: float(x.strip('%'))/100 if pd.notna(x) else None)
df_listings['price'] = df_listings['price'].replace(r'[\$,]', '', regex=True).astype(float)

nan_cols = (df_listings.isnull().sum()/df_listings.shape[0]).sort_values(ascending=False)
ax = nan_cols.hist()
ax.set_xlabel("% NaN")
ax.set_ylabel("Column Count")
nan_cols.head(n=10)
#df_listings.dropna(axis=0, how='any', inplace=True)

def clean_data_replace(df):
    '''
    INPUT:
    df - pandas dataframe (df_listings)

    OUTPUT:
    the columns of df that still contain missing values (empty if every column was filled)
    '''
    #Mean imputation is used here to keep the same mean and the same sample size.
    for col in df:
        dt = df[col].dtype
        if dt == int or dt == float:
            df[col].fillna(df[col].mean(), inplace=True)
        else:
            df[col].fillna(df[col].mode()[0], inplace=True)
    return df.columns[df.isnull().sum() > 0]

clean_data_replace(df_listings)
```

# 4. EXPLORATORY DATA ANALYSIS

# Q1: How are properties distributed among neighborhood groups?

In this question, we want to figure out how the properties are distributed among the neighborhoods, which tells us which neighborhood has the most properties.
```
nb_seattle = (df_listings['neighbourhood_cleansed'].value_counts()).sort_values().reset_index()
nb_seattle

nb_seattle_percent = (df_listings['neighbourhood_cleansed'].value_counts()/df_listings.shape[0]).sort_values(ascending=False).reset_index().head()
nb_seattle_percent

plt.figure(figsize=(20, 8));
x = nb_seattle_percent['index'];
y = nb_seattle_percent['neighbourhood_cleansed'];
nb_seattle_percent.plot(x='index', y='neighbourhood_cleansed', kind='bar', figsize=(20, 10), color='green');
plt.plot(x, y);
plt.xlabel('Neighborhood', fontsize=15);
plt.ylabel('Number of properties', fontsize=15);
plt.tick_params(axis='x', labelsize=15);
plt.tick_params(axis='y', labelsize=15);
```

As can easily be seen in the graph below, Broadway has the most Airbnb properties in Seattle. There is a huge supply in Broadway relative to the other neighborhoods.

```
nb_seattle.plot(x='index', y='neighbourhood_cleansed', kind='bar', figsize=(25, 10), color='salmon')
plt.plot(nb_seattle['index'], nb_seattle['neighbourhood_cleansed']);
plt.xlabel('Neighborhood', fontsize=12)
plt.ylabel('Number of properties', fontsize=12);
plt.tick_params(axis='x', labelsize=12)
plt.tick_params(axis='y', labelsize=12)
```

# Q2: How many times is each room type reviewed? What is the average review score rating? Is there a relationship between room type price, number of reviews, and review score rating?

We will try to identify the most preferred room types based on the number of reviews and the review score rating. Another important point is whether there is any relationship between room type price and review ratings, i.e. whether price and reviews are correlated.

```
#Here we select the specific columns that we will use in our exploratory analysis.
df_Seatle_price_dist = df_listings[['room_type','property_type','neighbourhood_group_cleansed','neighbourhood_cleansed',
                                    'price','number_of_reviews','review_scores_rating']]
df_Seatle_price_dist.sort_values(by=['review_scores_rating','number_of_reviews'])

#The most reviewed room type is entire home/apt
df_Seatle_price_dist.groupby(['room_type'])['number_of_reviews'].sum().reset_index()

#The scores of entire home/apt and private room are almost the same. In general, the scores of all room types are close.
df_Seatle_price_dist.groupby(['room_type'])['review_scores_rating'].mean().reset_index()
```

The most reviewed neighborhood groups are Other neighborhoods, Downtown, and Capitol Hill. In these neighborhoods, entire home/apt receives a high number of reviews.

```
df_Seatle_price_dist.groupby(['neighbourhood_group_cleansed','room_type'])['number_of_reviews'].sum().sort_values(ascending=False).reset_index().head(5)

sns.catplot(x='neighbourhood_group_cleansed', y='number_of_reviews', hue='room_type', data=df_Seatle_price_dist,
            kind='swarm', height=6, aspect=2);
plt.xticks(rotation=90);
plt.xlabel('Neighborhood Group', fontsize=12);
plt.ylabel('Number of Reviews', fontsize=12);
```

Let's take a look at whether there is any correlation between price, the number of reviews, and review_scores_rating in the chart below. It looks like there is an inverse correlation between price and the number of reviews, and a direct relationship between price and review score rating, but we can't say it is strong.

```
df_Seatle_price_dist.corr()
sns.heatmap(df_Seatle_price_dist.corr(), cmap="YlGnBu", annot=True)
sns.pairplot(df_Seatle_price_dist, x_vars=['number_of_reviews','review_scores_rating'], y_vars=['price'], hue='room_type', size=4);
```

The dataframe below shows how prices are distributed across room types in the different neighborhood groups. As seen in the graph, Magnolia is the most expensive neighborhood when an entire home/apt is rented.
```
df_Seatle_price_dist.groupby(['neighbourhood_group_cleansed','room_type'])['price'].mean().sort_values(ascending=False).reset_index().head()

plt.figure(figsize=(20, 10))
sns.barplot(x="neighbourhood_group_cleansed", y="price", hue="room_type", data=df_Seatle_price_dist)
plt.xticks(rotation=90, fontsize=14);
plt.xlabel('Neighborhood Group', fontsize=15);
plt.ylabel('Price', fontsize=15);
plt.show()
```

# 5. DATA MODELING AND EVALUATION

# Q3: Implement a linear regression model to forecast price based on the selected variables.

We build an ML model to forecast how price responds to different circumstances. To proceed, we select a set of columns as independent variables; the dependent variable is price. The independent variables are listed below.

SELECTED INDEPENDENT VARIABLES: host_response_time, host_response_rate, property_type, room_type, accommodates, bathrooms, bedrooms, beds, bed_type, minimum_nights, maximum_nights, availability_365, number_of_reviews, review_scores_rating, review_scores_accuracy, review_scores_cleanliness, review_scores_checkin, review_scores_communication, review_scores_location, review_scores_value, instant_bookable, cancellation_policy, require_guest_profile_picture, require_guest_phone_verification, calculated_host_listings_count, reviews_per_month, neighbourhood_cleansed

DEPENDENT VARIABLE: price

```
#Useful columns for the ML application are selected
df_selected_vars = df_listings[['host_response_time','host_response_rate','property_type','room_type','accommodates','bathrooms',
                                'bedrooms','beds','bed_type','minimum_nights','maximum_nights','availability_365','number_of_reviews',
                                'review_scores_rating','review_scores_accuracy','review_scores_cleanliness','neighbourhood_cleansed',
                                'review_scores_checkin','review_scores_communication','review_scores_location','review_scores_value','instant_bookable',
'cancellation_policy','require_guest_profile_picture','require_guest_phone_verification',
                                'calculated_host_listings_count','reviews_per_month','price']]
df_selected_vars.head()
```

In this part of the modeling, we create dummy variables. Right after that, the dummy variables are merged with the numerical columns.

```
#CREATE DUMMY VARIABLES
cat_df = df_selected_vars.select_dtypes(include=['object'])
cat_df_copy = cat_df.copy()
cat_cols_lst = cat_df.columns

def create_dummy_df(df, cat_cols):
    '''
    INPUT:
    df - pandas dataframe with categorical variables you want to dummy
    cat_cols - list of strings that are associated with names of the categorical columns

    OUTPUT:
    df - a new dataframe that has the following characteristics:
        1. contains all columns that were not specified as categorical
        2. removes all the original columns in cat_cols
        3. dummy columns for each of the categorical columns in cat_cols
        4. uses a prefix of the column name with an underscore (_) for separating
    '''
    for var in cat_cols:
        try:
            df = pd.concat([df.drop(var, axis=1),
                            pd.get_dummies(df[var], prefix=var, prefix_sep='_', drop_first=True)], axis=1)
        except:
            continue
    return df

#Pull a list of the column names of the categorical variables
cat_df = df_selected_vars.select_dtypes(include=['object'])
cat_cols_lst = cat_df.columns
df_new = create_dummy_df(df_selected_vars, cat_cols_lst)  #Use the newly created function

# Show a header of df_new to check
df_new.head()

def fit_linear_mod(df, response_col, test_size=.3, rand_state=42):
    '''
    INPUT:
    df - a dataframe holding all the variables of interest
    response_col - a string holding the name of the response column
    test_size - a float between [0,1] giving the proportion of data held out for the test dataset
    rand_state - an int that is provided as the random state for splitting the data into training and test

    OUTPUT:
    test_score - float - r2 score on the test data
    train_score - float - r2 score on the training data
    lm_model - model object from sklearn
    X_train, X_test, y_train, y_test - output from
    sklearn train_test_split used for the optimal model
    '''
    #Split your data into an X matrix and a response vector y
    X = df.drop(response_col, axis=1)
    y = df[response_col]

    #Create training and test sets of data
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=test_size, random_state=rand_state)

    #Instantiate a Linear Regression model with normalized data
    lm_model = LinearRegression(normalize=True)

    #Fit the model to the training data
    lm_model.fit(X_train, y_train)

    #Predict the response for the training data and the test data
    y_test_preds = lm_model.predict(X_test)
    y_train_preds = lm_model.predict(X_train)

    #Obtain an r-squared value for both the training and test data
    test_score = r2_score(y_test, y_test_preds)
    train_score = r2_score(y_train, y_train_preds)

    return test_score, train_score, lm_model, X_train, X_test, y_train, y_test, y_test_preds, y_train_preds

#Test the function with the above dataset
test_score, train_score, lm_model, X_train, X_test, y_train, y_test, y_test_preds, y_train_preds = fit_linear_mod(df_new, 'price')
```

Our linear regression model explains around 60% of the variation in price in the training set, and around 60% of the variation in price in the test set.

```
#Print the training and testing scores. R-squared measures the strength of the relationship between the model and the dependent variable.
#Our model explains about 60% of the variation in our observations.
print("The rsquared on the training data was {}. The rsquared on the test data was {}.".format(train_score, test_score))
lm_model.intercept_

#plotting y_test vs. y_test_preds
plt.figure(figsize=(10, 5))
sns.regplot(y_test, y_test_preds)
plt.xlabel('actual_price')
plt.ylabel('predicted_price')

plt.figure(figsize=(10, 5))
sns.scatterplot(y_test, y_test_preds)
```

We also look at the p-values and coefficients of the model. If the p-value of an independent variable is less than 0.05, there is a statistically significant relationship between that X variable and y.
For example, since the p-values of 'accommodates', 'bathrooms', 'bedrooms', and 'reviews_per_month' are less than 0.05, we can say that changes in these independent variables are associated with changes in price. Such a variable is statistically significant and probably a worthwhile addition to the regression model. Otherwise, it is accepted that there is no significant relationship between the X variable and y. The sign of a regression coefficient tells you whether there is a positive or negative correlation between each independent variable and the dependent variable. For example, while 'accommodates', 'bathrooms', 'bedrooms', and 'beds' have a positive correlation with price, 'number_of_reviews' and 'review_scores_checkin' have a negative correlation.

```
import statsmodels.api as sm

X_train_Sm = sm.add_constant(X_train)
ls = sm.OLS(y_train, X_train_Sm).fit()
print(ls.summary())
```
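The coefficients that `sm.OLS` reports are the ordinary least-squares solution, and the R² printed above is one minus the ratio of residual to total sum of squares. Both can be reproduced with plain NumPy on toy data (the feature names and coefficient values below are made up for illustration); with a noiseless linear relation, OLS recovers the coefficients and R² is numerically 1:

```python
import numpy as np

# Toy design matrix with an intercept column, mimicking sm.add_constant.
rng = np.random.default_rng(0)
n = 50
X = np.column_stack([
    np.ones(n),                    # constant term
    rng.uniform(1.0, 5.0, n),      # an "accommodates"-like feature
    rng.integers(1, 4, n),         # a "bedrooms"-like feature
])
beta_true = np.array([20.0, 15.0, 30.0])
y = X @ beta_true                  # noiseless, so OLS recovers beta

# Ordinary least squares: minimize ||y - X beta||^2.
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

# R^2 = 1 - SS_res / SS_tot, the same score r2_score reports.
resid = y - X @ beta_hat
r2 = 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
print(np.round(beta_hat, 6), round(r2, 6))
```

With noisy real data like the listings, the recovered coefficients differ from any "true" values and R² drops below 1, which is exactly the ~60% figure reported above.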
# Graded workshop on data extraction, transformation, and visualization using IPython

**Juan David Velásquez Henao**
jdvelasq@unal.edu.co
Universidad Nacional de Colombia, Sede Medellín
Facultad de Minas
Medellín, Colombia

# Instructions

The 'Taller' folder of the 'ETVL-IPython' repository contains the files 'Precio_Bolsa_Nacional_($kwh)_'*'.xls' in Microsoft Excel format, which hold the historical hourly electricity prices for the Colombian electricity market between 1995 and 2017, in COL-PESOS/kWh. Using the information supplied, solve the following questions with the Python programming language.

# Questions

**1.--** Read the files and build a single table by concatenating the information for each year. You must reshape the table so that it ends up with the columns `Fecha`, `Hora`, and `Precio` (three columns only). Print the head of the table using `head()`.

```
import os, pandas, numpy, matplotlib, matplotlib.pyplot

direccion = "C:/Users/Asus/Downloads/ETVL-IPython-master (2)/ETVL-IPython-master/Taller"
listaArchivos = []
listaInicial = os.walk(direccion)

# THIS PART LISTS THE EXCEL FILES CONTAINED IN THE SPECIFIED FOLDER. TEMPORARY FILES AND/OR
# FILES FOUND IN SUBFOLDERS ARE DISCARDED.
for Dire, CarpetasDentroDire, ArchivosDentroDire in listaInicial: if Dire == direccion: for nombres in ArchivosDentroDire: # print('Nombres: %s' % nombres) (nombreArchivo, extArchivo) = os.path.splitext(nombres) if nombreArchivo[0]=='~': # print('No Almacenado') pass elif(extArchivo == ".xlsx"): listaArchivos.append(nombreArchivo+extArchivo) # print('Almacenado') elif(extArchivo == ".xls"): listaArchivos.append(nombreArchivo+extArchivo) # print('Almacenado') else: # print('No Almacenado') pass else: # print('Carpeta Excluida') pass print('Archivos a cargador:') print(pandas.Series(listaArchivos).values) # ESTA PARTE CARGA LOS ARCHIVOS Y CARGA LOS DATOS EN UNA SOLA TABLA. NO SE CARGAN DATOS DE LAS HOJAS CUYOS NOMBRES SEAN "Hoja XX" dataF=pandas.DataFrame() for Archivo in listaArchivos: # print('Archivo=',Archivo) xl = pandas.ExcelFile(Archivo) hojas=xl.sheet_names skip=-1 for hj in hojas: if hj[0:4] == 'Hoja': print( hj, 'del archivo', Archivo, 'descartada') else: dataini= xl.parse(hj) for i in range(0,len(dataini)): datamod= xl.parse(hj,skiprows=i) encabezado=datamod.columns if (encabezado[0])=='Fecha': skip=i break if skip>0: break datax=xl.parse(hj,skiprows=skip,parse_cols=24) dataF=dataF.append(datax,ignore_index=True) # print(dataF.shape) dataF_sin_NA=dataF.dropna() dataF_sin_dupli=dataF.drop_duplicates() datos_limpios=dataF_sin_NA.drop_duplicates().reset_index(drop=True) print('El tamaño tabla cargada es de', dataF.shape,'y tiene la siguiente forma:') (dataF.head(3)) #ESTA PARTE DEPURA Y REORGANIZA LOS DATOS lista_horasDia=['00:00:00','01:00:00', '02:00:00', '03:00:00', '04:00:00', '05:00:00' , '06:00:00', '07:00:00', '08:00:00', '09:00:00', '10:00:00', '11:00:00' ,'12:00:00', '13:00:00', '14:00:00', '15:00:00', '16:00:00', '17:00:00' , '18:00:00', '19:00:00', '20:00:00', '21:00:00', '22:00:00', '23:00:00'] lista_horas=['0','1', '2', '3', '4', '5', '6', '7', '8', '9', '10', '11','12', '13', '14', '15', '16', '17', '18', '19', '20', '21', '22', '23'] 
fechas=pandas.to_datetime(datos_limpios.Fecha) for iorg in range(len(datos_limpios)): asd1=datos_limpios.ix[iorg,lista_horas].T.values asd2=pandas.DataFrame(asd1) temp1=datos_limpios.ix[iorg,'Fecha'] temp2=[temp1]*24 temp3=pandas.DataFrame(temp2) horatemp1=pandas.DataFrame(lista_horasDia) if iorg == 0: dataxorg=asd2 tempp=temp3 horatemp=horatemp1 else: datawass=[dataxorg,asd2] temppwas=[tempp,temp3] horatempwas=[horatemp,horatemp1] dataxorg = pandas.concat(datawass, ignore_index = True) tempp=pandas.concat(temppwas, ignore_index = True) horatemp=pandas.concat(horatempwas,ignore_index = True) hoora=[] preec=[] añomes1=[] fechatotal=[] solofecha=[] for tg in range(len(tempp)): hoora.append(str(horatemp.ix[tg,0])[0:8]) preec.append(str(dataxorg.ix[tg,0])[0:100]) añomes1.append(str(tempp.ix[tg,0])[0:7]+'-01') solofecha.append(str(tempp.ix[tg,0])[0:10]) fechatotal.append(str(tempp.ix[tg,0])[0:10]+' '+str(horatemp.ix[tg,0])[0:8]) solofecha=pandas.to_datetime(pandas.Series((solofecha))) fechatotal=pandas.to_datetime(pandas.Series((fechatotal))) df2 = pandas.DataFrame({'Fecha': pandas.Series(solofecha),'Hora': pandas.Series(hoora), 'Precio': preec}) df2['Precio'] = df2['Precio'].convert_objects(convert_numeric=True) df4 = pandas.DataFrame({'Año': pandas.Series(fechatotal.dt.year),'Mes': pandas.Series(fechatotal.dt.month), 'Dia calendario': pandas.Series(fechatotal.dt.day), 'Dia': pandas.Series(fechatotal.dt.weekday_name), 'Hora': pandas.Series(fechatotal.dt.hour),'Precio': preec, 'Fecha Completa': fechatotal}) df4['Precio'] = df4['Precio'].convert_objects(convert_numeric=True) añomes2=pandas.Series(añomes1) añomes2=pandas.DataFrame(añomes2) añomes=añomes2.drop_duplicates().reset_index(drop=True) añomes3=[] for tg in range(len(añomes)): añomes3.append(str(añomes.ix[tg,0])) añosymeses=pandas.to_datetime(pandas.Series((añomes3))) dias=fechas print('El tamaño tabla depurada es de', df2.shape,'y tiene la siguiente forma:') df2.head(3) ``` **2.--** Compute e imprima el número de 
registros con datos faltantes.

```
print('Registros iniciales con Datos Faltantes =', len(dataF)-len(dataF_sin_NA))
```

**3.--** Compute and print the number of duplicate records.

```
print('Registros iniciales Duplicados =', len(dataF)-len(dataF_sin_dupli))
```

**4.--** Remove the records with duplicate or missing data, and print the number of records that remain (complete records).

```
print('Registros iniciales completos =', len(datos_limpios))
```

**5.--** Compute and plot the average daily price.

```
# THIS PART COMPUTES AVERAGES AND PLOTS THE AVERAGE PRICE OF EACH DAY
promDia = df4.groupby(['Año', 'Mes', 'Dia calendario'])['Precio'].mean().values
# print(promDia.shape)
promMes = df4.groupby(['Año', 'Mes'])['Precio'].mean().values
# print(promMes.shape)

matplotlib.pyplot.plot(dias, promDia)
matplotlib.pyplot.title("Precio Promedio por Dia")
matplotlib.pyplot.ylabel('$/kwh')
matplotlib.pyplot.xticks(rotation=70)
matplotlib.pyplot.show()
```

**6.--** Compute and plot the maximum price per month.

```
# THIS PART EXTRACTS AND PLOTS THE MAXIMUM MONTHLY PRICE
maxMes = pandas.Series(df4.groupby(['Año', 'Mes'])['Precio'].max().values)
matplotlib.pyplot.plot(añosymeses, maxMes)
matplotlib.pyplot.title("Precio Máximo por Mes")
matplotlib.pyplot.ylabel('$/kwh')
matplotlib.pyplot.xticks(rotation=70)
matplotlib.pyplot.show()
```

**7.--** Compute and plot the minimum monthly price.

```
# THIS PART EXTRACTS AND PLOTS THE MINIMUM MONTHLY PRICE
minMes = pandas.Series(df4.groupby(['Año', 'Mes'])['Precio'].min().values)
matplotlib.pyplot.plot(añosymeses, minMes)
matplotlib.pyplot.title("Precio Mínimo por Mes")
matplotlib.pyplot.ylabel('$/kwh')
matplotlib.pyplot.xticks(rotation=70)
matplotlib.pyplot.show()
```

**8.--** Make a chart comparing the maximum monthly price (for each month) and the average monthly price.
``` matplotlib.pyplot.plot(añosymeses, maxMes, linestyle=':', color='r',label='Precio Máximo Mensual')#marker='x', linestyle=':', color='b',label='Precio Máximo Mensual') matplotlib.pyplot.ion() matplotlib.pyplot.plot(añosymeses, promMes, linestyle='--', color='b',label='Precio Promedio Mensual')#marker='o', linestyle='--', color='r',label='Precio Promedio Mensual') matplotlib.pyplot.legend(loc="best") matplotlib.pyplot.title("Precios Máximos y Promedio por Mes") matplotlib.pyplot.ylabel('$/kwh') matplotlib.pyplot.xticks(rotation=70) matplotlib.pyplot.show() ``` **9.--** Haga un histograma que muestre a que horas se produce el máximo precio diario para los días laborales. ``` okm=df4.groupby(['Año','Dia', 'Hora'],as_index = False)['Precio'].max() okm2=okm.groupby(['Año','Dia'],as_index = False)['Precio'].max() colunasencomun=list(set(okm.columns) & set(okm2.columns)) okm3=pandas.merge(okm,okm2, on=colunasencomun, how='inner') okm4=okm3.groupby(['Año','Dia'],as_index = False).max() fechagrafhabil=[] horagrafhabil=[] fechagrafSAB=[] horagrafSAB=[] fechagrafDOM=[] horagrafDOM=[] for vbg in range(len(okm4)): if okm4.ix[vbg,'Dia']== 'Saturday': fechagrafSAB.append(str(okm4.ix[vbg,'Año'])+'-'+str(okm4.ix[vbg,'Dia'])) horagrafSAB.append(str(okm4.ix[vbg,'Hora'])) elif okm4.ix[vbg,'Dia']== 'Sunday': fechagrafDOM.append(str(okm4.ix[vbg,'Año'])+'-'+str(okm4.ix[vbg,'Dia'])) horagrafDOM.append(str(okm4.ix[vbg,'Hora'])) else: fechagrafhabil.append(str(okm4.ix[vbg,'Año'])+'-'+str(okm4.ix[vbg,'Dia'])) horagrafhabil.append(str(okm4.ix[vbg,'Hora'])) horagrafhabil=pandas.Series(horagrafhabil).convert_objects(convert_numeric=True) horagrafSAB=pandas.Series(horagrafSAB).convert_objects(convert_numeric=True) horagrafDOM=pandas.Series(horagrafDOM).convert_objects(convert_numeric=True) indixx=numpy.arange(len(horagrafhabil)) matplotlib.pyplot.barh(indixx,horagrafhabil,align = "center") matplotlib.pyplot.yticks(indixx, fechagrafhabil) matplotlib.pyplot.xlabel('Horas') 
matplotlib.pyplot.ylabel('Año-Dia')
matplotlib.pyplot.title('Hora de precio máximo para los días laborales de cada año.')
matplotlib.pyplot.xticks(numpy.arange(0, 24, 1))
matplotlib.pyplot.xlim(0, 23.05)
matplotlib.pyplot.ylim(-1, len(horagrafhabil))
matplotlib.pyplot.grid(True)
matplotlib.pyplot.show()
```

**10.--** Make a histogram showing at which hours the maximum daily price occurs on Saturdays.

```
indixx = numpy.arange(len(horagrafSAB))
matplotlib.pyplot.barh(indixx, horagrafSAB, align="center")
matplotlib.pyplot.yticks(indixx, fechagrafSAB)
matplotlib.pyplot.xlabel('Horas')
matplotlib.pyplot.ylabel('Año-Dia')
matplotlib.pyplot.title('Hora de precio máximo para los Sábados de cada año.')
matplotlib.pyplot.xticks(numpy.arange(0, 24, 1))
matplotlib.pyplot.xlim(0, 23.05)
matplotlib.pyplot.ylim(-1, len(horagrafSAB))
matplotlib.pyplot.grid(True)
matplotlib.pyplot.show()
```

**11.--** Make a histogram showing at which hours the maximum daily price occurs on Sundays.

```
indixx = numpy.arange(len(horagrafDOM))
matplotlib.pyplot.barh(indixx, horagrafDOM, align="center")
matplotlib.pyplot.yticks(indixx, fechagrafDOM)
matplotlib.pyplot.xlabel('Horas')
matplotlib.pyplot.ylabel('Año-Dia')
matplotlib.pyplot.title('Hora de precio máximo para los Domingos de cada año.')
matplotlib.pyplot.xticks(numpy.arange(0, 24, 1))
matplotlib.pyplot.xlim(0, 23.05)
matplotlib.pyplot.ylim(-1, len(horagrafDOM))
matplotlib.pyplot.grid(True)
matplotlib.pyplot.show()
```

**12.--** Print a table with the date and the lowest spot-market price value for each year.
```
okm = df4.groupby(['Año', 'Dia', 'Hora'], as_index=False)['Precio'].min()
print('okm=', okm)
okm2 = okm.groupby(['Año', 'Dia'], as_index=False)['Precio'].min()
print('okm2=', okm2)
colunasencomun = list(set(okm.columns) & set(okm2.columns))
okm3 = pandas.merge(okm, okm2, on=colunasencomun, how='inner')
print('okm3=', okm3)
okm4 = okm3.groupby(['Año', 'Dia'], as_index=False).min()
print('okm4=', okm4)
```

**13.--** Plot the daily average price and the monthly average price.

```
promDia = df4.groupby(['Año', 'Mes', 'Dia calendario'], as_index=False)['Precio'].mean()
print(promDia)
promMes = df4.groupby(['Año', 'Mes'], as_index=False)['Precio'].mean()
print(promMes)

# Look up the monthly average that corresponds to each day
# (.loc with brackets replaces the original .ix(...) call syntax, which was invalid)
promMescompara = []
for thnm in range(len(promDia)):
    for ijn in range(len(promMes)):
        if promDia.loc[thnm, 'Año'] == promMes.loc[ijn, 'Año']:
            if promDia.loc[thnm, 'Mes'] == promMes.loc[ijn, 'Mes']:
                promMescompara.append(promMes.loc[ijn, 'Precio'])
promMescompara = pandas.to_numeric(pandas.Series(promMescompara))
print(promMescompara)

matplotlib.pyplot.plot(promDia['Precio'].values, color='b', label='Daily average price')
matplotlib.pyplot.plot(promMescompara.values, color='r', label='Monthly average price')
matplotlib.pyplot.legend(loc="best")
matplotlib.pyplot.title("Daily and monthly average price")
matplotlib.pyplot.ylabel('$/kWh')
matplotlib.pyplot.show()
```

---
github_jupyter
## Advanced usage

### Using config files

Instead of specifying all inputs using [set_input](https://inbo.github.io/niche_vlaanderen/lowlevel.html#niche_vlaanderen.Niche.set_input), it is possible to use a config file. A config file can be loaded using [read_config_file](https://inbo.github.io/niche_vlaanderen/lowlevel.html#niche_vlaanderen.Niche.read_config_file) or it can be read and executed immediately by using [run_config_file](https://inbo.github.io/niche_vlaanderen/lowlevel.html#niche_vlaanderen.Niche.run_config_file).

The syntax of the config file is explained in more detail in [Niche Configuration file](https://inbo.github.io/niche_vlaanderen/cli.html), but is already introduced here because it will be used in the next examples. If you want to recreate the examples below, the config files can be found under the `docs` folder, so if you [extract all the data](https://inbo.github.io/niche_vlaanderen/getting_started.html#Interactive-usage) you should be able to run the examples from the notebook.

### Comparing Niche classes

Niche models can be compared using a [NicheDelta](lowlevel.rst#niche_vlaanderen.NicheDelta) class. This can be used to compare different scenarios. In our example, we will compare the results of running Niche twice, once using a simple model and once using a full model.

```
import niche_vlaanderen as nv
import matplotlib.pyplot as plt

simple = nv.Niche()
simple.run_config_file("simple.yml")

full = nv.Niche()
full.run_config_file("full.yml")

delta = nv.NicheDelta(simple, full)
ax = delta.plot(7)
plt.show()
```

It is also possible to show the areas in a dataframe by using the [table](lowlevel.rst#niche_vlaanderen.NicheDelta.table) attribute.

```
delta.table.head()
```

Like Niche, NicheDelta also has a write method, which takes a directory as an argument.
```
delta.write("comparison_output", overwrite_files=True)
```

### Creating deviation maps

In many cases, it is not only important to find out which vegetation types are possible given the different input files, but also to find out how much change would be required to `mhw` or `mlw` to allow a certain vegetation type. To create deviation maps, it is necessary to [run](lowlevel.rst#niche_vlaanderen.Niche.run) a model with the `deviation` option.

```
dev = nv.Niche()
dev.set_input("mhw", "../testcase/zwarte_beek/input/mhw.asc")
dev.set_input("mlw", "../testcase/zwarte_beek/input/mlw.asc")
dev.set_input("soil_code", "../testcase/zwarte_beek/input/soil_code.asc")
dev.run(deviation=True, full_model=False)
```

The deviation maps can be plotted by specifying either `mhw` or `mlw` together with the vegetation type, e.g. `mhw_14` (to show the deviation between the actual `mhw` and the `mhw` required for vegetation type 14). Positive values indicate that the actual condition is too dry for the vegetation type; negative values indicate that it is too wet.

```
dev.plot("mlw")
dev.plot("mlw_14")
plt.show()
```

### Creating statistics per shape object

Niche also contains a helper function that allows one to calculate the possible vegetation using a vector dataset, such as a `.geojson` or `.shp` file. The vegetation is returned as a pandas dataframe, where shapes are identified by their id and the area not covered by a shape gets `shape_id` -1.

```
df = full.zonal_stats("../testcase/zwarte_beek/input/study_area_l72.geojson")
df
```

### Using abiotic grids

In certain cases the intermediate grids of Acidity or NutrientLevel need changes, to compensate for specific circumstances. In that case it is possible to run a Niche model, make some adjustments to the grids, and then use the adjusted abiotic grid as an input.
``` import niche_vlaanderen as nv import matplotlib.pyplot as plt full = nv.Niche() full.run_config_file("full.yml") full.write("output_abiotic", overwrite_files=True) ``` Now it is possible to adapt the `acidity` and `nutrient_level` grids outside niche. For this demo, we will use some Python magic to make all nutrient levels one level lower. Note that there is no need to do this in Python, any other tool could be used as well. So if you don't understand this code - don't panic (and ignore the warning)! ``` import rasterio with rasterio.open("output_abiotic/full_nutrient_level.tif") as src: nutrient_level = src.read(1) profile = src.profile nodata = src.nodatavals[0] nutrient_level[nutrient_level != nodata] = nutrient_level[nutrient_level != nodata] -1 # we can not have nutrient level 0, so we set all places where this occurs to 1 nutrient_level[nutrient_level ==0 ] = 1 with rasterio.open("output_abiotic/adjusted_nutrient.tif", 'w', **profile) as dst: dst.write(nutrient_level, 1) ``` Next we will create a new niche model using the same options as our previous full models, but we will also add the previously calculated acidity and nutrient level values as input, and run with the `abiotic=True` option. Note that we use the `read_config_file` method (and not `run_config_file`) because we still want to edit the configuration before running. ``` adjusted = nv.Niche() adjusted.read_config_file("full.yml") adjusted.set_input("acidity", "output_abiotic/full_acidity.tif") adjusted.set_input("nutrient_level", "output_abiotic/adjusted_nutrient.tif") adjusted.name = "adjusted" adjusted.run(abiotic=True) adjusted.plot(7) full.plot(7) plt.show() ``` ### Overwriting standard code tables One is free to adapt the [standard code tables](https://inbo.github.io/niche_vlaanderen/codetables.html) that are used by NICHE. By specifying the paths to the adapted code tables in a NICHE class object, the standard code tables can be overwritten. 
In this way, standard model functioning can be tweaked. However, it is strongly advised to use ecological data that has been reviewed by experts and to have in-depth knowledge of the [model functioning](https://inbo.github.io/niche_vlaanderen/model.html).

The code tables that can be adapted and set within a [NICHE object](https://inbo.github.io/niche_vlaanderen/lowlevel.html) are: `ct_acidity`, `ct_soil_mlw_class`, `ct_soil_codes`, `lnk_acidity`, `ct_seepage`, `ct_vegetation`, `ct_management`, `ct_nutrient_level` and `ct_mineralisation`.

After adapting the vegetation code table for type 7 (Caricion gracilis) on peaty soil (V) by arbitrarily altering the maximum `mhw` and `mlw` to 5 and 4 cm respectively (i.e. below ground, instead of the standard values of -28 and -29 cm) and saving the file to `ct_vegetation_adj7.csv`, the adjusted model can be built and run.

```
adjusted_ct = nv.Niche(ct_vegetation="ct_vegetation_adj7.csv")
adjusted_ct.read_config_file("full.yml")
adjusted_ct.run()
```

Example of the changed potential area of the Caricion gracilis vegetation type resulting from the changes made in the vegetation code table:

```
adjusted_ct.plot(7)
full.plot(7)
plt.show()
```

The potential area shrinks because the range of groundwater levels has become narrower (excluding the wettest places).
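As an aside, an adjusted table such as `ct_vegetation_adj7.csv` can be produced with a few lines of pandas. The sketch below works on a hypothetical stand-in for the vegetation table — the column names (`veg_code`, `soil_name`, `mhw_max`, `mlw_max`) are assumptions for illustration, and the real layout should be checked against the code tables documentation before editing your own copy.

```python
import pandas as pd

# Hypothetical stand-in for the ct_vegetation table; the real column
# names and codes may differ from these assumptions.
ct = pd.DataFrame({
    "veg_code": [7, 7, 8],
    "soil_name": ["V", "L1", "V"],
    "mhw_max": [-28, -30, -20],
    "mlw_max": [-29, -35, -25],
})

# Adjust the maximum mhw/mlw for vegetation type 7 on peaty soil (V)
mask = (ct["veg_code"] == 7) & (ct["soil_name"] == "V")
ct.loc[mask, ["mhw_max", "mlw_max"]] = [5, 4]

ct.to_csv("ct_vegetation_adj7.csv", index=False)
```

In practice you would start from `pd.read_csv` on your copy of the shipped table instead of constructing the frame by hand.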
### Requirements #### Jupyter Nbextensions - Python Markdown - Load Tex Macros #### Python & Libs - Python version $\geq$ 3.4 - Numpy version $\geq$ 1.17 - Pandas version $\geq$ 1.0.3 ``` import string import operator import functools import numpy as np import pandas as pd from collections import Counter from IPython.display import display, Math, Latex, Markdown ref_1 = 'The cat sat on the mat.' cand_1 = 'The cat is on the mat.' ref_2 = 'There is a cat on the mat.' cand_2 = 'The the the the the the the the.' preprocess = lambda x: x.lower().translate(str.maketrans('', '', string.punctuation)) def extract(sentence): sentence = preprocess(sentence) uni_gram = sentence.split() bi_gram = [' '.join(words) for words in zip(uni_gram[::], uni_gram[1::])] tri_gram = [' '.join(words) for words in zip(uni_gram[::], uni_gram[1::], uni_gram[2::])] quad_gram = [' '.join(words) for words in zip(uni_gram[::], uni_gram[1::], uni_gram[2::], uni_gram[3::])] return uni_gram, bi_gram, tri_gram, quad_gram ``` ## N-gram Evaluation ### Example Reference `{{ref_1}}` $\xrightarrow[\text{}]{\text{ Preprocessing }}$ `{{preprocess(ref_1)}}` $\xrightarrow[\text{}]{\text{Extract 1-gram}} $ `{{extract(ref_1)[0]}}` $\xrightarrow[\text{}]{\text{Extract 2-gram}} $ `{{extract(ref_1)[1]}}` $\xrightarrow[\text{}]{\text{Extract 3-gram}} $ `{{extract(ref_1)[2]}}` Candidate `{{cand_1}}` $\xrightarrow[\text{}]{\text{ Preprocessing }}$ `{{preprocess(cand_1)}}` $\xrightarrow[\text{}]{\text{Extract 1-gram}} $ `{{extract(cand_1)[0]}}` $\xrightarrow[\text{}]{\text{Extract 2-gram}} $ `{{extract(cand_1)[1]}}` $\xrightarrow[\text{}]{\text{Extract 3-gram}} $ `{{extract(cand_1)[2]}}` ## Considering Recall % ### Modified Precision - Clipping ### Example Candidate `{{cand_2}}` $\xrightarrow[\text{}]{\text{ Preprocessing }}$ `{{preprocess(cand_2)}}` $\xrightarrow[\text{}]{\text{Extract 1-gram}} $ `{{extract(cand_2)[0]}}` $\xrightarrow[\text{}]{\text{Extract 2-gram}} $ `{{extract(cand_2)[1]}}` 
$\xrightarrow[\text{}]{\text{Extract 3-gram}} $ `{{extract(cand_2)[2][:2] + ['...']}}`

## [BLEU - Bilingual Evaluation Understudy](https://www.aclweb.org/anthology/P02-1040.pdf)

### BLEU_n Formula

$ \begin{align} \quad BLEU = BP \cdot exp(\sum_{n=1}^{N} w_n\log_{}{P_n}) \cr \end{align} $

$ \begin{align} \quad BP \quad\,\ = \begin{cases} 1 &, \ c > r \cr exp(1-\frac{r}{c}) &, \ c \leq r \cr \end{cases} \end{align} $

```
def BLEU_n(candidate, reference):
    candidate = extract(candidate)
    reference = extract(reference)
    BLEU = 0
    W_n = 1. / len(candidate)
    for cand, ref in zip(candidate, reference):
        BLEU += W_n * np.log(P_n(cand, ref))
    BLEU = np.exp(BLEU) * BP(candidate[0], reference[0])
    return BLEU

def P_n(cand, ref):
    count = 0
    for c in cand:
        if c in ref:
            count += 1
            ref.remove(c)  # removing the match clips repeated n-grams
    return 1 if count == 0 else count / len(cand)  # avoid log(0) when nothing matches

def BP(candidate, reference):
    c, r = len(candidate), len(reference)
    return 1 if c > r else np.exp(1 - r / c)

#BLEU_n(cand_1, ref_1)
```

## [ROUGE - Recall-Oriented Understudy for Gisting Evaluation](https://www.aclweb.org/anthology/W04-1013.pdf)

### Rouge-N Formula

$ \begin{align} \quad \textit{Rouge-N}\; = \frac{\sum\limits_{S \in \{\textit{ReferenceSummaries}\}} \sum\limits_{gram_n \in S} Count_{match}(gram_n)} {\sum\limits_{S \in \{\textit{ReferenceSummaries}\}} \sum\limits_{gram_n \in S} Count(gram_n)} \end{align} $

```
ref_3 = 'The cat was under the bed.'
cand_3 = 'The cat was found under the bed.'
def Rouge_n(candidate, reference, n=1): cand, ref = extract(candidate)[n-1], extract(reference)[n-1] cand = list(map(lambda x: 1 if x in ref else 0, cand)) return functools.reduce(operator.add, cand) / len(ref) #Rouge_n(cand_3, ref_3) ``` ### LCS(Longest Common Subsequence) #### Example Reference `{{ref_1}}` $\xrightarrow[\text{}]{\text{ Preprocessing }}$ `{{preprocess(ref_1)}}` Candidate `{{cand_1}}` $\xrightarrow[\text{}]{\text{ Preprocessing }}$ `{{preprocess(cand_1)}}` LCS `the cat on the mat` ### Rouge-L Formula $ \begin{align} \cr \quad R_{lcs} = \frac{LCS(\textit{Reference}, \textit{Candidate})}{m}, \; m \;\text{for }\textit{Reference} \text{ length} \cr P_{lcs} = \frac{LCS(\textit{Reference}, \textit{Candidate})}{n}, \; n \;\text{for }\textit{Candidate} \text{ length} \cr \end{align} $ $ \begin{align} \quad \; F_{lcs} = \frac{(1+\beta^2)R_{lcs}P_{lcs}}{R_{lcs} + \beta^2P_{lcs}} \end{align} $ ``` def Rouge_l(candidate, reference, beta=1.2): cand, ref = extract(candidate)[0], extract(reference)[0] lcs = LCS(cand, ref) r_lcs, p_lcs = lcs / len(ref), lcs / len(cand) return ((1 + beta**2)*r_lcs*p_lcs) / (r_lcs + beta**2*p_lcs) def LCS(cand, ref): l_c, l_r = len(cand), len(ref) dp = np.zeros(shape=(l_c + 1, l_r + 1)) for i in range(l_c): for j in range(l_r): if cand[i] == ref[j]: dp[i + 1][j + 1] = dp[i][j] + 1 elif dp[i + 1][j] > dp[i][j + 1]: dp[i + 1][j + 1] = dp[i + 1][j] else: dp[i + 1][j + 1] = dp[i][j + 1] return int(dp[-1][-1]) #Rouge_l(cand_1, ref_1) ``` ### Rouge-W - WLCS - Weighted LCS-based statistics that favors consecutive LCSes. ### Rouge-S - Skip-gram - Skip-bigram based co-occurrence statistics. - Skip-bigram is any pair of words in their sentence order. ### Rouge-SU - Skip-bigram plus unigram-based co-occurrence statistics. 
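Rouge-S is only described above, not implemented. As a complement to `Rouge_n` and `Rouge_l`, here is a minimal, recall-oriented sketch of skip-bigram matching; the `skip_bigrams`/`rouge_s` names are ours (not from the notebook or the ROUGE paper), and for simplicity repeated skip-bigrams are not clipped.

```python
from itertools import combinations

def skip_bigrams(sentence):
    # A skip-bigram is any ordered pair of words in sentence order,
    # with an arbitrary gap between them.
    words = sentence.lower().split()
    return [' '.join(pair) for pair in combinations(words, 2)]

def rouge_s(candidate, reference):
    # Recall-oriented: matched skip-bigrams over reference skip-bigrams.
    cand = skip_bigrams(candidate)
    ref = skip_bigrams(reference)
    matches = sum(1 for sb in cand if sb in ref)
    return matches / len(ref) if ref else 0.0
```

Rouge-SU would additionally pool unigram matches into the same counts, which softens the score for candidates that share words but few ordered pairs.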
## [CIDEr - Consensus-based Image Description Evaluation](https://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Vedantam_CIDEr_Consensus-Based_Image_2015_CVPR_paper.pdf)

### TF-IDF

- Term Frequency

$ \begin{align} \quad \textit{TF}(x) \;\text{for count of term } x \;\text{ in the document} \cr \end{align} $

- Inverse Document Frequency

$ \begin{align} \quad \textit{IDF}(x) = \log \frac{N + 1}{N(x) + 1} +1 ,\; N\;\text{for the total document count and }N(x) \text{ for the number of documents that include term } x \cr \end{align} $

- TF-IDF

$ \begin{align} \quad \textit{TF-IDF}(x) = \textit{TF}(x)\;\times\;\textit{IDF}(x) \cr \end{align} $

### Cosine Similarity

- The cosine of two non-zero vectors can be derived by using the Euclidean dot product formula

$ \begin{align} \quad A \cdot B = \lVert A \rVert \lVert B \rVert \cos{\theta} \cr \end{align} $

- Similarity

$ \begin{align} \quad \textit{Similarity } = \cos{(\theta)} = \frac{A \cdot B} {\lVert A \rVert \lVert B \rVert} = \frac{\sum\limits_{i=1}^{N}A_i B_i} {\sqrt{\sum\limits_{i=1}^{N}A_i^2} \sqrt{\sum\limits_{i=1}^{N}B_i^2}}, \text{ where } A_i \text{ and } B_i \text{ are components of vectors } A \text{ and } B \text{ respectively.} \end{align} $

```
@np.vectorize
def nonzero(x):
    return 1 if x > 0 else 0

@np.vectorize
def TFIDF(x, IDF):
    return x * IDF

def gen_matrix(documents):
    # Build a shared vocabulary and a document-term count matrix
    d = Counter()
    for doc in documents:
        d += Counter(doc)
    d = dict(d.most_common()).fromkeys(d, 0)
    matrix = []
    for doc in documents:
        dest = d.copy()
        dest.update(dict(Counter(doc)))
        matrix.append(list(dest.values()))
    return d, matrix

def gen_table(d, matrix, fn):
    columns = {}
    index = ['Candidate'] + ['Reference_{}'.format(i+1) for i in range(len(matrix) - 1)]
    if fn.__name__ == 'CIDEr':
        for idx, key in enumerate(list(d.keys()) + ['CIDEr']):
            columns.update({idx: key})
        N = [len(matrix)]*len(matrix[0])
        N_x = np.matrix.sum(nonzero(np.matrix(matrix)), axis=0).tolist()[0]
        N = np.array(N) + np.array(1)
        N_x = np.array(N_x) + np.array(1)
        IDF = (np.log(N / N_x) /
               np.log(10) + (np.array(1)))
        matrix = TFIDF(matrix, IDF).tolist()
    else:
        for idx, key in enumerate(list(d.keys()) + ['Cos_Sim']):
            columns.update({idx: key})
    temp = []
    for i in range(len(matrix)):
        temp.append(matrix[i] + [fn(matrix[0], matrix[i])])
    df = pd.DataFrame(temp).rename(columns=columns)
    df.reset_index(drop=True, inplace=True)
    df.index = index
    return Markdown(df.to_markdown())

def cosine_similarity(cand, ref):
    fn = lambda x: (np.sqrt(np.sum(np.power(x, 2))))
    return np.dot(cand, ref) / (fn(cand) * fn(ref))

# Cosine similarity tables for 1- to 4-grams
table = []
for i in range(0, 4):
    table.append(gen_table(*gen_matrix([extract(cand_1)[i], extract(ref_1)[i], extract(ref_2)[i]]), cosine_similarity))
```

### Example

Candidate

`{{cand_1}}`

$\xrightarrow[\text{}]{\text{ Preprocessing }}$ `{{preprocess(cand_1)}}` $\xrightarrow[\text{}]{\text{Extract 2-gram}} $ `{{extract(cand_1)[1]}}`

Reference_1

`{{ref_1}}`

$\xrightarrow[\text{}]{\text{ Preprocessing }}$ `{{preprocess(ref_1)}}` $\xrightarrow[\text{}]{\text{Extract 2-gram}}$ `{{extract(ref_1)[1]}}`

Reference_2

`{{ref_2}}`

$\xrightarrow[\text{}]{\text{ Preprocessing }}$ `{{preprocess(ref_2)}}` $\xrightarrow[\text{}]{\text{Extract 2-gram}}$ `{{extract(ref_2)[1]}}`

### Doc-Term Matrix with Cosine Similarity

`{{table[0]}}`

`{{table[1]}}`

`{{table[2]}}`

### CIDEr_n Formula

$ \begin{align} \cr \quad \textit{CIDEr_n}(\textit{candidate}, \textit{references}) = \frac{1}{M}\sum\limits_{i=1}^{M} \frac{ g^n(\textit{candidate}) \cdot g^n(\textit{references}) } {\lVert g^n(\textit{candidate}) \rVert \times \lVert g^n(\textit{references}) \rVert}, \text{ where } g^n(x) \text{ is } \textit{TF-IDF} \text{ weight of n-gram in sentence } x \text{.} \end{align} $

```
def CIDEr(cand, ref):
    # Identical to cosine_similarity; the function name switches
    # gen_table into its TF-IDF branch.
    fn = lambda x: (np.sqrt(np.sum(np.power(x, 2))))
    return np.dot(cand, ref) / (fn(cand) * fn(ref))

table = []
for i in range(0, 3):
    table.append(gen_table(*gen_matrix([extract(cand_1)[i], extract(ref_1)[i], extract(ref_2)[i]]), CIDEr))
```

### Doc-Term TF-IDF Matrix with
CIDEr-n `{{table[0]}}` `{{table[1]}}` `{{table[2]}}`
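The tables above show per-n similarities, but the averaging over the $M$ references and the n-gram sizes from the CIDEr_n formula is not assembled anywhere in the notebook. Below is a self-contained sketch of that combination; note that it estimates document frequencies from the candidate and references alone (CIDEr proper computes IDF over the whole corpus), it uses the same $\log_{10}(\frac{N+1}{N(x)+1})+1$ smoothing as the code above, and the helper names are ours.

```python
import math
from collections import Counter

def ngrams(sentence, n):
    words = sentence.lower().split()
    return [' '.join(words[i:i + n]) for i in range(len(words) - n + 1)]

def tfidf_vector(grams, doc_freq, n_docs):
    # Smoothed IDF, matching the formula used in the notebook's gen_table
    tf = Counter(grams)
    return {g: tf[g] * (math.log10((n_docs + 1) / (doc_freq[g] + 1)) + 1)
            for g in tf}

def cosine(u, v):
    dot = sum(u[k] * v.get(k, 0.0) for k in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def cider(candidate, references, max_n=4):
    # Average TF-IDF cosine similarity over the M references,
    # uniformly weighted over n-gram sizes 1..max_n.
    score = 0.0
    for n in range(1, max_n + 1):
        docs = [ngrams(s, n) for s in [candidate] + references]
        doc_freq = Counter(g for d in docs for g in set(d))
        cand_vec = tfidf_vector(docs[0], doc_freq, len(docs))
        sims = [cosine(cand_vec, tfidf_vector(d, doc_freq, len(docs)))
                for d in docs[1:]]
        score += sum(sims) / len(sims) / max_n
    return score
```

A candidate identical to its single reference scores 1.0 under this sketch, and partially overlapping sentences land strictly between 0 and 1.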
## Figure tracer transport ``` #import gsw as sw # Gibbs seawater package import cmocean as cmo import matplotlib.pyplot as plt import matplotlib.colors as mcolors import matplotlib.gridspec as gspec %matplotlib inline from netCDF4 import Dataset import numpy as np import pandas as pd import seaborn as sns import sys import xarray as xr import canyon_tools.readout_tools as rout import canyon_tools.metrics_tools as mpt from IPython.display import HTML HTML('''<script> code_show=true; function code_toggle() { if (code_show){ $('div.input').hide(); } else { $('div.input').show(); } code_show = !code_show } $( document ).ready(code_toggle); </script> <form action="javascript:code_toggle()"><input type="submit" value="Click here to toggle on/off the raw code."></form>''') sns.set_context('paper') sns.set_style('white') plt.rcParams.update({'font.size': 11}) def plot_transports_CS(g0,g1,g2,g3,g4,g5, dfcan, dfdif, color, lab): ax0 = plt.subplot(g0) ax1 = plt.subplot(g1) ax2 = plt.subplot(g2) ax3 = plt.subplot(g3) ax4 = plt.subplot(g4) ax5 = plt.subplot(g5) axs = [ax0,ax1,ax2,ax3,ax4,ax5] for ax in axs: ax.axhline(0, color='gold') ax.tick_params(axis='y', pad=0) ax.tick_params(axis='x', pad=0.05) ax.grid(which='both',color='0.9', linestyle='-') ax.set_ylim(-5, 5) ax.set_xlabel('Days', labelpad=0) ax.set_xticks([0,3,6,9]) for ax in axs[1:]: #ax.set_yticks([-200,-100,0,100,200]) ax.set_yticklabels(['','','','','','']) # Tracers vertical = dfcan.Vert_adv_trans_sb # only advective parts, ignoring diffusve for now ax0.plot(np.arange(1,19,1)/2.0,(vertical)/1E6,color=color, label=lab) ax1.plot(np.arange(1,19,1)/2.0,(dfcan.CS1_adv_trans+dfcan.CS2_adv_trans)/1E6,color=color, label=lab) ax2.plot(np.arange(1,19,1)/2.0,(dfcan.CS3_adv_trans )/1E6,color=color, label=lab) ax3.plot(np.arange(1,19,1)/2.0,(dfcan.CS4_adv_trans+dfcan.CS5_adv_trans)/1E6,color=color, label=lab) ax4.plot(np.arange(1,19,1)/2.0,(dfcan.CS6_adv_trans )/1E6,color=color, label=lab) total = ( (dfcan.CS1_adv_trans ) + 
(dfcan.CS2_adv_trans ) + (dfcan.CS3_adv_trans ) + (dfcan.CS4_adv_trans ) + (dfcan.CS5_adv_trans ) + (dfcan.CS6_adv_trans ) + vertical) ax5.plot(np.arange(1,19,1)/2.0,total/1E6,color=color, label=lab) return(ax0,ax1,ax2,ax3,ax4,ax5) def plot_can_effect(gs_c, dfcan, dfdif, dfcanNoC, dfdifNoC, color, lab, id_sup): ax = plt.subplot(gs_c, xticks=[]) ax.axhline(0, color='gold') canyon = tot_trans(dfcan, dfdif) no_canyon = tot_trans(dfcanNoC, dfdifNoC) ax.plot(np.arange(1,19,1)/2.0,(canyon-no_canyon)/1E5,color=color, label=lab) ax.tick_params(axis='y', pad=0.5) ax.grid(which='both',color='0.9', linestyle='-') ax.yaxis.tick_right() if lab =='ARGO': ax.text(0.8,0.8,id_sup,transform=ax.transAxes) return(ax) def tot_trans(dfcan, dfdif): vertical = (dfdif.Vert_dif_trans_sb + dfcan.Vert_adv_trans_sb) total = ( (dfcan.CS1_adv_trans ) + (dfcan.CS2_adv_trans ) + (dfcan.CS3_adv_trans ) + (dfcan.CS4_adv_trans ) + (dfcan.CS5_adv_trans ) + (dfcan.CS6_adv_trans ) + vertical) return(total) def plotCSPos(ax,CS1,CS2,CS3,CS4,CS5,CS6): ax.axvline(CS1,color='k',linestyle=':') ax.axvline(CS2,color='k',linestyle=':') ax.axvline(CS3,color='k',linestyle=':') ax.axvline(CS4,color='k',linestyle=':') ax.axvline(CS5,color='k',linestyle=':') ax.axvline(CS6,color='k',linestyle=':') def plot_CS_slice(fig, gs_a, gs_b, t_slice, x_slice, x_slice_vert, y_slice_vert, z_slice, z_slice_zoom, y_ind, z_ind, grid,Flux,FluxV,unit): ax_a = plt.subplot(gs_a)#,xticks=[]) ax_b = plt.subplot(gs_b)#,xticks=[]) areas = (np.expand_dims(grid.dxF.isel(X=x_slice,Y=y_ind).data,0))*(np.expand_dims(grid.drF.isel(Z=z_slice).data,1)) # Zoom shelf --------------------------------------------------------------------------- cnt = ax_a.contourf(grid.X.isel(X=x_slice)/1000, grid.Z.isel(Z=z_slice_zoom), Flux.isel(Zmd000104=z_slice_zoom, X=x_slice)/areas[z_slice_zoom,:], 16,cmap=cmo.cm.tarn, vmax=np.max(Flux.isel(Zmd000104=z_slice_zoom,X=x_slice)/areas[z_slice_zoom,:]), 
vmin=-np.max(Flux.isel(Zmd000104=z_slice_zoom,X=x_slice)/areas[z_slice_zoom,:])) ax_a.contourf(grid.X.isel(X=x_slice)/1000, grid.Z.isel(Z=z_slice_zoom), grid.HFacC.isel(Z=z_slice_zoom,Y=y_ind,X=x_slice), [0,0.1], colors='#a99582') cb_a = fig.colorbar(cnt, ax=ax_a) cb_a.ax.yaxis.set_tick_params(pad=1.5) ax_a.set_ylabel('Depth / m',labelpad=0.0) ax_a.text(0.001,0.05,'%s' %unit,transform=ax_a.transAxes, fontsize=8, color='k',fontweight='bold') # Vertical section --------------------------------------------------------------------------- cnt=ax_b.contourf(grid.X.isel(X=x_slice_vert)/1000, grid.Y.isel(Y=y_slice_vert)/1000, 100*(FluxV.isel(X=x_slice_vert,Y=y_slice_vert).data)/(grid.rA[y_slice_vert,x_slice_vert]), 16,cmap=cmo.cm.tarn, vmax= np.max(100*(FluxV.isel(X=x_slice_vert,Y=y_slice_vert).data)/(grid.rA[y_slice_vert,x_slice_vert])), vmin=-np.max(100*(FluxV.isel(X=x_slice_vert,Y=y_slice_vert).data)/(grid.rA[y_slice_vert,x_slice_vert]))) ax_b.contourf(grid.X.isel(X=x_slice_vert)/1000, grid.Y.isel(Y=y_slice_vert)/1000, grid.HFacC.isel(Z=z_ind,X=x_slice_vert,Y=y_slice_vert), [0,0.1], colors='#a99582') cb_b=fig.colorbar(cnt, ax=ax_b)#,ticks=[-2,-1,0,1,2,3,4]) cb_b.ax.yaxis.set_tick_params(pad=1.5) ax_b.set_ylabel('C-S distance / km', labelpad=0) ax_b.set_aspect(1) return(ax_a,ax_b) #Exp Grid = '/data/kramosmu/results/TracerExperiments/UPW_10TR_BF2_AST/01_Ast03/gridGlob.nc' GridOut = Dataset(Grid) GridNoC = '/data/kramosmu/results/TracerExperiments/UPW_10TR_BF2_AST/02_Ast03_No_Cny/gridGlob.nc' GridNoCOut = Dataset(GridNoC) State = '/data/kramosmu/results/TracerExperiments/UPW_10TR_BF2_AST/01_Ast03/stateGlob.nc' StateNoC = '/data/kramosmu/results/TracerExperiments/UPW_10TR_BF2_AST/02_Ast03_No_Cny/stateGlob.nc' units = ['$10^5$ $\mu$mol kg$^{-1}$ m$^3$s$^{-1}$', '$10^5$ nM m$^3$s$^{-1}$', '$10^5$ $\mu$mol kg$^{-1}$ m$^3$s$^{-1}$'] tracers = ['TR03','TR08','TR09'] tr_labels = ['Oxygen','Methane','DIC'] exps = ['UPW_10TR_BF2_AST/01_Ast03', 'UPW_10TR_BF2_AST/03_Ast03_Argo', 
'UPW_10TR_BF4_BAR/01_Bar03', 'UPW_10TR_BF4_BAR/03_Bar03_Path'] expsNoC = ['UPW_10TR_BF2_AST/02_Ast03_No_Cny', 'UPW_10TR_BF2_AST/04_Ast03_No_Cny_Argo', 'UPW_10TR_BF4_BAR/02_Bar03_No_Cny', 'UPW_10TR_BF4_BAR/04_Bar03_No_Cny_Path'] colors = ['steelblue', 'skyblue', 'orangered', 'lightsalmon'] labels = ['Astoria','ARGO', 'Barkley', 'Pathways'] subplots_id = ['a3', 'b3', 'c3',] t_slice = slice(10,20) x_slice = slice(0,400) x_slice_vert = slice(120,240) y_slice_vert = slice(130,230) z_slice = slice(0,80) z_slice_zoom = slice(0,31) y_ind = 130 # sb index z_ind = 30 # sb index fig = plt.figure(figsize = (7.48,7.05)) gg = gspec.GridSpec(2, 1, hspace=0.12, height_ratios=[3,1]) gs = gspec.GridSpecFromSubplotSpec(3, 1, subplot_spec=gg[0]) gs1 = gspec.GridSpecFromSubplotSpec(1, 3, subplot_spec=gs[0],hspace=0.15,wspace=0.1,width_ratios=[1,0.43,0.38]) gs3 = gspec.GridSpecFromSubplotSpec(1, 3, subplot_spec=gs[1],hspace=0.15,wspace=0.1,width_ratios=[1,0.43,0.38]) gs4 = gspec.GridSpecFromSubplotSpec(1, 3, subplot_spec=gs[2],hspace=0.15,wspace=0.1,width_ratios=[1,0.43,0.38]) gs5 = gspec.GridSpecFromSubplotSpec(1, 6, subplot_spec=gg[1]) ggs = [gs1,gs3,gs4] grid = xr.open_dataset(Grid) # This is horrible ------------------------------------------------------------------------------------ # - Oxygen flux_file = ('/data/kramosmu/results/TracerExperiments/UPW_10TR_BF2_AST/01_Ast03/Flux%sGlob.nc' %tracers[0]) flux = xr.open_dataset(flux_file) adv_flux_AP = (flux.ADVyTr03[t_slice,:,y_ind,:]).mean(dim='T') dif_flux_AP = (flux.DFyETr03[t_slice,:,y_ind,:]).mean(dim='T') Flux = adv_flux_AP + dif_flux_AP adv_fluxV_AP = (flux.ADVrTr03[t_slice,z_ind,:,:]).mean(dim='T') dif_fluxV_AP = (flux.DFrITr03[t_slice,z_ind,:,:]+flux.DFrETr03[t_slice,z_ind,:,:]).mean(dim='T') FluxV = adv_fluxV_AP + dif_fluxV_AP ax3,ax4 = plot_CS_slice(fig, gs1[0], gs1[1],t_slice, x_slice, x_slice_vert, y_slice_vert, z_slice, z_slice_zoom, y_ind, z_ind, grid,Flux,FluxV, units[0]) 
ax3.text(0.05,0.85,tr_labels[0],fontweight='bold',transform=ax3.transAxes) ax3.text(0.85,0.8,'a1',transform=ax3.transAxes) ax4.text(0.7,0.8,'a2',transform=ax4.transAxes) ax3.tick_params(axis='y', pad=0.5) ax4.tick_params(axis='y', pad=0.5) # - Methane flux_file = ('/data/kramosmu/results/TracerExperiments/UPW_10TR_BF2_AST/01_Ast03/Flux%sGlob.nc' %tracers[1]) flux = xr.open_dataset(flux_file) adv_flux_AP = (flux.ADVyTr08[t_slice,:,y_ind,:]).mean(dim='T') dif_flux_AP = (flux.DFyETr08[t_slice,:,y_ind,:]).mean(dim='T') Flux = adv_flux_AP + dif_flux_AP adv_fluxV_AP = (flux.ADVrTr08[t_slice,z_ind,:,:]).mean(dim='T') dif_fluxV_AP = (flux.DFrITr08[t_slice,z_ind,:,:]+flux.DFrETr08[t_slice,z_ind,:,:]).mean(dim='T') FluxV = adv_fluxV_AP + dif_fluxV_AP ax7,ax8 = plot_CS_slice(fig, gs3[0], gs3[1],t_slice, x_slice, x_slice_vert, y_slice_vert, z_slice, z_slice_zoom, y_ind, z_ind, grid,Flux,FluxV, units[1]) ax7.text(0.05,0.85,tr_labels[1],fontweight='bold',transform=ax7.transAxes) ax7.text(0.85,0.8,'b1',transform=ax7.transAxes) ax8.text(0.7,0.8,'b2',transform=ax8.transAxes) ax7.tick_params(axis='y', pad=0.5) ax8.tick_params(axis='y', pad=0.5) # - DIC flux_file = ('/data/kramosmu/results/TracerExperiments/UPW_10TR_BF2_AST/01_Ast03/Flux%sGlob.nc' %tracers[2]) flux = xr.open_dataset(flux_file) adv_flux_AP = (flux.ADVyTr09[t_slice,:,y_ind,:]).mean(dim='T') dif_flux_AP = (flux.DFyETr09[t_slice,:,y_ind,:]).mean(dim='T') Flux = adv_flux_AP + dif_flux_AP adv_fluxV_AP = (flux.ADVrTr09[t_slice,z_ind,:,:]).mean(dim='T') dif_fluxV_AP = (flux.DFrITr09[t_slice,z_ind,:,:]+flux.DFrETr09[t_slice,z_ind,:,:]).mean(dim='T') FluxV = adv_fluxV_AP + dif_fluxV_AP ax9,ax10 = plot_CS_slice(fig, gs4[0], gs4[1],t_slice, x_slice, x_slice_vert, y_slice_vert, z_slice, z_slice_zoom, y_ind, z_ind, grid,Flux,FluxV, units[2]) ax9.text(0.05,0.85,tr_labels[2],fontweight='bold',transform=ax9.transAxes) ax9.text(0.85,0.8,'c1',transform=ax9.transAxes) ax10.text(0.7,0.8,'c2',transform=ax10.transAxes) 
ax9.tick_params(axis='y', pad=0.5) ax10.tick_params(axis='y', pad=0.5) #ax9.set_xticks([20,40,60,80,100,120,140]) #ax10.set_xticks([60,80,100]) ax9.set_xlabel('Alongshelf distance / km', labelpad=0) ax10.set_xlabel('Alongshelf dist. / km', labelpad=0) #------------------------------------------------------------------------------------------------------------ # - Canyon Effect for tr, unit, tr_lab, gss, id_sup in zip(tracers, units, tr_labels, ggs,subplots_id): for exp,expNoC, color, lab in zip(exps,expsNoC, colors, labels): # net canyon effect file = ('/data/kramosmu/results/TracerExperiments/%s/adv%s_CS_transports.nc' %(exp,tr)) filedif = ('/data/kramosmu/results/TracerExperiments/%s/dif%s_CS_transports.nc' %(exp,tr)) fileNoC = ('/data/kramosmu/results/TracerExperiments/%s/adv%s_CS_transports.nc' %(expNoC,tr)) filedifNoC = ('/data/kramosmu/results/TracerExperiments/%s/dif%s_CS_transports.nc' %(expNoC,tr)) dfcan = xr.open_dataset(file) dfdif = xr.open_dataset(filedif) dfcanNoC = xr.open_dataset(fileNoC) dfdifNoC = xr.open_dataset(filedifNoC) axx = plot_can_effect(gss[2], dfcan, dfdif, dfcanNoC, dfdifNoC, color, lab, id_sup) axx.set_xticks([0,2,4,6,8]) axx.set_xticklabels(['','','','','']) if tr_lab == 'DIC': axx.set_xticklabels([0,2,4,6,8]) axx.set_xlabel('Days', labelpad=0) #------------------------------------------------------------------------------------------------------------- #- Linear profile cross-shelf transport through CS sections tr = tracers[1] unit = units[1] tr_lab = tr_labels[1] for exp, color, lab in zip(exps, colors, labels): # net canyon effect file = ('/data/kramosmu/results/TracerExperiments/%s/adv%s_CS_transports.nc' %(exp,tr)) filedif = ('/data/kramosmu/results/TracerExperiments/%s/dif%s_CS_transports.nc' %(exp,tr)) dfcan = xr.open_dataset(file) dfdif = xr.open_dataset(filedif) axa,axb,axc,axd,axe,axf,= plot_transports_CS(gs5[0],gs5[1],gs5[2],gs5[3],gs5[4],gs5[5], dfcan, dfdif, color, lab) axf.legend(ncol=1,handletextpad=0 , 
labelspacing=0.1, handlelength=1.5) #--------------------------------------------------------------------------------------------------------- # - aesthetics axa.set_ylabel('Transport \n / $10^{6}$$\mu$mol kg$^{-1}$ m$^3$s$^{-1}$', labelpad=0) axa.text(0.75,0.05,'d1',transform=axa.transAxes) axb.text(0.75,0.05,'d2',transform=axb.transAxes) axc.text(0.75,0.05,'d3',transform=axc.transAxes) axd.text(0.75,0.05,'d4',transform=axd.transAxes) axe.text(0.75,0.05,'d5',transform=axe.transAxes) axf.text(0.75,0.85,'d6',transform=axf.transAxes) plt.savefig('tracer_transport_rev01.eps',format='eps', bbox_inches='tight') ```
# Documentation
- Generate the datasets used for evotuning the esm model
- For each dataset, filter out sequences longer than 1024
- pfamA_balanced: 18000 entries for 4 clans related to motors
- motor_toolkit: motor toolkit
- kinesin_labelled: kinesin labelled dataset
- pfamA_target_shuffled: pfamA_target
- pfamA_target_sub: 396 of each protein family, for embedding visualization only

```
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision
import torchvision.transforms as transforms
import torch.nn.functional as F
from torch.autograd import Variable
from torch.nn.utils.rnn import pad_sequence
from torch.utils.data import Dataset, IterableDataset, DataLoader
# import tqdm
import numpy as np
import pandas as pd
import math

seed = 7
torch.manual_seed(seed)
np.random.seed(seed)

pfamA_motors = pd.read_csv("../../data/pfamA_motors_named.csv")
pfamA_motors.head()
sum(np.array([len(a) for a in pfamA_motors["seq"]]) < 1025)
sum(np.array([len(a) for a in pfamA_motors["seq"]]) >= 1025)
7502/1907329  # fraction of over-length sequences
pfamA_motors = pfamA_motors.loc[np.array([len(a) for a in pfamA_motors["seq"]]) < 1025, :]

motor_toolkit = pd.read_csv("../../data/motor_tookits.csv")
motor_toolkit.head()
# truncate motor_toolkit sequences to <= 1024 residues
sum(motor_toolkit["Length"] <= 1024)
motor_toolkit.loc[motor_toolkit["Length"] > 1024, "seq"] = motor_toolkit.loc[motor_toolkit["Length"] > 1024, "seq"].apply(lambda s: s[0:1024])
motor_toolkit["Length"] = motor_toolkit.loc[:, "seq"].apply(lambda s: len(s))
sum(motor_toolkit["Length"] > 1024)

kinesin_labelled = pd.read_csv("../../data/kinesin_labelled.csv")
kinesin_labelled.head()
kinesin_labelled.loc[kinesin_labelled["Length"] > 1024, "seq"] = kinesin_labelled.loc[kinesin_labelled["Length"] > 1024, "seq"].apply(lambda s: s[0:1024])
kinesin_labelled["Length"] = kinesin_labelled.loc[:, "seq"].apply(lambda s: len(s))
sum(kinesin_labelled["Length"] > 1024)

pfamA_motors_balanced = pfamA_motors.groupby('clan_x').apply(lambda _df: _df.sample(4500, random_state=1))
pfamA_motors_balanced = pfamA_motors_balanced.apply(lambda x: x.reset_index(drop = True)) pfamA_motors_balanced.shape sum(np.array([len(a) for a in pfamA_motors_balanced["seq"]])>=1025) pfamA_target_name = ["PF00349","PF00022","PF03727","PF06723",\ "PF14450","PF03953","PF12327","PF00091","PF10644",\ "PF13809","PF14881","PF00063","PF00225","PF03028"] pfamA_target = pfamA_motors.loc[pfamA_motors["pfamA_acc"].isin(pfamA_target_name),:].reset_index() pfamA_target = pfamA_target.iloc[:,1:] pfamA_target_sub = pfamA_target.sample(frac = 1).groupby("pfamA_acc").head(396) pfamA_target_sub.groupby("pfamA_acc").count() sum(np.array([len(a) for a in pfamA_target_sub["seq"]])>=1025) pfamA_target_sub.to_csv("../../data/esm/pfamA_target_sub.csv",index = False) pfamA_target.to_csv("../../data/esm/pfamA_target.csv",index = False) kinesin_labelled.to_csv("../../data/esm/kinesin_labelled.csv",index = False) motor_toolkit.to_csv("../../data/esm/motor_toolkit.csv",index = False) pfamA_motors_balanced.to_csv("../../data/esm/pfamA_motors_balanced.csv",index = False) ```
## Compare Network Architectures

Now that we have a reasonable baseline for our EasyDeepFakes dataset, let's try to improve performance. For starters, let's just compare how a variety of networks perform on this dataset. We will try:

- ResNet
- XResNet
- EfficientNet
- MesoNet
- XceptionNet

```
from fastai.core import *
from fastai.vision import *

path = Path('../data/EasyDeepFakes')
src = ImageList.from_folder(path).split_by_folder(train='train', valid='val')

def get_data(bs, size):
    data = (src.label_from_re('([A-Z]+).png$')
            .transform(get_transforms(max_warp=0, max_zoom=1), size=size)
            .databunch(bs=bs).normalize(imagenet_stats))
    return data

bs, sz = 32, 256
data = get_data(bs, sz)
data.show_batch(rows=4, figsize=(10,7))
```

# ResNet

The ResNet architecture is one of the most common and trusted baseline architectures. We will use it to establish a reasonable performance baseline and compare our other networks against it.

### ResNet18

```
from fastai.vision.models import resnet18

learner = cnn_learner(data, resnet18, metrics=[accuracy])
learner.lr_find()
learner.recorder.plot()

# Train only the head of the network
learner.fit_one_cycle(5, 1e-3)

# Unfreeze the other layers and train the entire network
learner.unfreeze()
learner.fit_one_cycle(20, max_lr=slice(1e-5, 1e-3))
```

ResNet18 gets a final accuracy of **85.6%** and a peak accuracy of **86.8%**.

### ResNet34

```
from fastai.vision.models import resnet34

learner = cnn_learner(data, resnet34, metrics=[accuracy])
learner.lr_find()
learner.recorder.plot()

# Train only the head of the network
learner.fit_one_cycle(5, 1e-3)

# Unfreeze the other layers and train the entire network
learner.unfreeze()
learner.fit_one_cycle(20, max_lr=slice(1e-5, 1e-3))
```

ResNet34 has a final accuracy of **87.6%** and a peak accuracy of **89.1%**.

### ResNet50

```
from fastai.vision.models import resnet50

learner = cnn_learner(data, resnet50, metrics=[accuracy])
learner.lr_find()
learner.recorder.plot()

# Train only the head of the network
learner.fit_one_cycle(5, 5e-3)

# Unfreeze the other layers and train the entire network
learner.unfreeze()
learner.fit_one_cycle(20, max_lr=slice(1e-5, 1e-3))
```

ResNet50 has a final accuracy of **91.1%** and a peak accuracy of **91.1%**.

# XResNet

`xresnet` is a modified ResNet architecture developed by fast.ai in accordance with the paper [Bag of Tricks for Image Classification with Convolutional Neural Networks](https://arxiv.org/abs/1812.01187). Notably, the initial 7x7 conv is replaced by a series of 3x3 convolutions. I believe they have also changed some of the 1x1 convolutions in the bottleneck layers.

**NOTE:** In fastai v1, there is no pretrained model for xresnet.

### XResNet18

```
from fastai.vision.models import xresnet18

learner = cnn_learner(data, xresnet18, metrics=[accuracy], pretrained=False)
learner.lr_find()
learner.recorder.plot()

# Train only the head of the network
learner.fit_one_cycle(5, 1e-3)

# Unfreeze the other layers and train the entire network
learner.unfreeze()
learner.fit_one_cycle(20, max_lr=slice(1e-5, 1e-3))
```

(Non-pretrained) XResNet18 gets a final accuracy of **65.0%** and a peak accuracy of **68.4%**.

### XResNet34

```
from fastai.vision.models import xresnet34

learner = cnn_learner(data, xresnet34, metrics=[accuracy], pretrained=False)
learner.lr_find()
learner.recorder.plot()

# Train only the head of the network
learner.fit_one_cycle(5, 1e-3)

# Unfreeze the other layers and train the entire network
learner.unfreeze()
learner.fit_one_cycle(20, max_lr=slice(1e-5, 1e-3))
```

`xresnet34` has a final accuracy of **68.9%** and a peak accuracy of **73.4%**.
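The stem change in xresnet can be made concrete with a quick parameter count. The channel widths 3→32→32→64 below follow the ResNet-D stem from the Bag of Tricks paper; note that with these widths the deeper stem actually carries more weights — the paper's motivation is replacing the computationally expensive 7x7 kernel with cheaper, deeper 3x3 kernels, not shrinking the parameter count:

```python
# Weight counts (biases ignored) for the two ResNet stems.
# Classic ResNet: a single 7x7 stride-2 conv, 3 -> 64 channels.
classic_stem = 7 * 7 * 3 * 64

# "Bag of Tricks" stem: three stacked 3x3 convs, 3 -> 32 -> 32 -> 64.
tricks_stem = (3 * 3 * 3 * 32
               + 3 * 3 * 32 * 32
               + 3 * 3 * 32 * 64)

print(classic_stem)  # 9408 weights
print(tricks_stem)   # 28512 weights, but each layer adds a nonlinearity
```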
### XResNet50

```
from fastai.vision.models import xresnet50

learner = cnn_learner(data, xresnet50, metrics=[accuracy], pretrained=False)
learner.lr_find()
learner.recorder.plot()

# Train only the head of the network
learner.fit_one_cycle(5, 1e-3)

# Unfreeze the other layers and train the entire network
learner.unfreeze()
learner.fit_one_cycle(20, max_lr=slice(1e-5, 1e-3))
```

`xresnet50` has a final accuracy of **71.1%** and a peak accuracy of **72.7%**.

# EfficientNet

EfficientNet is an architecture released by Google with the intention of reducing the number of parameters while maintaining good performance. There are 8 versions of EfficientNet with increasing capacity, from `efficientnetb0` to `efficientnetb7`. I haven't figured out how to set up layer groups, so I'm unable to do discriminative learning with EfficientNet, but we'll give it a shot anyway.

```
# !pip install efficientnet-pytorch
from efficientnet_pytorch import EfficientNet
```

### EfficientNetB0

```
model = EfficientNet.from_pretrained('efficientnet-b0', num_classes=data.c)
learner = Learner(data, model, metrics=[accuracy])
learner.lr_find()
learner.recorder.plot()

# Train only the head of the network
learner.fit_one_cycle(5, 1e-3)

# Unfreeze the other layers and train the entire network
learner.unfreeze()
# NOTE: Not using discriminative learning rates!
learner.fit_one_cycle(20, max_lr=1e-4)
```

`efficientnetb0` has a final accuracy of **91.1%** and a peak accuracy of **92.7%**.

### EfficientNetB1

```
model = EfficientNet.from_pretrained('efficientnet-b1', num_classes=data.c)
learner = Learner(data, model, metrics=[accuracy])
learner.lr_find()
learner.recorder.plot()

# Train only the head of the network
learner.fit_one_cycle(5, 1e-3)

# Unfreeze the other layers and train the entire network
learner.unfreeze()
# NOTE: Not using discriminative learning rates!
learner.fit_one_cycle(20, max_lr=1e-4)
```

`efficientnetb1` has a final accuracy of **91.9%** and a peak accuracy of **93.7%**.

### EfficientNetB2

```
model = EfficientNet.from_pretrained('efficientnet-b2', num_classes=data.c)
learner = Learner(data, model, metrics=[accuracy])
learner.lr_find()
learner.recorder.plot()

# Train only the head of the network
learner.fit_one_cycle(5, 1e-3)

# Unfreeze the other layers and train the entire network
learner.unfreeze()
# NOTE: Not using discriminative learning rates!
learner.fit_one_cycle(20, max_lr=1e-4)
```

`efficientnetb2` gets a final accuracy of **89.4%** and a peak accuracy of **90.1%**.

## MesoNet

MesoNet was developed to detect deep fakes in the paper [MesoNet: a Compact Facial Video Forgery Detection Network](https://arxiv.org/abs/1809.00888). As with EfficientNet, I'm unsure how to build layer groups, so we will not use discriminative fine-tuning on this network.

```
#export
# By Nathan Hubens.
# The paper implementation does not use Adaptive Average Pooling. To get the exact same implementation,
# comment out the avg_pool and uncomment the final max_pool layer.
class MesoNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 8, 3, 1, 1)    # 8 x 256 x 256
        self.bn1 = nn.BatchNorm2d(8)
        self.conv2 = nn.Conv2d(8, 8, 5, 1, 2)    # 8 x 128 x 128
        self.bn2 = nn.BatchNorm2d(8)
        self.conv3 = nn.Conv2d(8, 16, 5, 1, 2)   # 16 x 64 x 64
        self.bn3 = nn.BatchNorm2d(16)
        self.conv4 = nn.Conv2d(16, 16, 5, 1, 2)  # 16 x 32 x 32
        self.bn4 = nn.BatchNorm2d(16)
        self.avg_pool = nn.AdaptiveAvgPool2d((8))
        self.fc1 = nn.Linear(1024, 16)
        self.fc2 = nn.Linear(16, 2)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        x = self.bn1(x)
        x = F.max_pool2d(x, 2, 2)
        x = F.relu(self.conv2(x))
        x = self.bn2(x)
        x = F.max_pool2d(x, 2, 2)
        x = F.relu(self.conv3(x))
        x = self.bn3(x)
        x = F.max_pool2d(x, 2, 2)
        x = F.relu(self.conv4(x))
        x = self.bn4(x)
        #x = F.max_pool2d(x, 4, 4)
        x = self.avg_pool(x)
        x = x.reshape(x.shape[0], -1)
        x = F.dropout(x, 0.5)
        x = F.relu(self.fc1(x))
        x = F.dropout(x, 0.5)
        x = self.fc2(x)
        return x

model = MesoNet()
learner = Learner(data, model, metrics=[accuracy])
learner.lr_find()
learner.recorder.plot()

# Train only the head of the network
learner.fit_one_cycle(5, 1e-3)

# Unfreeze the other layers and train the entire network
learner.unfreeze()
# NOTE: Not using discriminative learning rates!
learner.fit_one_cycle(20, max_lr=1e-4)
```

`mesonet` has a final accuracy of **65.6%** and a peak accuracy of **67.6%**.

## XceptionNet

[XceptionNet](https://arxiv.org/abs/1610.02357) was developed to be a more performant version of Google's InceptionNet. It replaces Inception modules with depthwise separable convolution modules. It was also the best-performing model used in [FaceForensics++: Learning to Detect Manipulated Facial Images](https://arxiv.org/abs/1901.08971).
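The parameter savings from depthwise separable convolutions can be sketched numerically. For a 3x3 convolution mapping 256 input channels to 256 output channels (the channel widths here are just illustrative), a standard convolution uses one dense kernel per output channel, while the separable version splits it into a per-channel 3x3 depthwise pass plus a 1x1 pointwise pass:

```python
# Weight counts (ignoring biases) for a 3x3 conv with 256 -> 256 channels.
c_in, c_out, k = 256, 256, 3

standard = k * k * c_in * c_out    # dense 3x3 kernel per output channel
depthwise = k * k * c_in           # one 3x3 kernel per input channel
pointwise = 1 * 1 * c_in * c_out   # 1x1 conv mixing channels
separable = depthwise + pointwise

print(standard)              # 589824
print(separable)             # 67840
print(standard / separable)  # roughly 8.7x fewer parameters
```

This is the same factorization implemented by the `SeparableConv2d` module in the Xception port that follows.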
```
## xception.py
"""
Ported to pytorch thanks to [tstandley](https://github.com/tstandley/Xception-PyTorch)

@author: tstandley
Adapted by cadene

Creates an Xception Model as defined in:

Francois Chollet
Xception: Deep Learning with Depthwise Separable Convolutions
https://arxiv.org/pdf/1610.02357.pdf

These weights were ported from the Keras implementation. They achieve the following performance on the validation set:

Loss: 0.9173 Prec@1: 78.892 Prec@5: 94.292

REMEMBER to set your image size to 3x299x299 for both test and validation

normalize = transforms.Normalize(mean=[0.5, 0.5, 0.5],
                                 std=[0.5, 0.5, 0.5])

The resize parameter of the validation transform should be 333, and make sure to center crop at 299x299
"""
import math
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.utils.model_zoo as model_zoo
from torch.nn import init

pretrained_settings = {
    'xception': {
        'imagenet': {
            'url': 'http://data.lip6.fr/cadene/pretrainedmodels/xception-b5690688.pth',
            'input_space': 'RGB',
            'input_size': [3, 299, 299],
            'input_range': [0, 1],
            'mean': [0.5, 0.5, 0.5],
            'std': [0.5, 0.5, 0.5],
            'num_classes': 1000,
            'scale': 0.8975  # The resize parameter of the validation transform should be 333, and make sure to center crop at 299x299
        }
    }
}

class SeparableConv2d(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size=1, stride=1, padding=0, dilation=1, bias=False):
        super(SeparableConv2d, self).__init__()
        self.conv1 = nn.Conv2d(in_channels, in_channels, kernel_size, stride, padding, dilation, groups=in_channels, bias=bias)
        self.pointwise = nn.Conv2d(in_channels, out_channels, 1, 1, 0, 1, 1, bias=bias)

    def forward(self, x):
        x = self.conv1(x)
        x = self.pointwise(x)
        return x

class Block(nn.Module):
    def __init__(self, in_filters, out_filters, reps, strides=1, start_with_relu=True, grow_first=True):
        super(Block, self).__init__()

        if out_filters != in_filters or strides != 1:
            self.skip = nn.Conv2d(in_filters, out_filters, 1, stride=strides, bias=False)
            self.skipbn = nn.BatchNorm2d(out_filters)
        else:
            self.skip = None

        self.relu = nn.ReLU(inplace=True)
        rep = []

        filters = in_filters
        if grow_first:
            rep.append(self.relu)
            rep.append(SeparableConv2d(in_filters, out_filters, 3, stride=1, padding=1, bias=False))
            rep.append(nn.BatchNorm2d(out_filters))
            filters = out_filters

        for i in range(reps-1):
            rep.append(self.relu)
            rep.append(SeparableConv2d(filters, filters, 3, stride=1, padding=1, bias=False))
            rep.append(nn.BatchNorm2d(filters))

        if not grow_first:
            rep.append(self.relu)
            rep.append(SeparableConv2d(in_filters, out_filters, 3, stride=1, padding=1, bias=False))
            rep.append(nn.BatchNorm2d(out_filters))

        if not start_with_relu:
            rep = rep[1:]
        else:
            rep[0] = nn.ReLU(inplace=False)

        if strides != 1:
            rep.append(nn.MaxPool2d(3, strides, 1))
        self.rep = nn.Sequential(*rep)

    def forward(self, inp):
        x = self.rep(inp)

        if self.skip is not None:
            skip = self.skip(inp)
            skip = self.skipbn(skip)
        else:
            skip = inp

        x += skip
        return x

class Xception(nn.Module):
    """
    Xception optimized for the ImageNet dataset, as specified in
    https://arxiv.org/pdf/1610.02357.pdf
    """
    def __init__(self, num_classes=1000):
        """ Constructor
        Args:
            num_classes: number of classes
        """
        super(Xception, self).__init__()
        self.num_classes = num_classes

        self.conv1 = nn.Conv2d(3, 32, 3, 2, 0, bias=False)
        self.bn1 = nn.BatchNorm2d(32)
        self.relu = nn.ReLU(inplace=True)

        self.conv2 = nn.Conv2d(32, 64, 3, bias=False)
        self.bn2 = nn.BatchNorm2d(64)
        # do relu here

        self.block1 = Block(64, 128, 2, 2, start_with_relu=False, grow_first=True)
        self.block2 = Block(128, 256, 2, 2, start_with_relu=True, grow_first=True)
        self.block3 = Block(256, 728, 2, 2, start_with_relu=True, grow_first=True)

        self.block4 = Block(728, 728, 3, 1, start_with_relu=True, grow_first=True)
        self.block5 = Block(728, 728, 3, 1, start_with_relu=True, grow_first=True)
        self.block6 = Block(728, 728, 3, 1, start_with_relu=True, grow_first=True)
        self.block7 = Block(728, 728, 3, 1, start_with_relu=True, grow_first=True)

        self.block8 = Block(728, 728, 3, 1, start_with_relu=True, grow_first=True)
        self.block9 = Block(728, 728, 3, 1, start_with_relu=True, grow_first=True)
        self.block10 = Block(728, 728, 3, 1, start_with_relu=True, grow_first=True)
        self.block11 = Block(728, 728, 3, 1, start_with_relu=True, grow_first=True)

        self.block12 = Block(728, 1024, 2, 2, start_with_relu=True, grow_first=False)

        self.conv3 = SeparableConv2d(1024, 1536, 3, 1, 1)
        self.bn3 = nn.BatchNorm2d(1536)
        # do relu here

        self.conv4 = SeparableConv2d(1536, 2048, 3, 1, 1)
        self.bn4 = nn.BatchNorm2d(2048)

        self.fc = nn.Linear(2048, num_classes)

        # #------- init weights --------
        # for m in self.modules():
        #     if isinstance(m, nn.Conv2d):
        #         n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
        #         m.weight.data.normal_(0, math.sqrt(2. / n))
        #     elif isinstance(m, nn.BatchNorm2d):
        #         m.weight.data.fill_(1)
        #         m.bias.data.zero_()
        # #-----------------------------

    def features(self, input):
        x = self.conv1(input)
        x = self.bn1(x)
        x = self.relu(x)

        x = self.conv2(x)
        x = self.bn2(x)
        x = self.relu(x)

        x = self.block1(x)
        x = self.block2(x)
        x = self.block3(x)
        x = self.block4(x)
        x = self.block5(x)
        x = self.block6(x)
        x = self.block7(x)
        x = self.block8(x)
        x = self.block9(x)
        x = self.block10(x)
        x = self.block11(x)
        x = self.block12(x)

        x = self.conv3(x)
        x = self.bn3(x)
        x = self.relu(x)

        x = self.conv4(x)
        x = self.bn4(x)
        return x

    def logits(self, features):
        x = self.relu(features)
        x = F.adaptive_avg_pool2d(x, (1, 1))
        x = x.view(x.size(0), -1)
        x = self.last_linear(x)
        return x

    def forward(self, input):
        x = self.features(input)
        x = self.logits(x)
        return x

def xception(num_classes=1000, pretrained='imagenet'):
    model = Xception(num_classes=num_classes)
    if pretrained:
        settings = pretrained_settings['xception'][pretrained]
        assert num_classes == settings['num_classes'], \
            "num_classes should be {}, but is {}".format(settings['num_classes'], num_classes)

        model = Xception(num_classes=num_classes)
        model.load_state_dict(model_zoo.load_url(settings['url']))

        model.input_space = settings['input_space']
        model.input_size = settings['input_size']
        model.input_range = settings['input_range']
        model.mean = settings['mean']
        model.std = settings['std']

    # TODO: ugly
    model.last_linear = model.fc
    del model.fc
    return model

XCEPTION_MODEL = 'xception/xception-b5690688.pth'

def return_pytorch04_xception(pretrained=True):
    # Raises warning "src not broadcastable to dst" but that's fine
    model = xception(pretrained=False)
    if pretrained:
        # Load model in torch 0.4+
        model.fc = model.last_linear
        del model.last_linear
        state_dict = torch.load(XCEPTION_MODEL)
        for name, weights in state_dict.items():
            if 'pointwise' in name:
                state_dict[name] = weights.unsqueeze(-1).unsqueeze(-1)
        model.load_state_dict(state_dict)
        model.last_linear = model.fc
        del model.fc
    return model

model = return_pytorch04_xception()
model.last_linear = torch.nn.Linear(in_features=2048, out_features=2, bias=True)

learner = Learner(data, model, metrics=[accuracy])
learner.lr_find()
learner.recorder.plot()

# Train only the head of the network
learner.fit_one_cycle(5, 1e-3)

# Unfreeze the other layers and train the entire network
learner.unfreeze()
# NOTE: Not using discriminative learning rates!
learner.fit_one_cycle(20, max_lr=1e-4)
```

XceptionNet has a final accuracy of **83.5%** and a peak accuracy of **86.8%**.

## Results

|Network | Pretrained | Discriminative | Final Accuracy % | Peak Accuracy %| Time for 1 Epoch (s) |
|----------------|----------------|----------------|------------------|----------------|----------------------|
|`resnet18` | True | True | 85.6 | 86.8 | **5** |
|`resnet34` | True | True | 87.6 | 89.1 | 7 |
|`resnet50` | True | True | 91.1 | 91.1 | 13 |
|`xresnet18` | False | True | 65.0 | 68.4 | 6 |
|`xresnet34` | False | True | 68.9 | 73.4 | 8 |
|`xresnet50` | False | True | 71.1 | 72.7 | 14 |
|`efficientnetb0`| True | False | 91.1 | 92.7 | 12 |
|`efficientnetb1`| True | False | **91.9** | **93.7** | 15 |
|`efficientnetb2`| True | False | 89.4 | 90.1 | 16 |
| `mesonet` | False | False | 65.6 | 67.6 | **5** |
| `xceptionnet` | True | False | 83.5 | 86.8 | 21 |
# Developing an AI Application

Going forward, AI algorithms will be incorporated into more and more everyday applications. For example, you might want to include an image classifier in a smartphone app. To do this, you'd use a deep learning model trained on hundreds of thousands of images as part of the overall application architecture. A large part of software development in the future will be using these types of models as common parts of applications.

In this project, you'll train an image classifier to recognize different species of flowers. You can imagine using something like this in a phone app that tells you the name of the flower your camera is looking at. In practice, you'd train this classifier and then export it for use in your application. We'll be using [this dataset](http://www.robots.ox.ac.uk/~vgg/data/flowers/102/index.html), which contains 102 flower categories; you can see a few examples below.

<img src='assets/Flowers.png' width=500px>

The project is broken down into multiple steps:

* Load and preprocess the image dataset
* Train the image classifier on the dataset
* Use the trained classifier to predict image content

We'll lead you through each part, which you'll implement in Python.

When you've completed this project, you'll have an application that can be trained on any set of labeled images. Here your network will learn about flowers and end up as a command-line application. But what you do with your new skills depends on your imagination and the effort you put into building a dataset. For example, imagine an app that takes a picture of a car, tells you its make and model, then looks up information about it. Go build your own dataset and make something new.

First up, import the packages you'll need. It's good practice to keep all the imports at the beginning of your code. As you work through this notebook and find you need to import a package, make sure to add the import at the top.

```
# Imports here
%matplotlib inline

import matplotlib.pyplot as plt
import numpy as np
import time
import json
import torch
from torch import nn
from torch import optim
import torch.nn.functional as F
from torch.autograd import Variable
from torchvision import datasets, transforms, models
from collections import OrderedDict
from PIL import Image
```

## Loading the data

Here you'll use `torchvision` to load the data ([documentation](http://pytorch.org/docs/master/torchvision/transforms.html#)). The data should be included alongside this notebook; otherwise you can [download it here](https://s3.amazonaws.com/content.udacity-data.com/nd089/flower_data.tar.gz). The dataset is split into three parts: training, validation, and testing. For the training set, you'll want to apply transformations such as random scaling, cropping, and flipping. This helps the network generalize and leads to better performance. You'll also need to make sure the input data is resized to 224x224 pixels, as required by the pre-trained networks.

The validation and testing sets are used to measure the model's performance on data it hasn't seen yet. For these steps you don't want any scaling or rotation transformations, but you do need to crop the images to the appropriate size.

For all three sets you'll need to normalize the images to the means and standard deviations the network expects. The means are `[0.485, 0.456, 0.406]` and the standard deviations are `[0.229, 0.224, 0.225]`. This shifts the values of each color channel to lie between -1 and 1 instead of 0 and 1.

```
train_dir = 'flowers/train'
valid_dir = 'flowers/valid'
test_dir = 'flowers/test'

# TODO: Define your transforms for the training, validation, and testing sets
train_transforms = transforms.Compose([transforms.RandomRotation(30),
                                       transforms.RandomResizedCrop(224),
                                       transforms.RandomHorizontalFlip(),
                                       transforms.ToTensor(),
                                       transforms.Normalize([0.485, 0.456, 0.406],
                                                            [0.229, 0.224, 0.225])])

valid_transforms = transforms.Compose([transforms.Resize(256),
                                       transforms.CenterCrop(224),
                                       transforms.ToTensor(),
                                       transforms.Normalize([0.485, 0.456, 0.406],
                                                            [0.229, 0.224, 0.225])])

# TODO: Load the datasets with ImageFolder
train_dataset = datasets.ImageFolder(train_dir, transform=train_transforms)
valid_dataset = datasets.ImageFolder(valid_dir, transform=valid_transforms)
test_dataset = datasets.ImageFolder(test_dir, transform=valid_transforms)

# TODO: Using the image datasets and the transforms, define the dataloaders
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=32, shuffle=True)
valid_loader = torch.utils.data.DataLoader(valid_dataset, batch_size=32, shuffle=True)
test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=32, shuffle=True)
```

### Label mapping

You'll also need to load in a mapping from category label to category name. You can find this in the file `cat_to_name.json`. It's a JSON object, which you can read in with the [`json` module](https://docs.python.org/2/library/json.html). This gives you a dictionary mapping the integer-encoded categories to the actual names of the flowers.

```
with open('cat_to_name.json', 'r') as f:
    cat_to_name = json.load(f)
```

# Building and training the classifier

Now that the data is ready, it's time to build and train the classifier. As usual, you should use one of the pretrained models from `torchvision.models` to get the image features. Build and train a new feed-forward classifier using those features.

This part is up to you. If you want to talk through it with someone, feel free to discuss it with your fellow students! You can also ask questions on the forums or join the course managers and mentors in office hours.

Refer to [the rubric](https://review.udacity.com/#!/rubrics/1663/view) for guidance on successfully completing this section. You'll need to do the following:

* Load a [pre-trained network](http://pytorch.org/docs/master/torchvision/models.html) (if you need a starting point, the VGG networks are recommended for being simple and easy to use)
* Define a new, untrained feed-forward network as a classifier, using ReLU activations and dropout
* Train the classifier layers using backpropagation, using the pre-trained network to get the features
* Track the loss and accuracy on the validation set to determine the best hyperparameters

We've left a cell open for you below, but feel free to use more cells. We recommend breaking the problem up into smaller parts you can run separately. Check that each part is doing what you expect, then move on to the next. You'll likely find that as you implement each part, you need to go back and modify your previous code. This is totally normal!
When training, make sure you're updating only the weights of the feed-forward network. You should be able to get the validation accuracy above 70% if you build everything right. Make sure to try different hyperparameters (learning rate, units in the classifier, epochs, etc.) to find the best model. Save those hyperparameters to use as defaults in the next part of the project.

```
# Load AlexNet as my model
model = models.alexnet(pretrained=True)
model

# Freeze parameters so we don't backprop through them
for p in model.parameters():
    p.requires_grad = False

classifier = nn.Sequential(OrderedDict([
    ('fc1', nn.Linear(9216, 1000)),
    ('relu', nn.ReLU()),
    ('fc2', nn.Linear(1000, 102)),
    ('output', nn.LogSoftmax(dim=1))
]))
model.classifier = classifier

# Train a model with a pre-trained network
criterion = nn.NLLLoss()
optimizer = optim.Adam(model.classifier.parameters(), lr=0.001)

epochs = 3
print_every = 40
steps = 0

# change to cuda
model.to('cuda')

for e in range(epochs):
    model.train()
    running_loss = 0
    for ii, (inputs, labels) in enumerate(train_loader):
        steps += 1
        inputs, labels = inputs.to('cuda'), labels.to('cuda')
        optimizer.zero_grad()

        # Forward and backward passes
        outputs = model.forward(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()

        if steps % print_every == 0:
            model.eval()
            correct = 0
            total = 0
            test_loss = 0
            with torch.no_grad():
                for data in valid_loader:
                    images, labels = data
                    images = images.to('cuda')
                    labels = labels.to('cuda')
                    outputs = model.forward(images)
                    test_loss += criterion(outputs, labels).item()
                    _, predicted = torch.max(outputs.data, 1)
                    total += labels.size(0)
                    correct += (predicted == labels).sum().item()

            print("Epoch: {}/{}... ".format(e+1, epochs),
                  "Training Loss: {:.4f}".format(running_loss/print_every),
                  " Test Loss: {:.4f}".format(test_loss/len(valid_loader)),
                  " Accuracy: {:.4f}".format(correct / total))
            running_loss = 0
            model.train()
```

## Testing your network

It's good practice to test your trained network on test data, images the network has never seen in either training or validation. This gives you a good estimate of the model's performance on completely new images. Run the test images through the network and measure the accuracy the same way you did for validation. You should be able to reach around 70% accuracy on the test set if the model has been trained well.

```
correct = 0
total = 0
with torch.no_grad():
    for data in test_loader:
        images, labels = data
        images = images.to('cuda')
        labels = labels.to('cuda')
        outputs = model(images)
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()

print('accuracy on the test dataset=', correct / total)
```

## Save the checkpoint

Now that your network is trained, save the model so you can load it later for making predictions. You probably want to save other things as well, such as the mapping of classes to indices, which you get from one of the image datasets: `image_datasets['train'].class_to_idx`. You can attach this to the model as an attribute, which makes inference easier later on.

Remember that you'll want to completely rebuild the model later so you can use it for inference, so make sure to include any information you need in the checkpoint. If you want to load the model and keep training, you'll also want to save the number of epochs and the optimizer state, `optimizer.state_dict`. You'll likely want to use this trained model in the next part of the project, so it's best to save it now.

```
# TODO: Save the checkpoint
checkpoint = {'class_to_idx': train_dataset.class_to_idx,
              'classifier_input_size': 9216,
              'output_size': 102,
              'classifier_hidden_layers': 1000,
              'state_dict': model.state_dict()}

torch.save(checkpoint, 'checkpoint.pth')
```

## Loading the checkpoint

At this point it's good to write a function that can load a checkpoint and rebuild the model. That way you can come back to this project and keep working on it without having to retrain the network.

```
# TODO: Write a function that loads a checkpoint and rebuilds the model
def load_checkpoint(filepath):
    checkpoint = torch.load(filepath)
    model = models.alexnet(pretrained=True)
    classifier = nn.Sequential(OrderedDict([
        ('fc1', nn.Linear(checkpoint['classifier_input_size'], checkpoint['classifier_hidden_layers'])),
        ('relu', nn.ReLU()),
        ('fc2', nn.Linear(checkpoint['classifier_hidden_layers'], checkpoint['output_size'])),
        ('output', nn.LogSoftmax(dim=1))
    ]))
    model.classifier = classifier
    model.load_state_dict(checkpoint['state_dict'])
    model.class_to_idx = checkpoint['class_to_idx']
    return model

m = load_checkpoint('checkpoint.pth')
m
```

# Inference for classification

Now you'll write a function that uses the trained network for inference. That is, you'll pass an image into the network and predict the class of the flower in the image. Write a function called `predict` that takes an image and a model, then returns the top $K$ most likely classes along with their probabilities. It should look like this:

```
probs, classes = predict(image_path, model)
print(probs)
print(classes)
> [ 0.01558163  0.01541934  0.01452626  0.01443549  0.01407339]
> ['70', '3', '45', '62', '55']
```

First, you'll need to process the input image so it can be used in your network.

## Image processing

You'll want to use `PIL` to load the image ([documentation](https://pillow.readthedocs.io/en/latest/reference/Image.html)). It's best to write a function that preprocesses the image so it can be used as input for the model. This function should process the images the same way they were processed for training.

First, resize the image so the shortest side is 256 pixels, keeping the aspect ratio. This can be done with the [`thumbnail`](http://pillow.readthedocs.io/en/3.1.x/reference/Image.html#PIL.Image.Image.thumbnail) or [`resize`](http://pillow.readthedocs.io/en/3.1.x/reference/Image.html#PIL.Image.Image.thumbnail) methods. Then you'll need to crop out the center 224x224 portion of the image.

Color channels of images are typically encoded as integers 0-255, but the model expects floats 0-1, so you'll need to convert the values. It's easiest with a Numpy array, which you can get from a PIL image like so: `np_image = np.array(pil_image)`.

As before, the network expects the images to be normalized in a specific way. The means should be normalized to `[0.485, 0.456, 0.406]` and the standard deviations to `[0.229, 0.224, 0.225]`. You'll want to subtract the means from each color channel, then divide by the standard deviation.

Finally, PyTorch expects the color channel to be the first dimension, but it's the third dimension in PIL images and Numpy arrays. You can reorder dimensions using [`ndarray.transpose`](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.ndarray.transpose.html). The color channel needs to come first, and the order of the other two dimensions must be retained.

```
def process_image(image):
    im = Image.open(image)
    image = valid_transforms(im)
    return image.numpy()
```

To check your work, the function below converts a PyTorch tensor and displays it in the notebook. If your `process_image` function works, running its output through this function should return the original image (except for the cropped-out portions).

```
def imshow(image, ax=None, title=None):
    """Imshow for Tensor."""
    if ax is None:
        fig, ax = plt.subplots()

    # PyTorch tensors assume the color channel is the first dimension
    # but matplotlib assumes it is the third dimension
    image = image.numpy().transpose((1, 2, 0))

    # Undo preprocessing
    mean = np.array([0.485, 0.456, 0.406])
    std = np.array([0.229, 0.224, 0.225])
    image = std * image + mean

    # Image needs to be clipped between 0 and 1 or it looks like noise when displayed
    image = np.clip(image, 0, 1)

    ax.imshow(image)

    return ax

np_image = process_image('flowers/test/1/image_06743.jpg')
image = torch.from_numpy(np_image)
imshow(image)
```

## Class prediction

Once you can get images in the correct format, it's time to write a function for making predictions with your model. To get the top $K$ values in a tensor, use [`x.topk(k)`](http://pytorch.org/docs/master/torch.html#torch.topk). This method returns the `k` highest probabilities along with the corresponding class indices. You need to convert these indices to the actual class labels using `class_to_idx` (which hopefully you added to the model), or from the [`ImageFolder`](https://pytorch.org/docs/master/torchvision/datasets.html?highlight=imagefolder#torchvision.datasets.ImageFolder) you used to load the data. Make sure to invert the dictionary so you get a mapping from index to class.

Again, this method should take an image path and a model checkpoint, then return the probabilities and classes.

```
def predict(image_path, model, topk=5):
    ''' Predict the class (or classes) of an image using a trained deep learning model.
    '''
    # TODO: Implement the code to predict the class from an image file
    np_image = process_image(image_path)
    image = torch.from_numpy(np_image)
    image.unsqueeze_(0)
    model.eval()
    output = model(image)
    x = torch.topk(output, topk)
    list_of_class = {}
    np_log_probs = x[0][0].detach().numpy()
    tags = x[1][0].detach().numpy()
    for i in range(topk):
        for classes, idx in model.class_to_idx.items():
            if idx == tags[i]:
                list_of_class[classes] = np.exp(np_log_probs[i])
    return list_of_class

predict('flowers/valid/99/image_08063.jpg', m, 5)
```

## Sanity checking

Now that you can use the trained model for predictions, check that the model's performance makes sense. Even if the testing accuracy is high, it's always good to check for obvious bugs. Use `matplotlib` to plot the probabilities of the top 5 classes as a bar graph, along with the input image. It should look like this:

<img src='assets/inference_example.png' width=300px>

You can convert from the class integer encoding to actual flower names with the `cat_to_name.json` file (which should have been loaded earlier in the notebook). To show a PyTorch tensor as an image, use the `imshow` function defined above.

```
# TODO: Display an image along with the top 5 classes
path = 'flowers/valid/99/image_08063.jpg'
im = torch.from_numpy(process_image(path))
imshow(im)

def showbarplot(cat_to_name, dictionary):
    name_of_class = []
    probs = []
    for classes, prob in dictionary.items():
        name_of_class.append(cat_to_name[classes])
        probs.append(prob)
    plt.bar(name_of_class, probs);
    plt.xticks(rotation=30);
    plt.xlabel('type of flowers');
    plt.ylabel('probabilities');
    plt.title('Top guesses of flower categories and their probabilities');

list_of_class = predict('flowers/valid/99/image_08063.jpg', m, 5)
showbarplot(cat_to_name, list_of_class)
```
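The inner loop in `predict` above scans `class_to_idx` once for every predicted index. A common alternative, sketched here with a made-up stand-in for `model.class_to_idx`, is to invert the dictionary once so each index lookup is constant time:

```python
# Invert a class_to_idx mapping so predicted indices can be looked up directly.
# The mapping below is a hypothetical stand-in for model.class_to_idx.
class_to_idx = {'1': 0, '10': 1, '100': 2, '101': 3}
idx_to_class = {idx: cls for cls, idx in class_to_idx.items()}

top_indices = [2, 0, 3]  # e.g. indices returned by x.topk(k)
top_classes = [idx_to_class[i] for i in top_indices]
print(top_classes)       # ['100', '1', '101']
```

Inverting once turns the O(k·n) nested loop into an O(n) build plus O(k) lookups.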
# Machine Learning for Telecom with Naive Bayes

# Introduction

Machine Learning for CallDisconnectReason is a notebook which demonstrates exploration of the dataset and CallDisconnectReason classification with the Spark ML Naive Bayes algorithm.

```
from pyspark.sql.types import *
from pyspark.sql import SparkSession
from sagemaker import get_execution_role
import sagemaker_pyspark

role = get_execution_role()

# Configure Spark to use the SageMaker Spark dependency jars
jars = sagemaker_pyspark.classpath_jars()
classpath = ":".join(sagemaker_pyspark.classpath_jars())

spark = SparkSession.builder.config("spark.driver.extraClassPath", classpath)\
    .master("local[*]").getOrCreate()
```

S3 Select enables applications to retrieve only a subset of data from an object by using simple SQL expressions. By using S3 Select to retrieve only the data it needs, an application can achieve drastic performance increases; in many cases you can get as much as a 400% improvement.

- _We first read a parquet-compressed version of the CDR dataset, which has already been processed by Glue, using S3 Select._

```
cdr_start_loc = "<%CDRStartFile%>"
cdr_stop_loc = "<%CDRStopFile%>"
cdr_start_sample_loc = "<%CDRStartSampleFile%>"
cdr_stop_sample_loc = "<%CDRStopSampleFile%>"

df = spark.read.format("s3select").parquet(cdr_stop_sample_loc)
df.createOrReplaceTempView("cdr")

durationDF = spark.sql("SELECT _c13 as CallServiceDuration FROM cdr where _c0 = 'STOP'")
durationDF.count()
```

# Exploration of Data

- _We see how we can explore and visualize the dataset used for processing. Here we create a bar chart representation of CallServiceDuration from the CDR dataset._

```
import matplotlib.pyplot as plt

durationpd = durationDF.toPandas().astype(int)
durationpd.plot(kind='bar', stacked=True, width=1)
```

- _We can represent the data and visualize it with a box plot. The box extends from the lower to upper quartile values of the data, with a line at the median._

```
color = dict(boxes='DarkGreen', whiskers='DarkOrange', medians='DarkBlue', caps='Gray')
durationpd.plot.box(color=color, sym='r+')

from pyspark.sql.functions import col
durationDF = durationDF.withColumn("CallServiceDuration", col("CallServiceDuration").cast(DoubleType()))
```

- _We can represent the data and visualize it with histograms partitioned into different bins._

```
import matplotlib.pyplot as plt

bins, counts = durationDF.select('CallServiceDuration').rdd.flatMap(lambda x: x).histogram(durationDF.count())
plt.hist(bins[:-1], bins=bins, weights=counts, color=['green'])

sqlDF = spark.sql("SELECT _c2 as Accounting_ID, _c19 as Calling_Number, _c20 as Called_Number, _c14 as CallDisconnectReason FROM cdr where _c0 = 'STOP'")
sqlDF.show()
```

# Featurization

```
from pyspark.ml.feature import StringIndexer

accountIndexer = StringIndexer(inputCol="Accounting_ID", outputCol="AccountingIDIndex")
accountIndexer.setHandleInvalid("skip")
tempdf1 = accountIndexer.fit(sqlDF).transform(sqlDF)

callingNumberIndexer = StringIndexer(inputCol="Calling_Number", outputCol="Calling_NumberIndex")
callingNumberIndexer.setHandleInvalid("skip")
tempdf2 = callingNumberIndexer.fit(tempdf1).transform(tempdf1)

calledNumberIndexer = StringIndexer(inputCol="Called_Number", outputCol="Called_NumberIndex")
calledNumberIndexer.setHandleInvalid("skip")
tempdf3 = calledNumberIndexer.fit(tempdf2).transform(tempdf2)

# Convert target into numerical categories
labelIndexer = StringIndexer(inputCol="CallDisconnectReason", outputCol="label")
labelIndexer.setHandleInvalid("skip")

from pyspark.sql.functions import rand

trainingFraction = 0.75
testingFraction = (1 - trainingFraction)
seed = 1234

trainData, testData = tempdf3.randomSplit([trainingFraction, testingFraction], seed=seed)

# CACHE TRAIN AND TEST DATA
trainData.cache()
testData.cache()
trainData.count(), testData.count()
```

# Analyzing the label distribution

- We analyze the distribution of our target labels using a histogram, where 16 represents Normal_Call_Clearing.

```
import matplotlib.pyplot as plt

negcount = trainData.filter("CallDisconnectReason != 16").count()
poscount = trainData.filter("CallDisconnectReason == 16").count()
negfrac = 100*float(negcount)/float(negcount+poscount)
posfrac = 100*float(poscount)/float(poscount+negcount)

ind = [0.0, 1.0]
frac = [negfrac, posfrac]
width = 0.35

plt.title('Label Distribution')
plt.bar(ind, frac, width, color='r')
plt.xlabel("CallDisconnectReason")
plt.ylabel('Percentage share')
plt.xticks(ind, ['0.0', '1.0'])
plt.show()

negcount = testData.filter("CallDisconnectReason != 16").count()
poscount = testData.filter("CallDisconnectReason == 16").count()
negfrac = 100*float(negcount)/float(negcount+poscount)
posfrac = 100*float(poscount)/float(poscount+negcount)

ind = [0.0, 1.0]
frac = [negfrac, posfrac]
width = 0.35

plt.title('Label Distribution')
plt.bar(ind, frac, width, color='r')
plt.xlabel("CallDisconnectReason")
plt.ylabel('Percentage share')
plt.xticks(ind, ['0.0', '1.0'])
plt.show()

from pyspark.ml.feature import VectorAssembler

vecAssembler = VectorAssembler(inputCols=["AccountingIDIndex", "Calling_NumberIndex", "Called_NumberIndex"], outputCol="features")
```

__Spark ML Naive Bayes__: Naive Bayes is a simple multiclass classification algorithm which assumes independence between every pair of features. Naive Bayes can be trained very efficiently. Within a single pass over the training data, it computes the conditional probability distribution of each feature given the label; it then applies Bayes' theorem to compute the conditional probability distribution of the label given an observation, and uses that for prediction.
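To make the single-pass training and Bayes'-theorem prediction concrete, here is a minimal multinomial Naive Bayes on a toy dataset. This is plain Python, not Spark, and the toy "documents" are made up; the count-based likelihoods and Laplace smoothing mirror what `NaiveBayes(smoothing=1.0, modelType="multinomial")` computes internally:

```python
import math
from collections import Counter

# Toy training set: documents are bags of token counts, labels are 0/1.
train = [({'drop': 2, 'busy': 1}, 0),
         ({'drop': 1, 'error': 2}, 0),
         ({'ok': 3}, 1),
         ({'ok': 2, 'busy': 1}, 1)]

vocab = sorted({w for doc, _ in train for w in doc})
labels = sorted({y for _, y in train})

# Single pass over the data: per-class feature counts and class priors.
feat_counts = {y: Counter() for y in labels}
class_counts = Counter()
for doc, y in train:
    feat_counts[y].update(doc)
    class_counts[y] += 1

def predict(doc, smoothing=1.0):
    """Pick the label maximizing log P(label) + sum n_w * log P(w | label)."""
    best, best_lp = None, -math.inf
    for y in labels:
        total = sum(feat_counts[y].values())
        lp = math.log(class_counts[y] / len(train))  # log prior
        for w, n in doc.items():                     # log likelihoods
            p = (feat_counts[y][w] + smoothing) / (total + smoothing * len(vocab))
            lp += n * math.log(p)
        if lp > best_lp:
            best, best_lp = y, lp
    return best

print(predict({'drop': 1, 'error': 1}))  # 0
print(predict({'ok': 1}))                # 1
```

Because only counts are accumulated, training is one pass over the data, which is what makes Naive Bayes so cheap to fit at Spark scale.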
- _We use the Spark ML Naive Bayes algorithm and a Spark `Pipeline` to train on the data set._

```
from pyspark.ml.classification import NaiveBayes
from pyspark.ml import Pipeline

# Define a NaiveBayes estimator
nb = NaiveBayes(smoothing=1.0, modelType="multinomial")

# Chain labelIndexer, vecAssembler and the NB estimator in a Pipeline
pipeline = Pipeline(stages=[labelIndexer, vecAssembler, nb])

# Run the stages in the pipeline and train the model
model = pipeline.fit(trainData)

# Run inference on the test data and show some results
predictions = model.transform(testData)
predictions.printSchema()
predictions.show()

predictiondf = predictions.select("label", "prediction", "probability")
pddf_pred = predictions.toPandas()
pddf_pred
```

- _We use a scatter plot to visualize the predictions._

```
import matplotlib.pyplot as plt

# Plot CDR predictions: calling vs. called number index, colored by predicted class
plt.figure(figsize=(14, 7))
plt.scatter(pddf_pred.Calling_NumberIndex, pddf_pred.Called_NumberIndex, c=pddf_pred.prediction)
plt.title('CallDetailRecord')
plt.show()
```

# Evaluation

```
from pyspark.ml.evaluation import MulticlassClassificationEvaluator

evaluator = MulticlassClassificationEvaluator(labelCol="label", predictionCol="prediction", metricName="accuracy")
accuracy = evaluator.evaluate(predictiondf)
print(accuracy)
```

# Confusion Matrix

```
from sklearn.metrics import confusion_matrix
import pandas as pd
import matplotlib.pyplot as plt

# Collect (prediction, label) pairs to the driver
outdataframe = predictiondf.select("prediction", "label")
pandadf = outdataframe.toPandas()
npmat = pandadf.values
predicted_label = npmat[:, 0]   # first selected column is the prediction
labels = npmat[:, 1]            # second selected column is the true label
cnf_matrix = confusion_matrix(labels, predicted_label)

import numpy as np
import itertools

def plot_confusion_matrix(cm, target_names, title='Confusion matrix', cmap=None, normalize=True):
    accuracy = np.trace(cm) / float(np.sum(cm))
    misclass = 1 - accuracy

    if cmap is None:
        cmap = plt.get_cmap('Blues')

    plt.figure(figsize=(8, 6))
    plt.imshow(cm, interpolation='nearest', cmap=cmap)
    plt.title(title)
    plt.colorbar()

    if target_names is not None:
        tick_marks = np.arange(len(target_names))
        plt.xticks(tick_marks, target_names, rotation=45)
        plt.yticks(tick_marks, target_names)

    if normalize:
        cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]

    thresh = cm.max() / 1.5 if normalize else cm.max() / 2
    for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
        if normalize:
            plt.text(j, i, "{:0.4f}".format(cm[i, j]), horizontalalignment="center",
                     color="white" if cm[i, j] > thresh else "black")
        else:
            plt.text(j, i, "{:,}".format(cm[i, j]), horizontalalignment="center",
                     color="white" if cm[i, j] > thresh else "black")

    plt.tight_layout()
    plt.ylabel('True label')
    plt.xlabel('Predicted label\naccuracy={:0.4f}; misclass={:0.4f}'.format(accuracy, misclass))
    plt.show()

plot_confusion_matrix(cnf_matrix, normalize=False, target_names=['Positive', 'Negative'], title="Confusion Matrix")

from pyspark.mllib.evaluation import MulticlassMetrics

# Create (prediction, label) pairs
predictionAndLabel = predictiondf.select("prediction", "label").rdd

# Generate the confusion matrix
metrics = MulticlassMetrics(predictionAndLabel)
print(metrics.confusionMatrix())
```

# Cross Validation

```
from pyspark.ml.tuning import ParamGridBuilder, CrossValidator

# Create a ParamGrid and Evaluator for cross validation
paramGrid = ParamGridBuilder().addGrid(nb.smoothing, [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]).build()
cvEvaluator = MulticlassClassificationEvaluator(metricName="accuracy")

# Run cross-validation
cv = CrossValidator(estimator=pipeline, estimatorParamMaps=paramGrid, evaluator=cvEvaluator)
cvModel = cv.fit(trainData)

# Make predictions on testData; cvModel uses the best model found
cvPredictions = cvModel.transform(testData)
cvPredictions.select("label", "prediction", "probability").show()

# Evaluate the best model found by cross validation
evaluator.evaluate(cvPredictions)
```
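Conceptually, `CrossValidator` trains on k−1 folds for each candidate parameter value, scores the held-out fold, averages the scores, and keeps the best setting. A minimal pure-Python sketch of that loop — the toy data, the shrinkage "model", and the scorer are invented for illustration and are not Spark's API:

```python
def cross_validate(data, param_grid, train_fn, score_fn, k=3):
    # For each parameter value: train on k-1 folds, score the held-out fold,
    # and average the scores; return the best parameter and its average score.
    fold_size = len(data) // k
    best_param, best_score = None, float("-inf")
    for param in param_grid:
        scores = []
        for i in range(k):
            held_out = data[i * fold_size:(i + 1) * fold_size]
            train = data[:i * fold_size] + data[(i + 1) * fold_size:]
            model = train_fn(train, param)
            scores.append(score_fn(model, held_out))
        avg = sum(scores) / k
        if avg > best_score:
            best_param, best_score = param, avg
    return best_param, best_score

# Toy "model": the training mean shrunk toward zero by the parameter,
# scored by negative mean squared error on the held-out fold
data = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
train_fn = lambda train, lam: (sum(train) / len(train)) / (1 + lam)
score_fn = lambda m, held_out: -sum((x - m) ** 2 for x in held_out) / len(held_out)

best, _ = cross_validate(data, [0.0, 0.2, 0.4], train_fn, score_fn)
print(best)  # → 0.2
```

Spark's `CrossValidator` does the same thing per entry in the `ParamGridBuilder` grid, just with distributed training and scoring; `cvModel.bestModel` holds the winner refit on all of the training data.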
# Feature Engineering with PySpark

## Exploratory Data Analysis

```
import pyspark as sp
sp.version

import sys
print(sys.version_info)
sys.version

import os
os.environ["JAVA_HOME"] = "/Library/Java/JavaVirtualMachines/jdk1.8.0_151.jdk/Contents/Home"

sc = sp.SparkContext.getOrCreate()
sc.version

# Spark session
# Import SparkSession from pyspark.sql
from pyspark.sql import SparkSession

# Create a session as spark
spark = SparkSession.builder.getOrCreate()

df = spark.read.csv('2017_StPaul_MN_Real_Estate.csv', header=True)
df.columns
df.count()
df.dtypes
```

### What are we predicting?

```
# Select our dependent variable
Y_df = df.select(['SalesClosePrice'])

# Display summary statistics
Y_df.describe().show()
```

Looks like we need to convert the data type of SalesClosePrice:

```
# Convert the data type of SalesClosePrice to integer
df = df.withColumn("SalesClosePrice", df.SalesClosePrice.cast("integer"))
df.select('SalesClosePrice').describe().show()

df = df.withColumn("AssessedValuation", df.AssessedValuation.cast("double"))
df = df.withColumn("AssociationFee", df.AssociationFee.cast("bigint"))
df = df.withColumn("SQFTBELOWGROUND", df.SQFTBELOWGROUND.cast("bigint"))

required_dtypes = [('NO', 'bigint'), ('MLSID', 'string'), ('STREETNUMBERNUMERIC', 'bigint'),
    ('STREETADDRESS', 'string'), ('STREETNAME', 'string'), ('POSTALCODE', 'bigint'),
    ('STATEORPROVINCE', 'string'), ('CITY', 'string'), ('SALESCLOSEPRICE', 'bigint'),
    ('LISTDATE', 'string'), ('LISTPRICE', 'bigint'), ('LISTTYPE', 'string'),
    ('ORIGINALLISTPRICE', 'bigint'), ('PRICEPERTSFT', 'double'), ('FOUNDATIONSIZE', 'bigint'),
    ('FENCE', 'string'), ('MAPLETTER', 'string'), ('LOTSIZEDIMENSIONS', 'string'),
    ('SCHOOLDISTRICTNUMBER', 'string'), ('DAYSONMARKET', 'bigint'), ('OFFMARKETDATE', 'string'),
    ('FIREPLACES', 'bigint'), ('ROOMAREA4', 'string'), ('ROOMTYPE', 'string'), ('ROOF', 'string'),
    ('ROOMFLOOR4', 'string'), ('POTENTIALSHORTSALE', 'string'), ('POOLDESCRIPTION', 'string'),
    ('PDOM', 'bigint'),
    ('GARAGEDESCRIPTION', 'string'), ('SQFTABOVEGROUND', 'bigint'), ('TAXES', 'bigint'),
    ('ROOMFLOOR1', 'string'), ('ROOMAREA1', 'string'), ('TAXWITHASSESSMENTS', 'double'),
    ('TAXYEAR', 'bigint'), ('LIVINGAREA', 'bigint'), ('UNITNUMBER', 'string'),
    ('YEARBUILT', 'bigint'), ('ZONING', 'string'), ('STYLE', 'string'), ('ACRES', 'double'),
    ('COOLINGDESCRIPTION', 'string'), ('APPLIANCES', 'string'), ('BACKONMARKETDATE', 'double'),
    ('ROOMFAMILYCHAR', 'string'), ('ROOMAREA3', 'string'), ('EXTERIOR', 'string'),
    ('ROOMFLOOR3', 'string'), ('ROOMFLOOR2', 'string'), ('ROOMAREA2', 'string'),
    ('DININGROOMDESCRIPTION', 'string'), ('BASEMENT', 'string'), ('BATHSFULL', 'bigint'),
    ('BATHSHALF', 'bigint'), ('BATHQUARTER', 'bigint'), ('BATHSTHREEQUARTER', 'double'),
    ('CLASS', 'string'), ('BATHSTOTAL', 'bigint'), ('BATHDESC', 'string'),
    ('ROOMAREA5', 'string'), ('ROOMFLOOR5', 'string'), ('ROOMAREA6', 'string'),
    ('ROOMFLOOR6', 'string'), ('ROOMAREA7', 'string'), ('ROOMFLOOR7', 'string'),
    ('ROOMAREA8', 'string'), ('ROOMFLOOR8', 'string'), ('BEDROOMS', 'bigint'),
    ('SQFTBELOWGROUND', 'bigint'), ('ASSUMABLEMORTGAGE', 'string'), ('ASSOCIATIONFEE', 'bigint'),
    ('ASSESSMENTPENDING', 'string'), ('ASSESSEDVALUATION', 'double')]

old_columns = df.columns
new_columns = [c for c, d in required_dtypes]
for n, o in zip(new_columns, old_columns):
    df = df.withColumnRenamed(o, n)
```

### Verifying Data Load

```
def check_load(df, num_records, num_columns):
    # Takes a dataframe and compares record and column counts to input
    # Message to return if the criteria below aren't met
    message = 'Validation Failed'
    # Check number of records
    if num_records == df.count():
        # Check number of columns
        if num_columns == len(df.columns):
            # Success message
            message = 'Validation Passed'
    return message

# Print the data validation message
print(check_load(df, 5000, 74))
```

### Verifying DataTypes

```
validation_dict = {'ASSESSMENTPENDING': 'string',
                   'ASSESSEDVALUATION': 'double',
                   'ASSOCIATIONFEE': 'bigint',
                   'ASSUMABLEMORTGAGE':
                   'string',
                   'SQFTBELOWGROUND': 'bigint'}

# Create a list of actual dtypes to check
actual_dtypes_list = df.dtypes

# Iterate through the list of actual dtype tuples
for attribute_tuple in actual_dtypes_list:
    # Check if the column name is in the dictionary of expected dtypes
    col_name = attribute_tuple[0]
    if col_name in validation_dict:
        # Compare attribute types
        col_type = attribute_tuple[1]
        if col_type == validation_dict[col_name]:
            print(col_name + ' has expected dtype.')
```

### Using `corr()`

```
# Cast the non-string columns to their required types
for required_type, current_column in zip(required_dtypes, df.columns):
    # The required and current column names are in the same order, so we can do:
    if required_type[1] != 'string':
        df = df.withColumn(current_column, df[current_column].cast(required_type[1]))

check_columns = ['FOUNDATIONSIZE', 'DAYSONMARKET', 'FIREPLACES', 'PDOM', 'SQFTABOVEGROUND',
                 'TAXES', 'TAXWITHASSESSMENTS', 'TAXYEAR', 'LIVINGAREA', 'YEARBUILT', 'ACRES',
                 'BACKONMARKETDATE', 'BATHSFULL', 'BATHSHALF', 'BATHQUARTER',
                 'BATHSTHREEQUARTER', 'BATHSTOTAL', 'BEDROOMS', 'SQFTBELOWGROUND',
                 'ASSOCIATIONFEE', 'ASSESSEDVALUATION']

# Name and value of the column with the maximum correlation
corr_max = 0
corr_max_col = check_columns[0]

# Loop over all columns in the list
for col in check_columns:
    # Check the correlation of a pair of columns
    corr_val = df.corr(col, 'SALESCLOSEPRICE')
    # Logic to compare corr_max with the current corr_val
    if corr_val > corr_max:
        # Update the column name and corr value
        corr_max = corr_val
        corr_max_col = col

print(corr_max_col)
```

### Using Visualizations: distplot

```
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns

# Select a single column, sample 50% without replacement (seed 42), and convert to pandas
sample_df = df.select(['LISTPRICE']).sample(False, .5, 42)
pandas_df = sample_df.toPandas()

# Plot the distribution of pandas_df and display the plot
sns.distplot(pandas_df)
plt.show()

# Import the skewness function
from pyspark.sql.functions import skewness

# Compute and print the skewness of LISTPRICE
print(df.agg({'LISTPRICE': 'skewness'}).collect())
```

We can use the skewness function to verify this numerically rather than visually.

### Using Visualizations: lmplot

```
# Select the relevant columns and sample
sample_df = df.select(['SALESCLOSEPRICE', 'LIVINGAREA']).sample(False, .5, 42)

# Convert to a pandas dataframe
pandas_df = sample_df.toPandas()

# Linear model plot of pandas_df
sns.lmplot(x='LIVINGAREA', y='SALESCLOSEPRICE', data=pandas_df)
```

We can see that as LIVINGAREA increases, the price of the home increases at a relatively steady rate.

## Wrangling with Spark Functions

### Dropping a list of columns

```
# List of columns to remove from the dataset
cols_to_drop = ['STREETNUMBERNUMERIC', 'LOTSIZEDIMENSIONS']

# Drop the columns in the list
df = df.drop(*cols_to_drop)
```

We can always come back to these after our initial model if we need more information.
### Using text filters to remove records

```
# Inspect unique values in the column 'ASSUMABLEMORTGAGE'
df.select(['ASSUMABLEMORTGAGE']).distinct().show()

# List of possible values containing 'yes'
yes_values = ['Yes w/ Qualifying', 'Yes w/No Qualifying']

# Filter the text values out of df but keep null values
text_filter = ~df['ASSUMABLEMORTGAGE'].isin(yes_values) | df['ASSUMABLEMORTGAGE'].isNull()
df = df.where(text_filter)

# Print the count of remaining records
print(df.count())
```

### Filtering numeric fields conditionally

```
from pyspark.sql.functions import log

df = df.withColumn('log_SalesClosePrice', log('SalesClosePrice'))

from pyspark.sql.functions import mean, stddev

# Calculate the values used for outlier filtering
mean_val = df.agg({'log_SalesClosePrice': 'mean'}).collect()[0][0]
stddev_val = df.agg({'log_SalesClosePrice': 'stddev'}).collect()[0][0]

# Create three standard deviation (μ ± 3σ) lower and upper bounds for the data
low_bound = mean_val - (3 * stddev_val)
hi_bound = mean_val + (3 * stddev_val)

# Filter the data to fit between the lower and upper bounds
df = df.where((df['log_SalesClosePrice'] < hi_bound) & (df['log_SalesClosePrice'] > low_bound))
```

### Custom Percentage Scaling

```
from pyspark.sql.functions import round

# Define the max and min values and collect them
max_days = df.agg({'DAYSONMARKET': 'max'}).collect()[0][0]
min_days = df.agg({'DAYSONMARKET': 'min'}).collect()[0][0]

# Create a new column based off the scaled data; multiply by 100 before
# rounding so the result spans 0-100
df = df.withColumn('percentage_scaled_days', round((df['DAYSONMARKET'] - min_days) / (max_days - min_days) * 100))

# Calculate the max and min for the new column
print(df.agg({'percentage_scaled_days': 'max'}).collect())
print(df.agg({'percentage_scaled_days': 'min'}).collect())
```

### Scaling your scalers

```
def min_max_scaler(df, cols_to_scale):
    # Takes a dataframe and list of columns to min-max scale. Returns a dataframe.
    for col in cols_to_scale:
        # Define the min and max values and collect them
        col_max = df.agg({col: 'max'}).collect()[0][0]
        col_min = df.agg({col: 'min'}).collect()[0][0]
        new_column_name = 'scaled_' + col
        # Create a new column based off the scaled data
        df = df.withColumn(new_column_name, (df[col] - col_min) / (col_max - col_min))
    return df

df = min_max_scaler(df, ['FOUNDATIONSIZE', 'DAYSONMARKET', 'FIREPLACES'])

# Show that our data is now between 0 and 1
df[['DAYSONMARKET', 'scaled_DAYSONMARKET']].show()
```

### Correcting Right Skew Data

```
# Compute the skewness
print(df.agg({'YEARBUILT': 'skewness'}).collect())

# Calculate the max year
max_year = df.agg({'YEARBUILT': 'max'}).collect()[0][0]

# Create a new column of reflected data
df = df.withColumn('Reflect_YearBuilt', (max_year + 1) - df['YEARBUILT'])

# Create a new column based on the reflected data
df = df.withColumn('adj_yearbuilt', 1 / log(df['Reflect_YearBuilt']))
```

What you've seen here are only a few of the ways that you might try to make your data fit a normal distribution.

### Visualizing Missing Data

```
columns = ['APPLIANCES', 'BACKONMARKETDATE', 'ROOMFAMILYCHAR', 'BASEMENT', 'DININGROOMDESCRIPTION']
df.select(columns).show()

# Sample the dataframe and convert to pandas
sample_df = df.select(columns).sample(False, 0.5, 42)
pandas_df = sample_df.toPandas()

# Convert all values to T/F
tf_df = pandas_df.isnull()

# Plot it
sns.heatmap(data=tf_df)
plt.xticks(rotation=30, fontsize=10)
plt.yticks(rotation=0, fontsize=10)
plt.show()

# Set the answer to the column with the most missing data
answer = 'BACKONMARKETDATE'
answer
```

### Imputing Missing Data

```
# Count missing rows
missing = df.where(df['PDOM'].isNull()).count()

# Calculate the mean value
col_mean = df.agg({'PDOM': 'mean'}).collect()[0][0]

# Replace nulls with the mean value for that column (fillna returns a new dataframe)
df = df.fillna(col_mean, subset=['PDOM'])
```

Make sure to spend time considering the appropriate ways to handle missing data in your problems.
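What mean imputation does can be spelled out on a toy list in plain Python — the values below are made up, standing in for a column like PDOM:

```python
# Hypothetical column with nulls
rows = [3.0, None, 7.0, None, 5.0]

# Mean of the values that are present
present = [v for v in rows if v is not None]
col_mean = sum(present) / len(present)

# Replace each null with the column mean, leaving other values untouched
imputed = [col_mean if v is None else v for v in rows]
print(imputed)  # → [3.0, 5.0, 7.0, 5.0, 5.0]
```

Mean imputation preserves the column mean but shrinks its variance, which is one reason to also consider dropping rows, using the median, or flagging imputed values with an indicator column.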
### Calculate Missing Percents ``` def column_dropper(df, threshold): # Takes a dataframe and threshold for missing values. Returns a dataframe. total_records = df.count() for col in df.columns: # Calculate the percentage of missing values missing = df.where(df[col].isNull()).count() missing_percent = missing / total_records # Drop column if percent of missing is more than threshold if missing_percent > threshold: df = df.drop(col) return df # Drop columns that are more than 60% missing df = column_dropper(df, .6) ``` ### A Dangerous Join ``` # Cast data types walk_df = walk_df.withColumn('longitude', walk_df.longitude.cast('double')) walk_df = walk_df.withColumn('latitude', walk_df.latitude.cast('double')) # Round precision df = df.withColumn('longitude', round(df['longitude'], 5)) df = df.withColumn('latitude', round(df['latitude'], 5)) # Create join condition condition = [walk_df['latitude'] == df['latitude'], walk_df['longitude'] == df['longitude']] # Join the dataframes together join_df = df.join(walk_df, on=condition, how='left') # Count non-null records from new field print(join_df.where(~join_df['walkscore'].isNull()).count()) ``` ### Spark SQL Join ``` # Register dataframes as tables df.createOrReplaceTempView("df") walk_df.createOrReplaceTempView("walk_df") # SQL to join dataframes join_sql = """ SELECT * FROM df LEFT JOIN walk_df ON df.longitude = walk_df.longitude AND df.latitude = walk_df.latitude """ # Perform sql join joined_df = spark.sql(join_sql) ``` ### Checking for Bad Joins ``` # Join on mismatched keys precision wrong_prec_cond = [walk_df['latitude'] == df_orig['latitude'], walk_df['longitude'] == df_orig['longitude']] wrong_prec_df = df_orig.join(walk_df, on=wrong_prec_cond, how='left') # Compare bad join to the correct one print(wrong_prec_df.where(wrong_prec_df['walkscore'].isNull()).count()) print(correct_join_df.where(correct_join_df['walkscore'].isNull()).count()) # Create a join on too few keys few_keys_cond = [walk_df['longitude'] == 
df['longitude']] few_keys_df = df.join(walk_df, on=few_keys_cond, how='left') # Compare bad join to the correct one print("Record Count of the Too Few Keys Join Example: " + str(few_keys_df.count())) print("Record Count of the Correct Join Example: " + str(correct_join_df.count())) ``` ## Feature Engineering ### Differences ``` # Lot size in square feet acres_to_sqfeet = 43560 df = df.withColumn('LOT_SIZE_SQFT', df['ACRES'] * acres_to_sqfeet) # Create new column YARD_SIZE df = df.withColumn('YARD_SIZE', df['LOT_SIZE_SQFT'] - df['FOUNDATIONSIZE']) # Corr of ACRES vs SALESCLOSEPRICE print("Corr of ACRES vs SALESCLOSEPRICE: " + str(df.corr('ACRES', 'SALESCLOSEPRICE'))) # Corr of FOUNDATIONSIZE vs SALESCLOSEPRICE print("Corr of FOUNDATIONSIZE vs SALESCLOSEPRICE: " + str(df.corr('FOUNDATIONSIZE', 'SALESCLOSEPRICE'))) # Corr of YARD_SIZE vs SALESCLOSEPRICE print("Corr of YARD_SIZE vs SALESCLOSEPRICE: " + str(df.corr('YARD_SIZE', 'SALESCLOSEPRICE'))) ``` ### Ratios ``` # ASSESSED_TO_LIST df = df.withColumn('ASSESSED_TO_LIST', df['ASSESSEDVALUATION'] / df['LISTPRICE']) df[['ASSESSEDVALUATION', 'LISTPRICE', 'ASSESSED_TO_LIST']].show(5) # TAX_TO_LIST df = df.withColumn('TAX_TO_LIST', df['TAXES'] / df['LISTPRICE']) df[['TAX_TO_LIST', 'TAXES', 'LISTPRICE']].show(5) # BED_TO_BATHS df = df.withColumn('BED_TO_BATHS', df['BEDROOMS'] / df['BATHSTOTAL']) df[['BED_TO_BATHS', 'BEDROOMS', 'BATHSTOTAL']].show(5) ``` ### Deeper Features ``` from scipy import stats def r2(x, y): return stats.pearsonr(x, y)[0] ** 2 # Create new feature by adding two features together df = df.withColumn('Total_SQFT', df['SQFTBELOWGROUND'] + df['SQFTABOVEGROUND']) # Create additional new feature using previously created feature df = df.withColumn('BATHS_PER_1000SQFT', df['BATHSTOTAL'] / (df['Total_SQFT'] / 1000)) df[['BATHS_PER_1000SQFT']].describe().show() # Sample and create pandas dataframe pandas_df = df.sample(False, 0.5, 0).toPandas() # Linear model plots sns.jointplot(x='Total_SQFT', 
y='SALESCLOSEPRICE', data=pandas_df, kind="reg", stat_func=r2) sns.jointplot(x='BATHS_PER_1000SQFT', y='SALESCLOSEPRICE', data=pandas_df, kind="reg", stat_func=r2) ``` ### Time Components ``` # Import needed functions from pyspark.sql.functions import to_date, dayofweek # Convert to date type df = df.withColumn('LISTDATE', to_date(df['LISTDATE'], format='MM/dd/yyyy HH:mm')) # Get the day of the week df = df.withColumn('List_Day_of_Week', dayofweek(df['LISTDATE'])) # Sample and convert to pandas dataframe sample_df = df.sample(False, .5, 42).toPandas() # Plot count plot of of day of week sns.countplot(x="List_Day_of_Week", data=sample_df) plt.show() ``` ### Joining On Time Components ``` import pandas as pd data = dict(City=['LELM - Lake Elmo', 'MAPW - Maplewood','STP - Saint Paul','WB - Woodbury', \ 'OAKD - Oakdale', 'LELM - Lake Elmo', 'MAPW - Maplewood', \ 'STP - Saint Paul', 'WB - Woodbury', 'OAKD - Oakdale'], MedianHomeValue=[401000, 193000, 172000, 291000, 210000, 385000, 187000, 162000, 277000, 192000], Year= [2016,2016,2016,2016,2016,2015,2015,2015,2015, 2015]) df_price = pd.DataFrame(data) price_df = spark.createDataFrame(df_price) price_df.show() from pyspark.sql.functions import year # Create year column df = df.withColumn('list_year', year(df['LISTDATE'])) # Adjust year to match df = df.withColumn('report_year', (df['list_year'] - 1)) # Create join condition condition = [df['CITY'] == price_df['City'], df['report_year'] == price_df['year']] # Join the dataframes together df = df.join(price_df, on=condition, how='left') # Inspect that new columns are available df[['MedianHomeValue']].show() ``` ### Date Math ``` from pyspark.sql.functions import lag, datediff, to_date from pyspark.sql.window import Window # Cast data type mort_df = mort_df.withColumn('DATE', to_date(mort_df['DATE'])) # Create window w = Window().orderBy(mort_df['DATE']) # Create lag column mort_df = mort_df.withColumn('DATE-1', lag(mort_df['DATE'], count=1).over(w)) # Calculate difference 
between date columns mort_df = mort_df.withColumn('Days_Between_Report', datediff(mort_df['DATE'], mort_df['DATE-1'])) # Print results mort_df.select('Days_Between_Report').distinct().show() ``` ### Extracting Text to New Features ``` # Import needed functions from pyspark.sql.functions import when # Create boolean conditions for string matches has_attached_garage = df['GARAGEDESCRIPTION'].like('%Attached%') has_detached_garage = df['GARAGEDESCRIPTION'].like('%Detached%') # Conditional value assignment df = df.withColumn('has_attached_garage', (when(has_attached_garage, 1) .when(has_detached_garage, 0) .otherwise(None))) # Inspect results df[['GARAGEDESCRIPTION', 'has_attached_garage']].show(truncate=100) ``` ### Splitting & Exploding ``` df.select(['GARAGEDESCRIPTION']).show(truncate=100) # Import needed functions from pyspark.sql.functions import split, explode # Convert string to list-like array df = df.withColumn('garage_list', split(df['GARAGEDESCRIPTION'], ', ')) # Explode the values into new records ex_df = df.withColumn('ex_garage_list', explode(df['garage_list'])) # Inspect the values ex_df[['ex_garage_list']].distinct().show(100, truncate=50) ``` ### Pivot & Join ``` from pyspark.sql.functions import coalesce, first # Pivot piv_df = ex_df.groupBy('NO').pivot('ex_garage_list').agg(coalesce(first('constant_val'))) # Join the dataframes together and fill null joined_df = df.join(piv_df, on='NO', how='left') # Columns to zero fill zfill_cols = piv_df.columns # Zero fill the pivoted values zfilled_df = joined_df.fillna(0, subset=zfill_cols) ``` ### Binarizing Day of Week ``` df = df.withColumn('List_Day_of_Week', df['List_Day_of_Week'].cast('double')) # Import transformer from pyspark.ml.feature import Binarizer # Create the transformer binarizer = Binarizer(threshold=5, inputCol='List_Day_of_Week', outputCol='Listed_On_Weekend') # Apply the transformation to df df = binarizer.transform(df) # Verify transformation df[['List_Day_of_Week', 
'Listed_On_Weekend']].show() ``` ### Bucketing ``` sample_df.head() sample_df.BEDROOMS.dtype from pyspark.ml.feature import Bucketizer # Plot distribution of sample_df sns.distplot(sample_df.BEDROOMS, axlabel='BEDROOMS') plt.show() # Create the bucket splits and bucketizer splits = [0, 1, 2, 3, 4, 5, float('Inf')] buck = Bucketizer(splits=splits, inputCol='BEDROOMS', outputCol='bedrooms') # Apply the transformation to df df = buck.transform(df) # Display results df[['BEDROOMS', 'bedrooms']].show() ``` ### One Hot Encoding ``` df.select(['SCHOOLDISTRICTNUMBER']).show() from pyspark.ml.feature import OneHotEncoder, StringIndexer # Map strings to numbers with string indexer string_indexer = StringIndexer(inputCol='SCHOOLDISTRICTNUMBER', outputCol='School_Index') indexed_df = string_indexer.fit(df).transform(df) # Onehot encode indexed values encoder = OneHotEncoder(inputCol='School_Index', outputCol='School_Vec') encoded_df = encoder.transform(indexed_df) # Inspect the transformation steps encoded_df[['SCHOOLDISTRICTNUMBER', 'School_Index', 'School_Vec']].show(truncate=100) ``` notice that the implementation in PySpark is different than Pandas get_dummies() as it puts everything into a single column of type vector rather than a new column for each value. 
It's also different from sklearn's `OneHotEncoder` in that the last categorical value is captured by a vector of all zeros.

### Building a Model

```
df.select(['OFFMARKETDATE']).show()

from datetime import timedelta

df = df.withColumn('OFFMARKETDATE', to_date(df['OFFMARKETDATE'], format='MM/dd/yyyy HH:mm'))

def train_test_split_date(df, split_col, test_days=45):
    """Calculate the date to split test and training sets"""
    # Find how many days our data spans
    max_date = df.agg({split_col: 'max'}).collect()[0][0]
    min_date = df.agg({split_col: 'min'}).collect()[0][0]
    # Subtract an integer number of days from the last date in the dataset
    split_date = max_date - timedelta(days=test_days)
    return split_date

# Find the date to use in splitting test and train
split_date = train_test_split_date(df, 'OFFMARKETDATE')

# Create sequential test and training sets
train_df = df.where(df['OFFMARKETDATE'] < split_date)
test_df = df.where(df['OFFMARKETDATE'] >= split_date).where(df['LISTDATE'] <= split_date)

split_date
train_df.count(), test_df.count()
```

### Adjusting Time Features

```
from pyspark.sql.functions import datediff, to_date, lit

split_date = to_date(lit('2017-12-10'))

# Create a copy of DAYSONMARKET to review later
test_df = test_df.withColumn('DAYSONMARKET_Original', test_df['DAYSONMARKET'])

# Recalculate DAYSONMARKET from what we know on our split date
test_df = test_df.withColumn('DAYSONMARKET', datediff(split_date, test_df['LISTDATE']))

# Review the difference
test_df[['LISTDATE', 'OFFMARKETDATE', 'DAYSONMARKET_Original', 'DAYSONMARKET']].show()
```

If the house is still on the market, we don't know how many more days it will stay on the market, so we need to adjust test_df to reflect only the information we would have as of 2017-12-10.

Missing values are handled by random forests internally, which partition on missing values. As long as you replace them with something outside the range of normal values, they will be handled correctly.
Likewise, categorical features only need to be mapped to numbers, they are fine to stay all in one column by using a StringIndexer as we saw in chapter 3. OneHot encoding which converts each possible value to its own boolean feature is not needed. ### Dropping Columns with Low Observations ``` df.select('FENCE').show() binary_cols = ['FENCE_WIRE', 'FENCE_ELECTRIC', 'FENCE_NAN', 'FENCE_PARTIAL', 'FENCE_RAIL', 'FENCE_OTHER', 'FENCE_CHAIN LINK', 'FENCE_FULL', 'FENCE_NONE', 'FENCE_PRIVACY', 'FENCE_WOOD', 'FENCE_INVISIBLE', # e.g. one hot = fence columns 'ROOF_ASPHALT SHINGLES', 'ROOF_SHAKES', 'ROOF_NAN', 'ROOF_UNSPECIFIED SHINGLE', 'ROOF_SLATE', 'ROOF_PITCHED', 'ROOF_FLAT', 'ROOF_TAR/GRAVEL', 'ROOF_OTHER', 'ROOF_METAL', 'ROOF_TILE', 'ROOF_RUBBER', 'ROOF_WOOD SHINGLES', 'ROOF_AGE OVER 8 YEARS', 'ROOF_AGE 8 YEARS OR LESS', 'POOLDESCRIPTION_NAN', 'POOLDESCRIPTION_HEATED', 'POOLDESCRIPTION_NONE', 'POOLDESCRIPTION_SHARED', 'POOLDESCRIPTION_INDOOR', 'POOLDESCRIPTION_OUTDOOR', 'POOLDESCRIPTION_ABOVE GROUND', 'POOLDESCRIPTION_BELOW GROUND', 'GARAGEDESCRIPTION_ASSIGNED', 'GARAGEDESCRIPTION_TANDEM', 'GARAGEDESCRIPTION_UNCOVERED/OPEN', 'GARAGEDESCRIPTION_TUCKUNDER', 'GARAGEDESCRIPTION_DRIVEWAY - ASPHALT', 'GARAGEDESCRIPTION_HEATED GARAGE', 'GARAGEDESCRIPTION_UNDERGROUND GARAGE', 'GARAGEDESCRIPTION_DRIVEWAY - SHARED', 'GARAGEDESCRIPTION_CONTRACT PKG REQUIRED', 'GARAGEDESCRIPTION_GARAGE DOOR OPENER', 'GARAGEDESCRIPTION_MORE PARKING OFFSITE FOR FEE', 'GARAGEDESCRIPTION_VALET PARKING FOR FEE', 'GARAGEDESCRIPTION_OTHER', 'GARAGEDESCRIPTION_MORE PARKING ONSITE FOR FEE', 'GARAGEDESCRIPTION_DRIVEWAY - OTHER SURFACE', 'GARAGEDESCRIPTION_DETACHED GARAGE', 'GARAGEDESCRIPTION_SECURED', 'GARAGEDESCRIPTION_CARPORT', 'GARAGEDESCRIPTION_DRIVEWAY - CONCRETE', 'GARAGEDESCRIPTION_ON-STREET PARKING ONLY', 'GARAGEDESCRIPTION_COVERED', 'GARAGEDESCRIPTION_INSULATED GARAGE', 'GARAGEDESCRIPTION_UNASSIGNED', 'GARAGEDESCRIPTION_NONE', 'GARAGEDESCRIPTION_DRIVEWAY - GRAVEL', 'GARAGEDESCRIPTION_NO INT ACCESS 
TO DWELLING', 'GARAGEDESCRIPTION_UNITS VARY', 'GARAGEDESCRIPTION_ATTACHED GARAGE', 'APPLIANCES_NAN', 'APPLIANCES_COOKTOP', 'APPLIANCES_WALL OVEN', 'APPLIANCES_WATER SOFTENER - OWNED', 'APPLIANCES_DISPOSAL', 'APPLIANCES_DISHWASHER', 'APPLIANCES_OTHER', 'APPLIANCES_INDOOR GRILL', 'APPLIANCES_WASHER', 'APPLIANCES_RANGE', 'APPLIANCES_REFRIGERATOR', 'APPLIANCES_FURNACE HUMIDIFIER', 'APPLIANCES_TANKLESS WATER HEATER', 'APPLIANCES_ELECTRONIC AIR FILTER', 'APPLIANCES_MICROWAVE', 'APPLIANCES_EXHAUST FAN/HOOD', 'APPLIANCES_NONE', 'APPLIANCES_CENTRAL VACUUM', 'APPLIANCES_TRASH COMPACTOR', 'APPLIANCES_AIR-TO-AIR EXCHANGER', 'APPLIANCES_DRYER', 'APPLIANCES_FREEZER', 'APPLIANCES_WATER SOFTENER - RENTED', 'EXTERIOR_SHAKES', 'EXTERIOR_CEMENT BOARD', 'EXTERIOR_BLOCK', 'EXTERIOR_VINYL', 'EXTERIOR_FIBER BOARD', 'EXTERIOR_OTHER', 'EXTERIOR_METAL', 'EXTERIOR_BRICK/STONE', 'EXTERIOR_STUCCO', 'EXTERIOR_ENGINEERED WOOD', 'EXTERIOR_WOOD', 'DININGROOMDESCRIPTION_EAT IN KITCHEN', 'DININGROOMDESCRIPTION_NAN', 'DININGROOMDESCRIPTION_OTHER', 'DININGROOMDESCRIPTION_LIVING/DINING ROOM', 'DININGROOMDESCRIPTION_SEPARATE/FORMAL DINING ROOM', 'DININGROOMDESCRIPTION_KITCHEN/DINING ROOM', 'DININGROOMDESCRIPTION_INFORMAL DINING ROOM', 'DININGROOMDESCRIPTION_BREAKFAST AREA', 'BASEMENT_FINISHED (LIVABLE)', 'BASEMENT_PARTIAL', 'BASEMENT_SUMP PUMP', 'BASEMENT_INSULATING CONCRETE FORMS', 'BASEMENT_CRAWL SPACE', 'BASEMENT_PARTIAL FINISHED', 'BASEMENT_CONCRETE BLOCK', 'BASEMENT_DRAINAGE SYSTEM', 'BASEMENT_POURED CONCRETE', 'BASEMENT_UNFINISHED', 'BASEMENT_DRAIN TILED', 'BASEMENT_WOOD', 'BASEMENT_FULL', 'BASEMENT_EGRESS WINDOWS', 'BASEMENT_DAY/LOOKOUT WINDOWS', 'BASEMENT_SLAB', 'BASEMENT_STONE', 'BASEMENT_NONE', 'BASEMENT_WALKOUT', 'BATHDESC_MAIN FLOOR 1/2 BATH', 'BATHDESC_TWO MASTER BATHS', 'BATHDESC_MASTER WALK-THRU', 'BATHDESC_WHIRLPOOL', 'BATHDESC_NAN', 'BATHDESC_3/4 BASEMENT', 'BATHDESC_TWO BASEMENT BATHS', 'BATHDESC_OTHER', 'BATHDESC_3/4 MASTER', 'BATHDESC_MAIN FLOOR 3/4 BATH', 'BATHDESC_FULL MASTER', 
'BATHDESC_MAIN FLOOR FULL BATH', 'BATHDESC_WALK-IN SHOWER', 'BATHDESC_SEPARATE TUB & SHOWER', 'BATHDESC_FULL BASEMENT', 'BATHDESC_BASEMENT', 'BATHDESC_WALK THRU', 'BATHDESC_BATHROOM ENSUITE', 'BATHDESC_PRIVATE MASTER', 'BATHDESC_JACK & JILL 3/4', 'BATHDESC_UPPER LEVEL 1/2 BATH', 'BATHDESC_ROUGH IN', 'BATHDESC_UPPER LEVEL FULL BATH', 'BATHDESC_1/2 MASTER', 'BATHDESC_1/2 BASEMENT', 'BATHDESC_JACK AND JILL', 'BATHDESC_UPPER LEVEL 3/4 BATH', 'ZONING_INDUSTRIAL', 'ZONING_BUSINESS/COMMERCIAL', 'ZONING_OTHER', 'ZONING_RESIDENTIAL-SINGLE', 'ZONING_RESIDENTIAL-MULTI-FAMILY', 'COOLINGDESCRIPTION_WINDOW', 'COOLINGDESCRIPTION_WALL', 'COOLINGDESCRIPTION_DUCTLESS MINI-SPLIT', 'COOLINGDESCRIPTION_NONE', 'COOLINGDESCRIPTION_GEOTHERMAL', 'COOLINGDESCRIPTION_CENTRAL', 'CITY:LELM - LAKE ELMO', 'CITY:MAPW - MAPLEWOOD', 'CITY:OAKD - OAKDALE', 'CITY:STP - SAINT PAUL', 'CITY:WB - WOODBURY', 'LISTTYPE:EXCLUSIVE AGENCY', 'LISTTYPE:EXCLUSIVE RIGHT', 'LISTTYPE:EXCLUSIVE RIGHT WITH EXCLUSIONS', 'LISTTYPE:OTHER', 'LISTTYPE:SERVICE AGREEMENT', 'SCHOOLDISTRICTNUMBER:6 - SOUTH ST. PAUL', 'SCHOOLDISTRICTNUMBER:622 - NORTH ST PAUL-MAPLEWOOD', 'SCHOOLDISTRICTNUMBER:623 - ROSEVILLE', 'SCHOOLDISTRICTNUMBER:624 - WHITE BEAR LAKE', 'SCHOOLDISTRICTNUMBER:625 - ST. 
PAUL', 'SCHOOLDISTRICTNUMBER:832 - MAHTOMEDI', 'SCHOOLDISTRICTNUMBER:833 - SOUTH WASHINGTON COUNTY', 'SCHOOLDISTRICTNUMBER:834 - STILLWATER', 'POTENTIALSHORTSALE:NO', 'POTENTIALSHORTSALE:NOT DISCLOSED', 'STYLE:(CC) CONVERTED MANSION', 'STYLE:(CC) HIGH RISE (4+ LEVELS)', 'STYLE:(CC) LOW RISE (3- LEVELS)', 'STYLE:(CC) MANOR/VILLAGE', 'STYLE:(CC) TWO UNIT', 'STYLE:(SF) FOUR OR MORE LEVEL SPLIT', 'STYLE:(SF) MODIFIED TWO STORY', 'STYLE:(SF) MORE THAN TWO STORIES', 'STYLE:(SF) ONE 1/2 STORIES', 'STYLE:(SF) ONE STORY', 'STYLE:(SF) OTHER', 'STYLE:(SF) SPLIT ENTRY (BI-LEVEL)', 'STYLE:(SF) THREE LEVEL SPLIT', 'STYLE:(SF) TWO STORIES', 'STYLE:(TH) DETACHED', 'STYLE:(TH) QUAD/4 CORNERS', 'STYLE:(TH) SIDE X SIDE', 'STYLE:(TW) TWIN HOME', 'ASSUMABLEMORTGAGE:INFORMATION COMING', 'ASSUMABLEMORTGAGE:NOT ASSUMABLE', 'ASSUMABLEMORTGAGE:YES W/ QUALIFYING', 'ASSUMABLEMORTGAGE:YES W/NO QUALIFYING', 'ASSESSMENTPENDING:NO', 'ASSESSMENTPENDING:UNKNOWN', 'ASSESSMENTPENDING:YES'] len(binary_cols) obs_threshold = 30 cols_to_remove = list() # Inspect first 10 binary columns in list for col in binary_cols[0:10]: # Count the number of 1 values in the binary column obs_count = df.agg({col: 'sum'}).collect()[0][0] # If less than our observation threshold, remove if obs_count < obs_threshold: cols_to_remove.append(col) # Drop columns and print starting and ending dataframe shapes new_df = df.drop(*cols_to_remove) print('Rows: ' + str(df.count()) + ' Columns: ' + str(len(df.columns))) print('Rows: ' + str(new_df.count()) + ' Columns: ' + str(len(new_df.columns))) ``` Rows: 5000 Columns: 253 Rows: 5000 Columns: 250 ### Naively Handling Missing and Categorical Values For missing values since our data is strictly positive, we will assign -1. The random forest will split on this value and handle it differently than the rest of the values in the same feature. 
``` categorical_cols = ['CITY', 'LISTTYPE', 'SCHOOLDISTRICTNUMBER', 'POTENTIALSHORTSALE', 'STYLE', 'ASSUMABLEMORTGAGE', 'ASSESSMENTPENDING'] from pyspark.ml import Pipeline from pyspark.ml.feature import StringIndexer # Replace missing values df = df.fillna(-1, subset=['WALKSCORE', 'BIKESCORE']) # Create list of StringIndexers using list comprehension indexers = [StringIndexer(inputCol=col, outputCol=col+"_IDX")\ .setHandleInvalid("keep") for col in categorical_cols] # Create pipeline of indexers indexer_pipeline = Pipeline(stages=indexers) # Fit and Transform the pipeline to the original data df_indexed = indexer_pipeline.fit(df).transform(df) # Clean up redundant columns df_indexed = df_indexed.drop(*categorical_cols) # Inspect data transformations print(df_indexed.dtypes) ``` ### Building a Regression Model ``` from pyspark.ml.regression import GBTRegressor # Train a Gradient Boosted Trees (GBT) model. gbt = GBTRegressor(featuresCol='features', labelCol='SALESCLOSEPRICE', predictionCol="Prediction_Price", seed=42 ) # Train model. 
model = gbt.fit(train_df) ``` ### Evaluating & Comparing Algorithms ``` from pyspark.ml.evaluation import RegressionEvaluator # Select columns to compute test error evaluator = RegressionEvaluator(labelCol='SALESCLOSEPRICE', predictionCol='Prediction_Price') # Dictionary of model predictions to loop over models = {'Gradient Boosted Trees': gbt_predictions, 'Random Forest Regression': rfr_predictions} for key, preds in models.items(): # Create evaluation metrics rmse = evaluator.evaluate(preds, {evaluator.metricName: 'rmse'}) r2 = evaluator.evaluate(preds, {evaluator.metricName: 'r2'}) # Print Model Metrics print(key + ' RMSE: ' + str(rmse)) print(key + ' R^2: ' + str(r2)) ``` Gradient Boosted Trees RMSE: 74380.63652512032 Gradient Boosted Trees R^2: 0.6482244200795505 Random Forest Regression RMSE: 22898.84041072095 Random Forest Regression R^2: 0.9666594402208077 ### Interpreting Results ``` import pandas as pd # Extract feature importances from the fitted tree model importances = model.featureImportances.toArray() # Convert feature importances to a pandas column fi_df = pd.DataFrame(importances, columns=['importance']) # Convert list of feature names to pandas column fi_df['feature'] = pd.Series(feature_cols) # Sort the data based on feature importance fi_df.sort_values(by=['importance'], ascending=False, inplace=True) # Inspect Results fi_df.head(10) ``` ### Saving & Loading Models ``` from pyspark.ml.regression import RandomForestRegressionModel # Save model model.save('rfr_no_listprice') # Load model loaded_model = RandomForestRegressionModel.load('rfr_no_listprice') ```
# Recommending products with RetailRocket event logs This IPython notebook illustrates the usage of the [ctpfrec](https://github.com/david-cortes/ctpfrec/) Python package for _Collaborative Topic Poisson Factorization_ in recommender systems based on sparse count data using the [RetailRocket](https://www.kaggle.com/retailrocket/ecommerce-dataset) dataset, consisting of event logs (view, add to cart, purchase) from an online catalog of products plus anonymized text descriptions of items. Collaborative Topic Poisson Factorization is a probabilistic model that tries to jointly factorize the user-item interaction matrix along with item-word text descriptions (as bag-of-words) of the items by the product of lower dimensional matrices. The package can also extend this model to add user attributes in the same format as the items’. Compared to competing methods such as BPR (Bayesian Personalized Ranking) or weighted-implicit NMF (non-negative matrix factorization of the non-probabilistic type that uses squared loss), it only requires iterating over the data for which an interaction was observed and not over data for which no interaction was observed (i.e. it doesn’t iterate over items not clicked by a user), thus being more scalable, and at the same time producing better results when fit to sparse count data (in general). Same for the word counts of items. The implementation here is based on the paper _Content-based recommendations with poisson factorization (Gopalan, P.K., Charlin, L. and Blei, D., 2014)_. For a similar package for explicit feedback data see also [cmfrec](https://github.com/david-cortes/cmfrec/). For Poisson factorization without side information see [hpfrec](https://github.com/david-cortes/hpfrec/). 
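A quick NumPy sketch (illustrative only — not ctpfrec's actual implementation; all names and sizes here are invented) of why Poisson factorization scales with the number of *observed* interactions: the expected-count term in the Poisson log-likelihood, which nominally ranges over every user-item pair, factorizes into per-user and per-item sums, so the zero entries never need to be visited explicitly.

```python
import numpy as np

# Toy non-negative factors, as in Poisson factorization (sizes are arbitrary)
rng = np.random.default_rng(0)
n_users, n_items, k = 50, 40, 5
eta = rng.gamma(1.0, 1.0, size=(n_users, k))    # user factors
theta = rng.gamma(1.0, 1.0, size=(n_items, k))  # item factors

# The Poisson log-likelihood contains sum_{u,i} lambda_ui with
# lambda_ui = eta_u . theta_i. Computed naively this is O(n_users*n_items*k):
dense_total = (eta @ theta.T).sum()

# ...but the sum factorizes into per-factor sums, O((n_users + n_items)*k),
# so only the observed (nonzero) counts ever require explicit iteration:
fast_total = eta.sum(axis=0) @ theta.sum(axis=0)

print(np.isclose(dense_total, fast_total))  # the two totals agree
```

The remaining likelihood term, `sum y_ui * log(lambda_ui)`, only involves the nonzero counts by construction, which is the scalability advantage mentioned above.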
**Small note: if the TOC here is not clickable or the math symbols don't show properly, try visualizing this same notebook from nbviewer following [this link](http://nbviewer.jupyter.org/github/david-cortes/ctpfrec/blob/master/example/ctpfrec_retailrocket.ipynb).** ** * ## Sections * [1. Model description](#p1) * [2. Loading and processing the dataset](#p2) * [3. Fitting the model](#p3) * [4. Common sense checks](#p4) * [5. Comparison to model without item information](#p5) * [6. Making recommendations](#p6) * [7. References](#p7) ** * <a id="p1"></a> ## 1. Model description The model consists in producing a low-rank non-negative matrix factorization of the item-word matrix (a.k.a. bag-of-words, a matrix where each row represents an item and each column a word, with entries containing the number of times each word appeared in an item’s text, ideally with some pre-processing on the words such as stemming or lemmatization) by the product of two lower-rank matrices $$ W_{iw} \approx \Theta_{ik} \beta_{wk}^T $$ along with another low-rank matrix factorization of the user-item activity matrix (a matrix where each entry corresponds to how many times each user interacted with each item) that shares the same item-factor matrix above plus an offset based on user activity and not based on items’ words $$ Y_{ui} \approx \eta_{uk} (\Theta_{ik} + \epsilon_{ik})^T $$ These matrices are assumed to come from a generative process as follows: * Items: $$ \beta_{wk} \sim Gamma(a,b) $$ $$ \Theta_{ik} \sim Gamma(c,d)$$ $$ W_{iw} \sim Poisson(\Theta_{ik} \beta_{wk}^T) $$ _(Where $W$ is the item-word count matrix, $k$ is the number of latent factors, $i$ is the number of items, $w$ is the number of words)_ * User-Item interactions $$ \eta_{uk} \sim Gamma(e,f) $$ $$ \epsilon_{ik} \sim Gamma(g,h) $$ $$ Y_{ui} \sim Poisson(\eta_{uk} (\Theta_{ik} + \epsilon_{ik})^T) $$ _(Where $u$ is the number of users, $Y$ is the user-item interaction matrix)_ The model is fit using mean-field variational 
inference with coordinate ascent. For more details see the paper in the references. ** * <a id="p2"></a> ## 2. Loading and processing the data Reading and concatenating the data. First the event logs: ``` import numpy as np, pandas as pd events = pd.read_csv("events.csv") events.head() events.event.value_counts() ``` In order to put all user-item interactions in one scale, I will arbitrarily assign values as follows: * View: +1 * Add to basket: +3 * Purchase: +3 Thus, if a user clicks an item, that `(user, item)` pair will have `value=1`, if she later adds it to cart and purchases it, will have `value=7` (plus any other views of the same item), and so on. The reasoning behind this scale is because the distributions of counts and sums of counts seem to still follow a nice exponential distribution with these values, but different values might give better results in terms of models fit to them. ``` %matplotlib inline equiv = { 'view':1, 'addtocart':3, 'transaction':3 } events['count']=events.event.map(equiv) events.groupby('visitorid')['count'].sum().value_counts().hist(bins=200) events = events.groupby(['visitorid','itemid'])['count'].sum().to_frame().reset_index() events.rename(columns={'visitorid':'UserId', 'itemid':'ItemId', 'count':'Count'}, inplace=True) events.head() ``` Now creating a train and test split. For simplicity purposes and in order to be able to make a fair comparison with a model that doesn't use item descriptions, I will try to only take users that had >= 3 items in the training data, and items that had >= 3 users. Given the lack of user attributes and the fact that it will be compared later to a model without side information, the test set will only have users from the training data, but it's also possible to use user attributes if they follow the same format as the items', in which case the model can also recommend items to new users. 
In order to compare it later to a model without items' text, I will also filter out the test set to have only items that were in the training set. **This is however not a model limitation, as it can also recommend items that have descriptions but no user interactions**. ``` from sklearn.model_selection import train_test_split events_train, events_test = train_test_split(events, test_size=.2, random_state=1) del events ## In order to find users and items with at least 3 interactions each, ## it's easier and faster to use a simple heuristic that first filters according to one criteria, ## then, according to the other, and repeats. ## Finding a real subset of the data in which each item has strictly >= 3 users, ## and each user has strictly >= 3 items, is a harder graph partitioning or optimization ## problem. For a similar example of finding such subsets see also: ## http://nbviewer.ipython.org/github/david-cortes/datascienceprojects/blob/master/optimization/dataset_splitting.ipynb users_filter_out = events_train.groupby('UserId')['ItemId'].agg(lambda x: len(tuple(x))) users_filter_out = np.array(users_filter_out.index[users_filter_out < 3]) items_filter_out = events_train.loc[~np.in1d(events_train.UserId, users_filter_out)].groupby('ItemId')['UserId'].agg(lambda x: len(tuple(x))) items_filter_out = np.array(items_filter_out.index[items_filter_out < 3]) users_filter_out = events_train.loc[~np.in1d(events_train.ItemId, items_filter_out)].groupby('UserId')['ItemId'].agg(lambda x: len(tuple(x))) users_filter_out = np.array(users_filter_out.index[users_filter_out < 3]) events_train = events_train.loc[~np.in1d(events_train.UserId.values, users_filter_out)] events_train = events_train.loc[~np.in1d(events_train.ItemId.values, items_filter_out)] events_test = events_test.loc[np.in1d(events_test.UserId.values, events_train.UserId.values)] events_test = events_test.loc[np.in1d(events_test.ItemId.values, events_train.ItemId.values)] print(events_train.shape) 
print(events_test.shape) ``` Now processing the text descriptions of the items: ``` iteminfo = pd.read_csv("item_properties_part1.csv") iteminfo2 = pd.read_csv("item_properties_part2.csv") iteminfo = iteminfo.append(iteminfo2, ignore_index=True) iteminfo.head() ``` The items' descriptions contain many fields and have a mixture of words and numbers. The numeric variables, as per the documentation, are prefixed with an "n" and have three digits of decimal precision - I will exclude them here since this model is insensitive to numeric attributes such as price. The words are already lemmatized, and since we only have their IDs, it's not possible to do any other pre-processing on them. Although the descriptions don't say anything about it, looking at the contents and the lengths of the different fields, here I will assume that the field $283$ is the product title and the field $888$ is the product description. I will just concatenate them to obtain an overall item text, but there might be better ways of doing this (such as having different IDs for the same word when it appears in the title or the body, or multiplying those in the title by some number, etc.) As the descriptions vary over time, I will only take the most recent version for each item: ``` iteminfo = iteminfo.loc[iteminfo.property.isin(('888','283'))] iteminfo = iteminfo.loc[iteminfo.groupby(['itemid','property'])['timestamp'].idxmax()] iteminfo.reset_index(drop=True, inplace=True) iteminfo.head() ``` **Note that for simplicity I am completely ignoring the categories (these are easily incorporated e.g. by adding a count of +1 for each category to which an item belongs) and important factors such as the price. 
I am also completely ignoring all the other fields.** ``` from sklearn.feature_extraction.text import CountVectorizer from scipy.sparse import coo_matrix import re def concat_fields(x): x = list(x) out = x[0] for i in x[1:]: out += " " + i return out class NonNumberTokenizer(object): def __init__(self): pass def __call__(self, txt): return [i for i in txt.split(" ") if bool(re.search("^\d", i))] iteminfo = iteminfo.groupby('itemid')['value'].agg(lambda x: concat_fields(x)) t = CountVectorizer(tokenizer=NonNumberTokenizer(), stop_words=None, dtype=np.int32, strip_accents=None, lowercase=False) bag_of_words = t.fit_transform(iteminfo) bag_of_words = coo_matrix(bag_of_words) bag_of_words = pd.DataFrame({ 'ItemId' : iteminfo.index[bag_of_words.row], 'WordId' : bag_of_words.col, 'Count' : bag_of_words.data }) del iteminfo bag_of_words.head() ``` In this case, I will not filter it down to only items that were in the training set, as other items can still be used to get better latent factors. ** * <a id="p3"></a> ## 3. Fitting the model Fitting the model - note that I'm using some enhancements (passed as arguments to the class constructor) over the original version in the paper: * Standardizing item counts so as not to favor items with longer descriptions. * Initializing $\Theta$ and $\beta$ through hierarchical Poisson factorization instead of latent Dirichlet allocation. * Using a small step size for the updates for the parameters obtained from hierarchical Poisson factorization at the beginning, which then grows to one with increasing iteration numbers (informally, this somewhat "preserves" these fits while the user parameters are adjusted to these already-fit item parameters - then, as the user parameters are already oriented towards them, the item and word parameters start changing too). 
I'll be also fitting two slightly different models: one that takes (and can make recommendations for) all the items for which there are either descriptions or user clicks, and another that uses all the items for which there are descriptions to initialize the item-related parameters but discards the ones without clicks (can only make recommendations for items that users have clicked). For more information about the parameters and what they do, see the online documentation: [http://ctpfrec.readthedocs.io](http://ctpfrec.readthedocs.io) ``` print(events_train.shape) print(events_test.shape) print(bag_of_words.shape) %%time from ctpfrec import CTPF recommender_all_items = CTPF(k=70, step_size=lambda x: 1-1/np.sqrt(x+1), standardize_items=True, initialize_hpf=True, reindex=True, missing_items='include', allow_inconsistent_math=True, random_seed=1) recommender_all_items.fit(counts_df=events_train.copy(), words_df=bag_of_words.copy()) %%time recommender_clicked_items_only = CTPF(k=70, step_size=lambda x: 1-1/np.sqrt(x+1), standardize_items=True, initialize_hpf=True, reindex=True, missing_items='exclude', allow_inconsistent_math=True, random_seed=1) recommender_clicked_items_only.fit(counts_df=events_train.copy(), words_df=bag_of_words.copy()) ``` Most of the time here was spent in fitting the model to items that no user in the training set had clicked. If using instead a random initialization, it would have taken a lot less time to fit this model (there would be only a fraction of the items - see above time spent in each procedure), but the results are slightly worse. _Disclaimer: this notebook was run on a Google cloud server with Skylake CPU using 8 cores, and memory usage tops at around 6GB of RAM for the first model (including all the objects loaded before). In a desktop computer, it would take a bit longer to fit._ ** * <a id="p4"></a> ## 4. 
Common sense checks There are many different metrics to evaluate recommendation quality in implicit datasets, but all of them have their drawbacks. The idea of this notebook is to illustrate the package usage and not to introduce and compare evaluation metrics, so I will only perform some common sense checks on the test data. For implementations of evaluation metrics for implicit recommendations see other packages such as [lightFM](https://github.com/lyst/lightfm). As some common sense checks, the predictions should: * Be higher for this non-zero hold-out sample than for random items. * Produce a good discrimination between random items and those in the hold-out sample (very related to the first point). * Be correlated with the number of events per user-item pair in the hold-out sample. * Follow an exponential distribution rather than a normal or some other symmetric distribution. Here I'll check these four conditions: #### Model with all items ``` events_test['Predicted'] = recommender_all_items.predict(user=events_test.UserId, item=events_test.ItemId) events_test['RandomItem'] = np.random.choice(events_train.ItemId.unique(), size=events_test.shape[0]) events_test['PredictedRandom'] = recommender_all_items.predict(user=events_test.UserId, item=events_test.RandomItem) print("Average prediction for combinations in test set: ", events_test.Predicted.mean()) print("Average prediction for random combinations: ", events_test.PredictedRandom.mean()) from sklearn.metrics import roc_auc_score was_clicked = np.r_[np.ones(events_test.shape[0]), np.zeros(events_test.shape[0])] score_model = np.r_[events_test.Predicted.values, events_test.PredictedRandom.values] roc_auc_score(was_clicked[~np.isnan(score_model)], score_model[~np.isnan(score_model)]) np.corrcoef(events_test.Count[~events_test.Predicted.isnull()], events_test.Predicted[~events_test.Predicted.isnull()])[0,1] import matplotlib.pyplot as plt %matplotlib inline _ = plt.hist(events_test.Predicted, bins=200) 
plt.xlim(0,5) plt.show() ``` #### Model with clicked items only ``` events_test['Predicted'] = recommender_clicked_items_only.predict(user=events_test.UserId, item=events_test.ItemId) events_test['PredictedRandom'] = recommender_clicked_items_only.predict(user=events_test.UserId, item=events_test.RandomItem) print("Average prediction for combinations in test set: ", events_test.Predicted.mean()) print("Average prediction for random combinations: ", events_test.PredictedRandom.mean()) was_clicked = np.r_[np.ones(events_test.shape[0]), np.zeros(events_test.shape[0])] score_model = np.r_[events_test.Predicted.values, events_test.PredictedRandom.values] roc_auc_score(was_clicked, score_model) np.corrcoef(events_test.Count, events_test.Predicted)[0,1] _ = plt.hist(events_test.Predicted, bins=200) plt.xlim(0,5) plt.show() ``` ** * <a id="p5"></a> ## 5. Comparison to model without item information A natural benchmark to compare this model against is a Poisson factorization model without any item side information - here I'll do the comparison with a _Hierarchical Poisson factorization_ model with the same metrics as above: ``` %%time from hpfrec import HPF recommender_no_sideinfo = HPF(k=70) recommender_no_sideinfo.fit(events_train.copy()) events_test_comp = events_test.copy() events_test_comp['Predicted'] = recommender_no_sideinfo.predict(user=events_test_comp.UserId, item=events_test_comp.ItemId) events_test_comp['PredictedRandom'] = recommender_no_sideinfo.predict(user=events_test_comp.UserId, item=events_test_comp.RandomItem) print("Average prediction for combinations in test set: ", events_test_comp.Predicted.mean()) print("Average prediction for random combinations: ", events_test_comp.PredictedRandom.mean()) was_clicked = np.r_[np.ones(events_test_comp.shape[0]), np.zeros(events_test_comp.shape[0])] score_model = np.r_[events_test_comp.Predicted.values, events_test_comp.PredictedRandom.values] roc_auc_score(was_clicked, score_model) np.corrcoef(events_test_comp.Count, 
events_test_comp.Predicted)[0,1] ``` As can be seen, adding the side information and widening the catalog to include more items using only their text descriptions (no clicks) results in an improvement over all 3 metrics, especially correlation with number of clicks. More important than that, however, is its ability to make recommendations from a far wider catalog of items, which in practice can make a much larger difference in recommendation quality than improvement in typical offline metrics. ** * <a id="p6"></a> ## 6. Making recommendations The package provides a simple API for making predictions and Top-N recommended lists. These Top-N lists can be made among all items, or across some user-provided subset only, and you can choose to discard items with which the user had already interacted in the training set. Here I will: * Pick a random user with a reasonably long event history. * See which items the model would recommend to them among those which they have not yet clicked. * Compare it with the recommended list from the model without item side information. Unfortunately, since all the data is anonymized, it's not possible to make a qualitative evaluation of the results by looking at the recommended lists as it is in other datasets. ``` users_many_events = events_train.groupby('UserId')['ItemId'].agg(lambda x: len(tuple(x))) users_many_events = np.array(users_many_events.index[users_many_events > 20]) np.random.seed(1) chosen_user = np.random.choice(users_many_events) chosen_user %%time recommender_all_items.topN(chosen_user, n=20) ``` *(These numbers represent the IDs of the items being recommended as they appeared in the `events_train` data frame)* ``` %%time recommender_clicked_items_only.topN(chosen_user, n=20) %%time recommender_no_sideinfo.topN(chosen_user, n=20) ``` ** * <a id="p7"></a> ## 7. References * Gopalan, Prem K., Laurent Charlin, and David Blei. "Content-based recommendations with poisson factorization." 
Advances in Neural Information Processing Systems. 2014.
``` import numpy as np import seaborn as sns import pandas as pd import matplotlib.pyplot as plt plt.rcParams['axes.titlesize'] = 20 plt.rcParams['axes.titleweight'] = 10 ``` ## 1. Dataset Read ``` df = pd.read_csv("haberman.csv") df.head() ``` ## 2. Basic Analysis ``` print("No. of features are in given dataset : {} \n".format(len(df.columns[:-1]))) print("Features are : {} \n".format(list(df.columns)[:-1])) print("Target Feature is : {}".format(df.columns[-1])) df.info() # Note: No null entries df.describe() # basic stats about df # !pip install statsmodels #Median, Quantiles, Percentiles, IQR. print("\nMedians:") print(np.median(df["nodes"])) #Median with an outlier print(np.median(np.append(df["age"],50))); print(np.median(df["year"])) print("\nQuantiles:") print(np.percentile(df["nodes"],np.arange(0, 100, 25))) print(np.percentile(df["age"],np.arange(0, 100, 25))) print(np.percentile(df["year"], np.arange(0, 100, 25))) print("\n90th Percentiles:") print(np.percentile(df["nodes"],90)) print(np.percentile(df["age"],90)) print(np.percentile(df["year"], 90)) from statsmodels import robust print ("\nMedian Absolute Deviation") print(robust.mad(df["nodes"])) print(robust.mad(df["age"])) print(robust.mad(df["year"])) print("No of datapoint is in each feature : {} \n".format(df.size / 4)) print("No of classes and datapoint is in dataset :\n{}\n".format(df.status.value_counts())) ``` ## 3. Insights of cancer patients survival rate of haberman hospital before/after surgery ``` plt.figure(figsize=(15,8)); sns.set(style='whitegrid'); plt.rcParams['axes.titlesize'] = 15 plt.rcParams['axes.titleweight'] = 50 sns.FacetGrid(df, hue='status', size=4) \ .map(plt.scatter, 'age', 'nodes') \ .add_legend(); plt.title("AGE - NODES SCATTER PLOT"); plt.show(); ``` `Observation :` 1. Features are tightly merged with each other 2. 
Higher lymph node counts indicate higher risk ### 3.1 Multivariate Analysis ``` plt.close(); sns.set_style("whitegrid"); sns.pairplot(df, hue="status", size=4, vars=['year','age', 'nodes'], diag_kind='kde', kind='scatter'); plt.show() ``` `observation : ` 1. All features are tightly overlapped 2. older patients with higher lymph node counts have a higher survival risk ### 3.2 Univariate Analysis #### Histogram, CDF, PDF ``` sns.set_style("whitegrid"); plt.figure(figsize=(15,8)) plt.rcParams['axes.titlesize'] = 20 plt.rcParams['axes.titleweight'] = 10 sns.distplot(df.loc[df['status'] == 1].nodes , bins=20, label='survived', color='Green'); sns.distplot(df.loc[df['status'] == 2].nodes , bins=20, label='unsurvived', color='Red'); plt.legend(); plt.title("NODES DISTRIBUTION OVER TARGET"); ``` `observation :` 1. Patients with more than 10 nodes have a higher survival risk; 2. patients with fewer than 10 nodes have a lower survival risk ``` sns.set_style("whitegrid"); plt.figure(figsize=(15,8)) plt.rcParams['axes.titlesize'] = 20 plt.rcParams['axes.titleweight'] = 10 sns.distplot(df.loc[df['status'] == 1].age , bins=20, label='survived', color='Green'); sns.distplot(df.loc[df['status'] == 2].age , bins=20, label='unsurvived', color='Red'); plt.legend(); plt.title("AGE DISTRIBUTION OVER TARGET"); ``` `observation : ` 1. patients aged between 42 and 55 have a slightly higher survival risk; ``` sns.set_style("whitegrid"); plt.figure(figsize=(15,8)) plt.rcParams['axes.titlesize'] = 20 plt.rcParams['axes.titleweight'] = 10 sns.distplot(df.loc[df['status'] == 1].year , bins=20, label='survived', color='Green') sns.distplot(df.loc[df['status'] == 2].year , bins=20, label='unsurvived', color='Red') plt.legend() plt.title("YEAR DISTRIBUTION OVER TARGET"); ``` `observation : ` 1. patients who had surgery between 1958 and mid-1963, or between 1966 and 1968, mostly survived; 2. patients who had surgery between 1963 and 1966 had a higher survival risk. 
``` sns.set_style("whitegrid"); plt.figure(figsize=(10,6)) plt.rcParams['axes.titlesize'] = 20 plt.rcParams['axes.titleweight'] = 10 count, bin_edges = np.histogram(df.loc[df['status'] == 1].nodes, bins=10, density=True) nodes_pdf = count / sum(count) nodes_cdf = np.cumsum(nodes_pdf) plt.plot(bin_edges[1:],nodes_pdf, color='green', marker='o', linestyle='dashed') plt.plot(bin_edges[1:],nodes_cdf, color='black', marker='o', linestyle='dashed') count, bin_edges = np.histogram(df.loc[df['status'] == 1].age, bins=10, density=True) age_pdf = count / sum(count) age_cdf = np.cumsum(age_pdf) plt.plot(bin_edges[1:],age_pdf, color='red', marker='o', linestyle='dotted') plt.plot(bin_edges[1:],age_cdf, color='black', marker='o', linestyle='dotted') count, bin_edges = np.histogram(df.loc[df['status'] == 1].year, bins=10, density=True) year_pdf = count / sum(count) year_cdf = np.cumsum(year_pdf) plt.plot(bin_edges[1:],year_pdf, color='blue', marker='o', linestyle='solid') plt.plot(bin_edges[1:],year_cdf, color='black', marker='o', linestyle='solid') plt.title("SURVIVED PATIENTS PDF & CDF") plt.legend(["nodes_pdf","nodes_cdf", "age_pdf", "age_cdf", "year_pdf", "year_cdf"]) plt.show(); ``` `observation : ` 1. if nodes < 10, about 82% of patients survived; otherwise the chance of survival drops to about 10% 2. 
patients aged 45-65 make up a larger share (~18%) of survivors than other age groups; ``` sns.set_style("whitegrid"); plt.figure(figsize=(10,6)) plt.rcParams['axes.titlesize'] = 20 plt.rcParams['axes.titleweight'] = 10 count, bin_edges = np.histogram(df.loc[df['status'] == 2].nodes, bins=10, density=True) nodes_pdf = count / sum(count) nodes_cdf = np.cumsum(nodes_pdf) plt.plot(bin_edges[1:],nodes_pdf, color='green', marker='o', linestyle='dashed') plt.plot(bin_edges[1:],nodes_cdf, color='black', marker='o', linestyle='dashed') count, bin_edges = np.histogram(df.loc[df['status'] == 2].age, bins=10, density=True) age_pdf = count / sum(count) age_cdf = np.cumsum(age_pdf) plt.plot(bin_edges[1:],age_pdf, color='red', marker='o', linestyle='dotted') plt.plot(bin_edges[1:],age_cdf, color='black', marker='o', linestyle='dotted') count, bin_edges = np.histogram(df.loc[df['status'] == 2].year, bins=10, density=True) year_pdf = count / sum(count) year_cdf = np.cumsum(year_pdf) plt.plot(bin_edges[1:],year_pdf, color='blue', marker='o', linestyle='solid') plt.plot(bin_edges[1:],year_cdf, color='black', marker='o', linestyle='solid') plt.title("UNSURVIVED PATIENTS PDF & CDF") plt.legend(["nodes_pdf","nodes_cdf", "age_pdf", "age_cdf", "year_pdf", "year_cdf"]) plt.show(); ``` `observation : ` 1. node counts above 20 correspond to roughly 97% of non-survivors. 2. ages between 38 and 48 account for roughly 20% of non-survivors. ``` sns.set_style("whitegrid"); g = sns.catplot(x="status", y="nodes", hue="status", data=df, kind="box", height=4, aspect=.7) g.fig.set_figwidth(10) g.fig.set_figheight(5) g.fig.suptitle('[BOX PLOT] NODES OVER STATUS', fontsize=20) g.add_legend() plt.show() ``` `observation :` 1. the 75th percentile of surviving patients had fewer than 5 lymph nodes. 2. 
for non-surviving patients, the 25th percentile of lymph node count is 2 and the 75th percentile is 12 ``` sns.set_style("whitegrid"); g = sns.catplot(x="status", y="nodes", hue="status", data=df, kind="violin", height=4, aspect=.7); g.fig.set_figwidth(10) g.fig.set_figheight(5) g.add_legend() g.fig.suptitle('[VIOLIN PLOT] NODES OVER STATUS', fontsize=20) plt.show() ``` `observation : ` 1. plot 1 clearly shows that patients with lymph node counts close to zero mostly survived; whiskers span 0-7. 2. plot 2 shows that patients with lymph node counts far from zero mostly did not survive; whiskers span 0-20, with the 0-12 range indicating a short survival chance. ``` plt.rcParams['axes.titlesize'] = 20 plt.rcParams['axes.titleweight'] = 10 sns.jointplot(x='age',y='nodes',data=df,kind='kde') plt.suptitle("JOINT_PLOT FOR NODES - AGE",fontsize=20) plt.show() ``` `observation : ` 1. long survival is most common for the age range 47–60 with axillary node counts of 0–3. ### 3.3 BAR_PLOTS [SUMMARIZATION] ``` plt.figure(figsize=(15,8)); sns.set(style="whitegrid"); # sns.FacetGrid(df, hue='status') # Draw a nested barplot to show survival for class and sex g = sns.catplot(x="nodes", y="status", data=df, height=6, kind="bar", palette="muted"); # g.despine(left=True) g.set_ylabels("survival probability"); g.fig.set_figwidth(15); g.fig.set_figheight(8.27); g.fig.suptitle("Survival rate, node-wise [0-2]", fontsize=20); plt.figure(figsize=(15,8)) sns.set(style="whitegrid"); g = sns.catplot(x="age", y="status", data=df, height=6, kind="bar", palette="muted"); g.set_ylabels("survival probability"); g.fig.set_figwidth(15) g.fig.set_figheight(8.27) g.fig.suptitle("Survival rate, age-wise [0-2]", fontsize=20); ``` ## 4. Conclusion : `1. Patient’s age and operation year alone are not deciding factors for his/her survival. Yet, people younger than 35 years have a higher chance of survival.` `2. Survival chance is inversely proportional to the number of positive axillary nodes. We also saw that the absence of positive axillary nodes cannot always guarantee survival.` `3. 
The objective of classifying the survival status of a new patient based on the given features is a difficult task as the data is imbalanced.`
``` """ You can run either this notebook locally (if you have all the dependencies and a GPU) or on Google Colab. Instructions for setting up Colab are as follows: 1. Open a new Python 3 notebook. 2. Import this notebook from GitHub (File -> Upload Notebook -> "GITHUB" tab -> copy/paste GitHub URL) 3. Connect to an instance with a GPU (Runtime -> Change runtime type -> select "GPU" for hardware accelerator) 4. Run this cell to set up dependencies. 5. Restart the runtime (Runtime -> Restart Runtime) for any upgraded packages to take effect """ # If you're using Google Colab and not running locally, run this cell. ## Install dependencies !pip install wget !apt-get install sox libsndfile1 ffmpeg !pip install unidecode !pip install matplotlib>=3.3.2 ## Install NeMo BRANCH = 'r1.3.0' !python -m pip install git+https://github.com/NVIDIA/NeMo.git@$BRANCH#egg=nemo_toolkit[all] ## Grab the config we'll use in this example !mkdir configs !wget -P configs/ https://raw.githubusercontent.com/NVIDIA/NeMo/$BRANCH/examples/asr/conf/config.yaml """ Remember to restart the runtime for the kernel to pick up any upgraded packages (e.g. matplotlib)! Alternatively, you can uncomment the exit() below to crash and restart the kernel, in the case that you want to use the "Run All Cells" (or similar) option. """ # exit() ``` # Introduction to End-To-End Automatic Speech Recognition This notebook contains a basic tutorial of Automatic Speech Recognition (ASR) concepts, introduced with code snippets using the [NeMo framework](https://github.com/NVIDIA/NeMo). We will first introduce the basics of the main concepts behind speech recognition, then explore concrete examples of what the data looks like and walk through putting together a simple end-to-end ASR pipeline. We assume that you are familiar with general machine learning concepts and can follow Python code, and we'll be using the [AN4 dataset from CMU](http://www.speech.cs.cmu.edu/databases/an4/) (with processing using `sox`). 
## Conceptual Overview: What is ASR?

ASR, or **Automatic Speech Recognition**, refers to the problem of getting a program to automatically transcribe spoken language (speech-to-text). Our goal is usually to have a model that minimizes the **Word Error Rate (WER)** metric when transcribing speech input. In other words, given some audio file (e.g. a WAV file) containing speech, how do we transform this into the corresponding text with as few errors as possible?

Traditional speech recognition takes a generative approach, modeling the full pipeline of how speech sounds are produced in order to evaluate a speech sample. We would start from a **language model** that encapsulates the most likely orderings of words that are generated (e.g. an n-gram model), to a **pronunciation model** for each word in that ordering (e.g. a pronunciation table), to an **acoustic model** that translates those pronunciations to audio waveforms (e.g. a Gaussian Mixture Model).

Then, if we receive some spoken input, our goal would be to find the most likely sequence of text that would result in the given audio according to our generative pipeline of models. Overall, with traditional speech recognition, we try to model `Pr(audio|transcript)*Pr(transcript)`, and take the argmax of this over possible transcripts.

Over time, neural nets advanced to the point where each component of the traditional speech recognition model could be replaced by a neural model that had better performance and greater potential for generalization. For example, we could replace an n-gram model with a neural language model, and replace a pronunciation table with a neural pronunciation model, and so on. However, each of these neural models needs to be trained individually on a different task, and errors in any model in the pipeline could throw off the whole prediction.
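As a toy illustration of that generative scoring (the numbers below are entirely invented for the sake of the example, not outputs of any real model):

```python
# Noisy-channel decoding sketch: pick the transcript maximizing
# Pr(audio|transcript) * Pr(transcript). All probabilities are made up.
candidates = {
    # transcript: (Pr(audio|transcript) from acoustic/pronunciation models,
    #              Pr(transcript) from the language model)
    "recognize speech":   (0.20, 0.010),
    "wreck a nice beach": (0.30, 0.001),
}

best = max(candidates, key=lambda t: candidates[t][0] * candidates[t][1])
print(best)  # "recognize speech", since 0.20*0.010 > 0.30*0.001
```

Even though the acoustic score alone prefers the second candidate, the language model prior tips the argmax toward the more plausible sentence.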
Thus, we can see the appeal of **end-to-end ASR architectures**: discriminative models that simply take an audio input and give a textual output, and in which all components of the architecture are trained together towards the same goal. The model's encoder would be akin to an acoustic model for extracting speech features, which can then be directly piped to a decoder which outputs text. If desired, we could integrate a language model that would improve our predictions, as well. And the entire end-to-end ASR model can be trained at once--a much easier pipeline to handle!

### End-To-End ASR

With an end-to-end model, we want to directly learn `Pr(transcript|audio)` in order to predict the transcripts from the original audio. Since we are dealing with sequential information--audio data over time that corresponds to a sequence of letters--RNNs are the obvious choice. But now we have a pressing problem to deal with: since our input sequence (number of audio timesteps) is not the same length as our desired output (transcript length), how do we match each time step from the audio data to the correct output characters?

Earlier speech recognition approaches relied on **temporally-aligned data**, in which each segment of time in an audio file was matched up to a corresponding speech sound such as a phoneme or word. However, if we would like to have the flexibility to predict letter-by-letter to prevent OOV (out of vocabulary) issues, then each time step in the data would have to be labeled with the letter sound that the speaker is making at that point in the audio file. With that information, it seems like we should simply be able to try to predict the correct letter for each time step and then collapse the repeated letters (e.g. the prediction output `LLLAAAAPPTOOOPPPP` would become `LAPTOP`).
It turns out that this idea has some problems: not only does alignment make the dataset incredibly labor-intensive to label, but also, what do we do with words like "book" that contain consecutive repeated letters? Simply squashing repeated letters together would not work in that case! ![Alignment example](https://raw.githubusercontent.com/NVIDIA/NeMo/stable/tutorials/asr/images/alignment_example.png) Modern end-to-end approaches get around this using methods that don't require manual alignment at all, so that the input-output pairs are really just the raw audio and the transcript--no extra data or labeling required. Let's briefly go over two popular approaches that allow us to do this, Connectionist Temporal Classification (CTC) and sequence-to-sequence models with attention. #### Connectionist Temporal Classification (CTC) In normal speech recognition prediction output, we would expect to have characters such as the letters from A through Z, numbers 0 through 9, spaces ("\_"), and so on. CTC introduces a new intermediate output token called the **blank token** ("-") that is useful for getting around the alignment issue. With CTC, we still predict one token per time segment of speech, but we use the blank token to figure out where we can and can't collapse the predictions. The appearance of a blank token helps separate repeating letters that should not be collapsed. For instance, with an audio snippet segmented into `T=11` time steps, we could get predictions that look like `BOO-OOO--KK`, which would then collapse to `"BO-O-K"`, and then we would remove the blank tokens to get our final output, `BOOK`. Now, we can predict one output token per time step, then collapse and clean to get sensible output without any fear of ambiguity from repeating letters! A simple way of getting predictions like this would be to apply a bidirectional RNN to the audio input, apply softmax over each time step's output, and then take the token with the highest probability. 
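The collapse-then-clean rule just described is easy to sketch in a few lines (a toy helper for illustration, not NeMo's implementation):

```python
def ctc_collapse(tokens, blank='-'):
    # Step 1: merge consecutive repeated tokens.
    merged = []
    for tok in tokens:
        if not merged or tok != merged[-1]:
            merged.append(tok)
    # Step 2: drop the blank tokens.
    return ''.join(t for t in merged if t != blank)

print(ctc_collapse('LLLAAAAPPTOOOPPPP'))  # LAPTOP
print(ctc_collapse('BOO-OOO--KK'))        # BOOK -- blanks keep the O's and K's apart
```

Note how the blank between the two runs of `O` survives step 1, so the repeated letters in "book" are preserved instead of being squashed together.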
The method of always taking the best token at each time step is called **greedy decoding, or max decoding**.

To calculate our loss for backprop, we would like to know the log probability of the model producing the correct transcript, `log(Pr(transcript|audio))`. We can get the log probability of a single intermediate output sequence (e.g. `BOO-OOO--KK`) by summing over the log probabilities we get from each token's softmax value, but note that this quantity is different from the log probability of the transcript itself (`BOOK`). This is because there are multiple possible output sequences of the same length that can be collapsed to get the same transcript (e.g. `BBO--OO-KKK` also results in `BOOK`), and so we need to **marginalize over every valid sequence of length `T` that collapses to the transcript**.

Therefore, to get our transcript's probability given our audio input, we must sum the probabilities of every sequence of length `T` that collapses to the transcript, and only then take the log (e.g. `log(Pr(output: "BOOK"|audio)) = log(Pr(BOO-OOO--KK|audio) + Pr(BBO--OO-KKK|audio) + ...)`). Note that we sum the path probabilities themselves, not their logs--summing log probabilities would correspond to multiplying the paths' probabilities together rather than marginalizing over them. In practice, we can use a dynamic programming approach to calculate this, accumulating probability mass over the different "paths" through the softmax outputs at each time step.

If you would like a more in-depth explanation of how CTC works, or how we can improve our results by using a modified beam search algorithm, feel free to check out the Further Reading section at the end of this notebook for more resources.

#### Sequence-to-Sequence with Attention

One problem with CTC is that predictions at different time steps are conditionally independent, which is an issue because the words in a continuous utterance tend to be related to each other in some sensible way. With this conditional independence assumption, we can't learn a language model that can represent such dependencies, though we can add a language model on top of the CTC output to mitigate this to some degree.
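Before moving on, the path-summing marginalization described above can be sanity-checked by brute force on a tiny example (a toy alphabet and invented per-step softmax outputs, not a real model):

```python
import itertools
import numpy as np

vocab = ['-', 'B', 'O', 'K']   # blank plus a toy 3-letter alphabet
T = 5                          # number of time steps
rng = np.random.default_rng(0)
# Invented "softmax outputs": one distribution over vocab per time step.
softmax_out = rng.dirichlet(np.ones(len(vocab)), size=T)  # shape (T, V)

def collapse(path):
    # CTC collapse rule: merge consecutive repeats, then drop blanks.
    merged = [tok for i, tok in enumerate(path) if i == 0 or tok != path[i - 1]]
    return ''.join(t for t in merged if t != '-')

def transcript_prob(target):
    # Marginalize: sum Pr(path) over every length-T path collapsing to target.
    total = 0.0
    for idxs in itertools.product(range(len(vocab)), repeat=T):
        if collapse([vocab[i] for i in idxs]) == target:
            total += np.prod([softmax_out[t, i] for t, i in enumerate(idxs)])
    return total

print(transcript_prob('BOK'))  # Pr("BOK"|audio) under the toy distribution
```

Enumerating all `4^5 = 1024` paths is only feasible for toy sizes; the dynamic-programming ("forward") approach computes the same sum efficiently for real sequence lengths.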
A popular alternative is to use a sequence-to-sequence model with attention. A typical seq2seq model for ASR consists of some sort of **bidirectional RNN encoder** that consumes the audio sequence timestep-by-timestep, and where the outputs are then passed to an **attention-based decoder**. Each prediction from the decoder is based on attending to some parts of the entire encoded input, as well as the previously outputted tokens.

The outputs of the decoder can be anything from word pieces to phonemes to letters, and since predictions are not directly tied to time steps of the input, we can just continue producing tokens one-by-one until an end token is given (or we reach a specified max output length). This way, we do not need to deal with audio alignment, and our predicted transcript is just the sequence of outputs given by our decoder.

Now that we have an idea of what some popular end-to-end ASR models look like, let's take a look at the audio data we'll be working with for our example.

## Taking a Look at Our Data (AN4)

The AN4 dataset, also known as the Alphanumeric dataset, was collected and published by Carnegie Mellon University. It consists of recordings of people spelling out addresses, names, telephone numbers, etc., one letter or number at a time, as well as their corresponding transcripts. We choose to use AN4 for this tutorial because it is relatively small, with 948 training and 130 test utterances, and so it trains quickly.

Before we get started, let's download and prepare the dataset. The utterances are available as `.sph` files, so we will need to convert them to `.wav` for later processing. If you are not using Google Colab, please make sure you have [Sox](http://sox.sourceforge.net/) installed for this step--see the "Downloads" section of the linked Sox homepage. (If you are using Google Colab, Sox should have already been installed in the setup cell at the beginning.)

```
# This is where the an4/ directory will be placed.
# Change this if you don't want the data to be extracted in the current directory.
data_dir = '.'

import glob
import os
import subprocess
import tarfile
import wget

# Download the dataset. This will take a few moments...
print("******")
if not os.path.exists(data_dir + '/an4_sphere.tar.gz'):
    an4_url = 'http://www.speech.cs.cmu.edu/databases/an4/an4_sphere.tar.gz'
    an4_path = wget.download(an4_url, data_dir)
    print(f"Dataset downloaded at: {an4_path}")
else:
    print("Tarfile already exists.")
    an4_path = data_dir + '/an4_sphere.tar.gz'

if not os.path.exists(data_dir + '/an4/'):
    # Untar and convert .sph to .wav (using sox)
    tar = tarfile.open(an4_path)
    tar.extractall(path=data_dir)

    print("Converting .sph to .wav...")
    sph_list = glob.glob(data_dir + '/an4/**/*.sph', recursive=True)
    for sph_path in sph_list:
        wav_path = sph_path[:-4] + '.wav'
        cmd = ["sox", sph_path, wav_path]
        subprocess.run(cmd)
    print("Finished conversion.\n******")
```

You should now have a folder called `an4` that contains `etc/an4_train.transcription`, `etc/an4_test.transcription`, audio files in `wav/an4_clstk` and `wav/an4test_clstk`, along with some other files we will not need.

Now we can load and take a look at the data. As an example, file `cen2-mgah-b.wav` is a 2.6 second-long audio recording of a man saying the letters "G L E N N" one-by-one. To confirm this, we can listen to the file:

```
import librosa
import IPython.display as ipd

# Load and listen to the audio file
example_file = data_dir + '/an4/wav/an4_clstk/mgah/cen2-mgah-b.wav'
audio, sample_rate = librosa.load(example_file)

ipd.Audio(example_file, rate=sample_rate)
```

In an ASR task, if this WAV file was our input, then "G L E N N" would be our desired output. Let's plot the waveform, which is simply a line plot of the sequence of values that we read from the file.
This is a way of viewing audio that you are likely familiar with from many audio editors and visualizers:

```
%matplotlib inline
import librosa.display
import matplotlib.pyplot as plt

# Plot our example audio file's waveform
plt.rcParams['figure.figsize'] = (15,7)
plt.title('Waveform of Audio Example')
plt.ylabel('Amplitude')

_ = librosa.display.waveplot(audio)
```

We can see the activity in the waveform that corresponds to each letter in the audio, as our speaker here enunciates quite clearly! You can kind of tell that each spoken letter has a different "shape," and it's interesting to note that the last two blobs look relatively similar, which is expected because they are both the letter "N."

### Spectrograms and Mel Spectrograms

However, since audio information is more useful in the context of frequencies of sound over time, we can get a better representation than this raw sequence of 57,330 values. We can apply a [Fourier Transform](https://en.wikipedia.org/wiki/Fourier_transform) on our audio signal to get something more useful: a **spectrogram**, which is a representation of the energy levels (i.e. amplitude, or "loudness") of each frequency (i.e. pitch) of the signal over the duration of the file.

A spectrogram (which can be viewed as a heat map) is a good way of seeing how the *strengths of various frequencies in the audio vary over time*, and is obtained by breaking up the signal into smaller, usually overlapping chunks and performing a Short-Time Fourier Transform (STFT) on each. Let's examine what the spectrogram of our sample looks like.
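As an aside, the chunk-and-transform procedure just described can be sketched by hand (with hypothetical frame and hop sizes; Librosa's `stft` handles all of this for us with sensible defaults):

```python
import numpy as np

def manual_stft(signal, n_fft=512, hop=128):
    # Slice the signal into overlapping frames, window each frame, and FFT it.
    window = np.hanning(n_fft)
    frames = [signal[i:i + n_fft] * window
              for i in range(0, len(signal) - n_fft + 1, hop)]
    # Magnitudes, transposed so rows = frequency bins, columns = time frames.
    return np.abs(np.fft.rfft(frames, axis=1)).T

# One second of fake "audio" at 16 kHz, just to show the output shape.
toy_spec = manual_stft(np.random.randn(16000))
print(toy_spec.shape)  # (n_fft // 2 + 1 frequency bins, number of frames)
```

Each column of the result is the frequency content of one short chunk of the signal, which is exactly what gets painted as one vertical slice of the spectrogram heat map.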
```
import numpy as np

# Get spectrogram using Librosa's Short-Time Fourier Transform (stft)
spec = np.abs(librosa.stft(audio))
spec_db = librosa.amplitude_to_db(spec, ref=np.max)  # Decibels

# Use log scale to view frequencies
librosa.display.specshow(spec_db, y_axis='log', x_axis='time')
plt.colorbar()
plt.title('Audio Spectrogram');
```

Again, we are able to see each letter being pronounced, and that the last two blobs that correspond to the "N"s are pretty similar-looking. But how do we interpret these shapes and colors? Just as in the waveform plot before, we see time passing on the x-axis (all 2.6s of audio). But now, the y-axis represents different frequencies (on a log scale), and *the color on the plot shows the strength of a frequency at a particular point in time*.

We're still not done yet, as we can make one more potentially useful tweak: using the **Mel Spectrogram** instead of the normal spectrogram. This is simply a change in the frequency scale that we use from linear (or logarithmic) to the mel scale, which is "a perceptual scale of pitches judged by listeners to be equal in distance from one another" (from [Wikipedia](https://en.wikipedia.org/wiki/Mel_scale)).

In other words, it's a transformation of the frequencies to be more aligned to what humans perceive; a change of +1000Hz from 2000Hz->3000Hz sounds like a larger difference to us than 9000Hz->10000Hz does, so the mel scale normalizes this such that equal distances sound like equal differences to the human ear. Intuitively, we use the mel spectrogram because in this case we are processing and transcribing human speech, such that transforming the scale to better match what we hear is a useful procedure.
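The mel scale itself is just a formula; the common HTK-style variant maps a frequency in Hz to mels as `mel = 2595 * log10(1 + f/700)`. A quick check confirms the perceptual claim above:

```python
import numpy as np

def hz_to_mel(f_hz):
    # HTK-style mel formula: equal steps in mels approximate equal
    # perceived steps in pitch.
    return 2595.0 * np.log10(1.0 + f_hz / 700.0)

# The same +1000 Hz jump covers far more mels at low frequencies:
low = hz_to_mel(3000) - hz_to_mel(2000)    # about 355 mels
high = hz_to_mel(10000) - hz_to_mel(9000)  # about 111 mels
print(low, high)
```

So the 2000Hz->3000Hz jump spans roughly three times as many mels as 9000Hz->10000Hz, matching the intuition that it sounds like a bigger change.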
```
# Plot the mel spectrogram of our sample
mel_spec = librosa.feature.melspectrogram(audio, sr=sample_rate)
mel_spec_db = librosa.power_to_db(mel_spec, ref=np.max)

librosa.display.specshow(
    mel_spec_db, x_axis='time', y_axis='mel')
plt.colorbar()
plt.title('Mel Spectrogram');
```

## Convolutional ASR Models

Let's take a look at the model that we will be building, and how we specify its parameters.

### The Jasper Model

We will be training a small [Jasper (Just Another SPeech Recognizer) model](https://arxiv.org/abs/1904.03288) from scratch (i.e. initialized randomly). In brief, Jasper architectures consist of a repeated block structure that utilizes 1D convolutions. In a Jasper_KxR model, `R` sub-blocks (consisting of a 1D convolution, batch norm, ReLU, and dropout) are grouped into a single block, which is then repeated `K` times. We also have one extra block at the beginning and a few more at the end that are invariant of `K` and `R`, and we use CTC loss.

### The QuartzNet Model

QuartzNet is an improved variant of Jasper; the key difference is that it uses time-channel separable 1D convolutions. This allows it to dramatically reduce the number of weights while keeping similar accuracy.

Jasper/QuartzNet models look like this (a QuartzNet model is pictured):

![QuartzNet with CTC](https://developer.nvidia.com/blog/wp-content/uploads/2020/05/quartznet-model-architecture-1-625x742.png)

# Using NeMo for Automatic Speech Recognition

Now that we have an idea of what ASR is and what the audio data looks like, we can start using NeMo to do some ASR!

We'll be using the **Neural Modules (NeMo) toolkit** for this part, so if you haven't already, you should download and install NeMo and its dependencies. To do so, just follow the directions on the [GitHub page](https://github.com/NVIDIA/NeMo), or in the [documentation](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/v1.0.2/).
NeMo lets us easily hook together the components (modules) of our model, such as the data layer, intermediate layers, and various losses, without worrying too much about implementation details of individual parts or connections between modules. NeMo also comes with complete models which only require your data and hyperparameters for training.

```
# NeMo's "core" package
import nemo

# NeMo's ASR collection - this collection contains complete ASR models and
# building blocks (modules) for ASR
import nemo.collections.asr as nemo_asr
```

## Using an Out-of-the-Box Model

NeMo's ASR collection comes with many building blocks and even complete models that we can use for training and evaluation. Moreover, several models come with pre-trained weights. Let's instantiate a complete QuartzNet15x5 model.

```
# This line will download the pre-trained QuartzNet15x5 model from NVIDIA's NGC cloud and instantiate it for you
quartznet = nemo_asr.models.EncDecCTCModel.from_pretrained(model_name="QuartzNet15x5Base-En")
```

Next, we'll simply add paths to files we want to transcribe into the list and pass it to our model. Note that it will work for relatively short (<25 seconds) files.

```
files = ['./an4/wav/an4_clstk/mgah/cen2-mgah-b.wav']
for fname, transcription in zip(files, quartznet.transcribe(paths2audio_files=files)):
    print(f"Audio in {fname} was recognized as: {transcription}")
```

That was easy! But there are plenty of scenarios where you would want to fine-tune the model on your own data, or even train from scratch. For example, this out-of-the-box model will obviously not work for Spanish and would likely perform poorly for telephone audio. So if you have collected your own data, you certainly should attempt to fine-tune or train on it!

## Training from Scratch

To train from scratch, you need to prepare your training data in the right format and specify your model's architecture.
### Creating Data Manifests

The first thing we need to do now is to create manifests for our training and evaluation data, which will contain the metadata of our audio files. NeMo data sets take in a standardized manifest format where each line corresponds to one sample of audio, such that the number of lines in a manifest is equal to the number of samples that are represented by that manifest. A line must contain the path to an audio file, the corresponding transcript (or path to a transcript file), and the duration of the audio sample.

Here's an example of what one line in a NeMo-compatible manifest might look like:

```
{"audio_filepath": "path/to/audio.wav", "duration": 3.45, "text": "this is a nemo tutorial"}
```

We can build our training and evaluation manifests using `an4/etc/an4_train.transcription` and `an4/etc/an4_test.transcription`, which have lines containing transcripts and their corresponding audio file IDs:

```
...
<s> P I T T S B U R G H </s> (cen5-fash-b)
<s> TWO SIX EIGHT FOUR FOUR ONE EIGHT </s> (cen7-fash-b)
...
```

```
# --- Building Manifest Files --- #
import json

# Function to build a manifest
def build_manifest(transcripts_path, manifest_path, wav_path):
    with open(transcripts_path, 'r') as fin:
        with open(manifest_path, 'w') as fout:
            for line in fin:
                # Lines look like this:
                # <s> transcript </s> (fileID)
                transcript = line[: line.find('(')-1].lower()
                transcript = transcript.replace('<s>', '').replace('</s>', '')
                transcript = transcript.strip()

                file_id = line[line.find('(')+1 : -2]  # e.g. "cen4-fash-b"
                audio_path = os.path.join(
                    data_dir, wav_path,
                    file_id[file_id.find('-')+1 : file_id.rfind('-')],
                    file_id + '.wav')

                duration = librosa.core.get_duration(filename=audio_path)

                # Write the metadata to the manifest
                metadata = {
                    "audio_filepath": audio_path,
                    "duration": duration,
                    "text": transcript
                }
                json.dump(metadata, fout)
                fout.write('\n')

# Building Manifests
print("******")
train_transcripts = data_dir + '/an4/etc/an4_train.transcription'
train_manifest = data_dir + '/an4/train_manifest.json'
if not os.path.isfile(train_manifest):
    build_manifest(train_transcripts, train_manifest, 'an4/wav/an4_clstk')
    print("Training manifest created.")

test_transcripts = data_dir + '/an4/etc/an4_test.transcription'
test_manifest = data_dir + '/an4/test_manifest.json'
if not os.path.isfile(test_manifest):
    build_manifest(test_transcripts, test_manifest, 'an4/wav/an4test_clstk')
    print("Test manifest created.")
print("***Done***")
```

### Specifying Our Model with a YAML Config File

For this tutorial, we'll build a *Jasper_4x1 model*, with `K=4` blocks of single (`R=1`) sub-blocks and a *greedy CTC decoder*, using the configuration found in `./configs/config.yaml`.

If we open up this config file, we find a `model` section which describes the architecture of our model. It contains an entry labeled `encoder`, with a field called `jasper` that contains a list with multiple entries. Each of the members in this list specifies one block in our model, and looks something like this:

```
- filters: 128
  repeat: 1
  kernel: [11]
  stride: [2]
  dilation: [1]
  dropout: 0.2
  residual: false
  separable: true
  se: true
  se_context_size: -1
```

The first member of the list corresponds to the first block in the Jasper architecture diagram, which appears regardless of `K` and `R`. Next, we have four entries that correspond to the `K=4` blocks, and each has `repeat: 1` since we are using `R=1`.
These are followed by two more entries for the blocks that appear at the end of our Jasper model before the CTC loss.

There are also some entries at the top of the file that specify how we will handle training (`train_ds`) and validation (`validation_ds`) data.

Using a YAML config such as this is helpful for getting a quick and human-readable overview of what your architecture looks like, and allows you to swap out model and run configurations easily without needing to change your code.

```
# --- Config Information ---#
try:
    from ruamel.yaml import YAML
except ModuleNotFoundError:
    from ruamel_yaml import YAML

config_path = './configs/config.yaml'
yaml = YAML(typ='safe')
with open(config_path) as f:
    params = yaml.load(f)
print(params)
```

### Training with PyTorch Lightning

NeMo models and modules can be used in any PyTorch code where torch.nn.Module is expected. However, NeMo's models are based on [PytorchLightning's](https://github.com/PyTorchLightning/pytorch-lightning) LightningModule, and we recommend you use PytorchLightning for training and fine-tuning as it makes using mixed precision and distributed training very easy.

So to start, let's create a Trainer instance for training on a GPU for 50 epochs:

```
import pytorch_lightning as pl
trainer = pl.Trainer(gpus=1, max_epochs=50)
```

Next, we instantiate an ASR model based on our `config.yaml` file from the previous section. Note that this is also the stage at which we tell the model where our training and validation manifests are.

```
from omegaconf import DictConfig
params['model']['train_ds']['manifest_filepath'] = train_manifest
params['model']['validation_ds']['manifest_filepath'] = test_manifest
first_asr_model = nemo_asr.models.EncDecCTCModel(cfg=DictConfig(params['model']), trainer=trainer)
```

With that, we can start training with just one line!

```
# Start training!!!
trainer.fit(first_asr_model)
```

There we go! We've put together a full training pipeline for the model and trained it for 50 epochs.
If you'd like to save this model checkpoint for loading later (e.g. for fine-tuning, or for continuing training), you can simply call `first_asr_model.save_to(<checkpoint_path>)`. Then, to restore your weights, you can rebuild the model using the config (let's say you call it `first_asr_model_continued` this time) and call `first_asr_model_continued.restore_from(<checkpoint_path>)`.

### After Training: Monitoring Progress and Changing Hyperparameters

We can now start Tensorboard to see how training went. Recall that WER stands for Word Error Rate, so the lower it is, the better.

```
try:
    from google import colab
    COLAB_ENV = True
except (ImportError, ModuleNotFoundError):
    COLAB_ENV = False

# Load the TensorBoard notebook extension
if COLAB_ENV:
    %load_ext tensorboard
    %tensorboard --logdir lightning_logs/
else:
    print("To use tensorboard, please use this notebook in a Google Colab environment.")
```

We could improve this model by playing with hyperparameters. We can look at the current hyperparameters with the following:

```
print(params['model']['optim'])
```

Let's say we wanted to change the learning rate. To do so, we can create a `new_opt` dict and set our desired learning rate, then call `<model>.setup_optimization()` with the new optimization parameters.

```
import copy
new_opt = copy.deepcopy(params['model']['optim'])
new_opt['lr'] = 0.001
first_asr_model.setup_optimization(optim_config=DictConfig(new_opt))
# And then you can invoke trainer.fit(first_asr_model)
```

## Inference

Let's have a quick look at how one could run inference with NeMo's ASR model.

First, `EncDecCTCModel` and its subclasses contain a handy `transcribe` method which can be used to simply obtain audio files' transcriptions. It also has a `batch_size` argument to improve performance.
```
print(first_asr_model.transcribe(paths2audio_files=['./an4/wav/an4_clstk/mgah/cen2-mgah-b.wav',
                                                    './an4/wav/an4_clstk/fmjd/cen7-fmjd-b.wav',
                                                    './an4/wav/an4_clstk/fmjd/cen8-fmjd-b.wav',
                                                    './an4/wav/an4_clstk/fkai/cen8-fkai-b.wav'],
                                 batch_size=4))
```

Below is an example of a simple inference loop in pure PyTorch. It also shows how one can compute the Word Error Rate (WER) metric between predictions and references.

```
# Bigger batch-size = bigger throughput
params['model']['validation_ds']['batch_size'] = 16

# Setup the test data loader and make sure the model is on GPU
first_asr_model.setup_test_data(test_data_config=params['model']['validation_ds'])
first_asr_model.cuda()

# We will be computing Word Error Rate (WER) metric between our hypothesis and predictions.
# WER is computed as numerator/denominator.
# We'll gather all the test batches' numerators and denominators.
wer_nums = []
wer_denoms = []

# Loop over all test batches.
# Iterating over the model's `test_dataloader` will give us:
# (audio_signal, audio_signal_length, transcript_tokens, transcript_length)
# See the AudioToCharDataset for more details.
for test_batch in first_asr_model.test_dataloader():
    test_batch = [x.cuda() for x in test_batch]
    targets = test_batch[2]
    targets_lengths = test_batch[3]
    log_probs, encoded_len, greedy_predictions = first_asr_model(
        input_signal=test_batch[0], input_signal_length=test_batch[1]
    )
    # Notice the model has a helper object to compute WER
    first_asr_model._wer.update(greedy_predictions, targets, targets_lengths)
    _, wer_num, wer_denom = first_asr_model._wer.compute()
    first_asr_model._wer.reset()
    wer_nums.append(wer_num.detach().cpu().numpy())
    wer_denoms.append(wer_denom.detach().cpu().numpy())

    # Release tensors from GPU memory
    del test_batch, log_probs, targets, targets_lengths, encoded_len, greedy_predictions

# We need to sum all numerators and denominators first. Then divide.
print(f"WER = {sum(wer_nums)/sum(wer_denoms)}")
```

This WER is not particularly impressive and could be significantly improved. You could train longer (try 100 epochs) to get a better number. Check out the next section on how to improve it further.

## Model Improvements

You already have all you need to create your own ASR model in NeMo, but there are a few more tricks that you can employ if you so desire. In this section, we'll briefly cover a few possibilities for improving an ASR model.

### Data Augmentation

There exist several ASR data augmentation methods that can increase the size of our training set. For example, we can perform augmentation on the spectrograms by zeroing out specific frequency segments ("frequency masking") or time segments ("time masking") as described by [SpecAugment](https://arxiv.org/abs/1904.08779), or zero out rectangles on the spectrogram as in [Cutout](https://arxiv.org/pdf/1708.04552.pdf). In NeMo, we can do all three of these by simply adding in a `SpectrogramAugmentation` neural module. (As of now, it does not perform the time warping from the SpecAugment paper.)

Our toy model does not do spectrogram augmentation. But the real one we got from the cloud does:

```
print(quartznet._cfg['spec_augment'])
```

If you want to enable SpecAugment in your model, make sure your .yaml config file contains a 'model/spec_augment' section which looks like the one above.

### Transfer learning

Transfer learning is an important machine learning technique that uses a model's knowledge of one task to make it perform better on another. Fine-tuning is one of the techniques used to perform transfer learning. It is an essential part of the recipe for many state-of-the-art results, where a base model is first pretrained on a task with abundant training data and then fine-tuned on different tasks of interest where the training data is less abundant or even scarce.
In ASR you might want to do fine-tuning in multiple scenarios, for example, when you want to improve your model's performance on a particular domain (medical, financial, etc.) or on accented speech. You can even transfer learn from one language to another! Check out [this paper](https://arxiv.org/abs/2005.04290) for examples.

Transfer learning with NeMo is simple. Let's demonstrate how the model we got from the cloud could be fine-tuned on AN4 data. (NOTE: this is a toy example.) And, while we are at it, we will change the model's vocabulary, just to demonstrate how it's done.

```
# Check what kind of vocabulary/alphabet the model has right now
print(quartznet.decoder.vocabulary)

# Let's add "!" symbol there. Note that you can (and should!) change the vocabulary
# entirely when fine-tuning using a different language.
quartznet.change_vocabulary(
    new_vocabulary=[
        ' ', 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm',
        'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z', "'", "!"
    ]
)
```

After this, our decoder has completely changed, but our encoder (which is where most of the weights are) remained intact. Let's fine-tune this model for 2 epochs on the AN4 dataset. We will also use the smaller learning rate from `new_opt` (see the "After Training" section).

```
# Use the smaller learning rate we set before
quartznet.setup_optimization(optim_config=DictConfig(new_opt))

# Point to the data we'll use for fine-tuning as the training set
quartznet.setup_training_data(train_data_config=params['model']['train_ds'])

# Point to the new validation data for fine-tuning
quartznet.setup_validation_data(val_data_config=params['model']['validation_ds'])

# And now we can create a PyTorch Lightning trainer and call `fit` again.
trainer = pl.Trainer(gpus=1, max_epochs=2)
trainer.fit(quartznet)
```

### Fast Training

Last but not least, we could simply speed up training our model!
If you have the resources, you can speed up training by splitting the workload across multiple GPUs. Otherwise (or in addition), there's always mixed precision training, which allows you to increase your batch size. You can use [PyTorch Lightning's Trainer object](https://pytorch-lightning.readthedocs.io/en/latest/common/trainer.html?highlight=Trainer) to handle mixed-precision and distributed training for you. Below are some examples of flags you would pass to the `Trainer` to use these features: ```python # Mixed precision: trainer = pl.Trainer(amp_level='O1', precision=16) # Trainer with a distributed backend: trainer = pl.Trainer(gpus=2, num_nodes=2, accelerator='ddp') # Of course, you can combine these flags as well. ``` Finally, have a look at [example scripts in NeMo repository](https://github.com/NVIDIA/NeMo/blob/stable/examples/asr/speech_to_text.py) which can handle mixed precision and distributed training using command-line arguments. ### Deployment Note: It is recommended to run the deployment code from the NVIDIA PyTorch container. Let's get back to our pre-trained model and see how easy it can be exported to an ONNX file in order to run it in an inference engine like TensorRT or ONNXRuntime. If you are running in an environment outside of the NVIDIA PyTorch container (like Google Colab for example) then you will have to build the onnxruntime and onnxruntime-gpu. The cell below gives an example of how to build those runtimes but the example may have to be adapted depending on your environment. ``` !pip install --upgrade onnxruntime onnxruntime-gpu #!mkdir -p ort #%cd ort #!git clean -xfd #!git clone --depth 1 --branch v1.8.0 https://github.com/microsoft/onnxruntime.git . 
#!./build.sh --skip_tests --config Release --build_shared_lib --parallel --use_cuda --cuda_home /usr/local/cuda --cudnn_home /usr/lib/x86_64-linux-gnu --build_wheel
#!pip uninstall -y onnxruntime
#!pip uninstall -y onnxruntime-gpu
#!pip install --upgrade --force-reinstall ./build/Linux/Release/dist/onnxruntime*.whl
#%cd ..
```

Then run:

```
import json
import os
import tempfile

import onnxruntime
import torch
import numpy as np

import nemo.collections.asr as nemo_asr
from nemo.collections.asr.data.audio_to_text import AudioToCharDataset
from nemo.collections.asr.metrics.wer import WER


def to_numpy(tensor):
    return tensor.detach().cpu().numpy() if tensor.requires_grad else tensor.cpu().numpy()


def setup_transcribe_dataloader(cfg, vocabulary):
    config = {
        'manifest_filepath': os.path.join(cfg['temp_dir'], 'manifest.json'),
        'sample_rate': 16000,
        'labels': vocabulary,
        'batch_size': min(cfg['batch_size'], len(cfg['paths2audio_files'])),
        'trim_silence': True,
        'shuffle': False,
    }
    dataset = AudioToCharDataset(
        manifest_filepath=config['manifest_filepath'],
        labels=config['labels'],
        sample_rate=config['sample_rate'],
        int_values=config.get('int_values', False),
        augmentor=None,
        max_duration=config.get('max_duration', None),
        min_duration=config.get('min_duration', None),
        max_utts=config.get('max_utts', 0),
        blank_index=config.get('blank_index', -1),
        unk_index=config.get('unk_index', -1),
        normalize=config.get('normalize_transcripts', False),
        trim=config.get('trim_silence', True),
        parser=config.get('parser', 'en'),
    )
    return torch.utils.data.DataLoader(
        dataset=dataset,
        batch_size=config['batch_size'],
        collate_fn=dataset.collate_fn,
        drop_last=config.get('drop_last', False),
        shuffle=False,
        num_workers=config.get('num_workers', 0),
        pin_memory=config.get('pin_memory', False),
    )


quartznet = nemo_asr.models.EncDecCTCModel.from_pretrained(model_name="QuartzNet15x5Base-En")

# Export the model to ONNX and load it into an ONNX Runtime inference session
quartznet.export('qn.onnx')
ort_session = onnxruntime.InferenceSession('qn.onnx')

with tempfile.TemporaryDirectory() as tmpdir:
    # Write a temporary manifest pointing at the audio files to transcribe
    # (`files` is the list of audio file paths defined earlier in this notebook)
    with open(os.path.join(tmpdir, 'manifest.json'), 'w') as fp:
        for audio_file in files:
            entry = {'audio_filepath': audio_file, 'duration': 100000, 'text': 'nothing'}
            fp.write(json.dumps(entry) + '\n')

    config = {'paths2audio_files': files, 'batch_size': 4, 'temp_dir': tmpdir}
    temporary_datalayer = setup_transcribe_dataloader(config, quartznet.decoder.vocabulary)
    for test_batch in temporary_datalayer:
        processed_signal, processed_signal_len = quartznet.preprocessor(
            input_signal=test_batch[0].to(quartznet.device),
            length=test_batch[1].to(quartznet.device)
        )
        ort_inputs = {ort_session.get_inputs()[0].name: to_numpy(processed_signal)}
        ologits = ort_session.run(None, ort_inputs)
        alogits = np.asarray(ologits)
        logits = torch.from_numpy(alogits[0])
        greedy_predictions = logits.argmax(dim=-1, keepdim=False)
        wer = WER(vocabulary=quartznet.decoder.vocabulary, batch_dim_index=0, use_cer=False, ctc_decode=True)
        hypotheses = wer.ctc_decoder_predictions_tensor(greedy_predictions)
        print(hypotheses)
        break
```

## Under the Hood

NeMo is open-source and we do all our model development in the open, so you can inspect our code if you wish.
In particular, `nemo_asr.models.EncDecCTCModel` is an encoder-decoder model which is constructed using several `Neural Modules` taken from `nemo_asr.modules`. Here is what its forward pass looks like:

```python
def forward(self, input_signal, input_signal_length):
    processed_signal, processed_signal_len = self.preprocessor(
        input_signal=input_signal, length=input_signal_length,
    )
    # Spec augment is not applied during evaluation/testing
    if self.spec_augmentation is not None and self.training:
        processed_signal = self.spec_augmentation(input_spec=processed_signal)
    encoded, encoded_len = self.encoder(audio_signal=processed_signal, length=processed_signal_len)
    log_probs = self.decoder(encoder_output=encoded)
    greedy_predictions = log_probs.argmax(dim=-1, keepdim=False)
    return log_probs, encoded_len, greedy_predictions
```

Here:

* `self.preprocessor` is an instance of `nemo_asr.modules.AudioToMelSpectrogramPreprocessor`, a neural module that takes an audio signal and converts it into a Mel spectrogram
* `self.spec_augmentation` is a neural module of type `nemo_asr.modules.SpectrogramAugmentation`, which implements data augmentation
* `self.encoder` is a convolutional Jasper/QuartzNet-like encoder of type `nemo_asr.modules.ConvASREncoder`
* `self.decoder` is a `nemo_asr.modules.ConvASRDecoder`, which simply projects into the target alphabet (vocabulary)

Also, `EncDecCTCModel` uses the audio dataset class `nemo_asr.data.AudioToCharDataset` and the CTC loss implemented in `nemo_asr.losses.CTCLoss`. You can use these and other neural modules (or create new ones yourself!) to construct new ASR models.

# Further Reading/Watching:

That's all for now!
If you'd like to learn more about the topics covered in this tutorial, here are some resources that may interest you: - [Stanford Lecture on ASR](https://www.youtube.com/watch?v=3MjIkWxXigM) - ["An Intuitive Explanation of Connectionist Temporal Classification"](https://towardsdatascience.com/intuitively-understanding-connectionist-temporal-classification-3797e43a86c) - [Explanation of CTC with Prefix Beam Search](https://medium.com/corti-ai/ctc-networks-and-language-models-prefix-beam-search-explained-c11d1ee23306) - [Listen Attend and Spell Paper (seq2seq ASR model)](https://arxiv.org/abs/1508.01211) - [Explanation of the mel spectrogram in more depth](https://towardsdatascience.com/getting-to-know-the-mel-spectrogram-31bca3e2d9d0) - [Jasper Paper](https://arxiv.org/abs/1904.03288) - [QuartzNet paper](https://arxiv.org/abs/1910.10261) - [SpecAugment Paper](https://arxiv.org/abs/1904.08779) - [Explanation and visualization of SpecAugment](https://towardsdatascience.com/state-of-the-art-audio-data-augmentation-with-google-brains-specaugment-and-pytorch-d3d1a3ce291e) - [Cutout Paper](https://arxiv.org/pdf/1708.04552.pdf) - [Transfer Learning Blogpost](https://developer.nvidia.com/blog/jump-start-training-for-speech-recognition-models-with-nemo/)
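Tying this back to the decoding step in the forward pass above: `greedy_predictions` is a per-frame argmax, which still has to be collapsed CTC-style (merge repeated symbols, then drop blanks) before it reads as text — the step that `ctc_decoder_predictions_tensor` took care of in the deployment example. A minimal, self-contained sketch of that post-processing; the toy alphabet and blank index below are illustrative, not NeMo's internals:

```python
def ctc_greedy_collapse(ids, blank_id):
    """Collapse a per-frame argmax sequence: merge repeats, then drop blanks."""
    out = []
    prev = None
    for i in ids:
        if i != prev and i != blank_id:
            out.append(i)
        prev = i
    return out

vocab = [' ', 'a', 'b', 'c']   # toy alphabet
blank_id = len(vocab)          # CTC blank is the extra last class (index 4)

# Per-frame argmax ids spelling "a a _ a _ b b _ c" (with _ = blank)
frames = [1, 1, 4, 1, 4, 2, 2, 4, 3]
labels = ctc_greedy_collapse(frames, blank_id)
print(''.join(vocab[i] for i in labels))  # -> aabc
```

Note how the blank between the repeated `a` frames is what allows a genuinely doubled letter to survive the merge step.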
``` from os import fsdecode import subprocess import math import json from numpy import linalg as la, ma import numpy as np import time import os import julian import matplotlib.pyplot as plt import pandas as pd from numpy.linalg import linalg from scipy.spatial.transform import Rotation as R from scipy.spatial import distance from scipy.stats import burr from datetime import datetime as dt import uuid import sys from pprint import pprint import shutil import astropy import numpy as np import sys from astropy.utils import iers from astropy.time import Time #Define the function that generates/modifies the neptune.inp file and then executes neptune def propagate( inputTypeStateVector = 2, inputTypeCovarianceMatrix = ' 2', beginDate = '2016 07 20 00 31 50.00', endDate = '2016 07 27 00 31 50.00', radiusX = 615.119526, radiusY = -7095.644839, radiusZ = -678.668352, velocityX = 0.390367, velocityY = 0.741902, velocityZ = -7.396980, semiMajorAxis = 6800.59176, eccentricity = 0.0012347, inclination = 98.4076293, rightAscensionOfAscendingNode = 30.3309997, argumentOfPerigee = 68.5606724, trueAnomaly = 91.5725696, variancePositionX = 10., variancePositionY = 100., variancePositionZ = 30., varianceVelocityX = 2., varianceVelocityY = 1., varianceVelocityZ = 1., covMatrix2row = '0.d0', covMatrix3row = '0.d0 0.d0', covMatrix4row = '0.d0 0.d0 0.d0', covMatrix5row = '0.d0 0.d0 0.d0 0.d0', covMatrix6row = '0.d0 0.d0 0.d0 0.d0 0.d0', geopotential = 6, atmosphericDrag = 1, sunGravity = 1, moonGravity = 1, solarRadiationPressure = 1, earthAlbedo = 1, solidEarthTides = 1, oceanTides = 0, orbitalManeuvers = 0, geopotentialModel = 3, atmosphericModel = 2, geopotentialHarmonicSwitch = 0, geopotentialHarmonic = '20 30', shadowModelSwitch = 1, shadowBoundaryCorrection = 0, covariancePropagationSwitch = 1, covariancePropagationDegGeopotential = 36, covariancePropagationAtmosDrag = 1, covariancePropagationSun = 0, covariancePropagationMoon = 0, covariancePropagationSolarRadPressure = 0, 
noiseMatrixComputation = 0, fapDayFile = 'fap_day.dat'): runId = str(uuid.uuid4()) with open("input/neptune.inp", 'r', encoding="utf-8") as f: lines = f.readlines() lines[23] = str(runId) + '\n' lines[44] = str(inputTypeStateVector) + '\n' lines[51] = str(inputTypeCovarianceMatrix) + '\n' lines[59] = str(beginDate) + '\n' lines[60] = str(endDate) + '\n' lines[66] = str(radiusX) + 'd0 \n' lines[67] = str(radiusY) + 'd0 \n' lines[68] = str(radiusZ) + 'd0 \n' lines[69] = str(velocityX) + 'd0 \n' lines[70] = str(velocityY) + 'd0 \n' lines[71] = str(velocityZ) + 'd0 \n' lines[75] = str(semiMajorAxis) + '\n' lines[76] = str(eccentricity) + '\n' lines[77] = str(inclination) + '\n' lines[78] = str(rightAscensionOfAscendingNode) + '\n' lines[79] = str(argumentOfPerigee) + '\n' lines[80] = str(trueAnomaly) + '\n' lines[84] = str(variancePositionX) + 'd0 \n' lines[85] = str(variancePositionY) + 'd0 \n' lines[86] = str(variancePositionZ) + 'd0 \n' lines[87] = str(varianceVelocityX) + 'd-4 \n' lines[88] = str(varianceVelocityY) + 'd-4 \n' lines[89] = str(varianceVelocityZ) + 'd-4 \n' lines[91] = str(covMatrix2row) + '\n' lines[92] = str(covMatrix3row) + '\n' lines[93] = str(covMatrix4row) + '\n' lines[94] = str(covMatrix5row) + '\n' lines[95] = str(covMatrix6row) + '\n' lines[105] = str(geopotential) + '\n' lines[106] = str(atmosphericDrag) + '\n' lines[107] = str(sunGravity) + '\n' lines[108] = str(moonGravity) + '\n' lines[109] = str(solarRadiationPressure) + '\n' lines[110] = str(earthAlbedo) + '\n' lines[111] = str(solidEarthTides) + '\n' lines[112] = str(oceanTides) + '\n' lines[113] = str(orbitalManeuvers) + '\n' lines[120] = str(geopotentialModel) + '\n' lines[127] = str(atmosphericModel) + '\n' lines[135] = str(geopotentialHarmonicSwitch) + '\n' lines[136] = str(geopotentialHarmonic) + '\n' lines[140] = str(shadowModelSwitch) + '\n' lines[141] = str(shadowBoundaryCorrection) + '\n' lines[145] = str(covariancePropagationSwitch) + '\n' lines[146] = 
str(covariancePropagationDegGeopotential) + '\n' lines[147] = str(covariancePropagationAtmosDrag) + '\n' lines[148] = str(covariancePropagationSun) + '\n' lines[149] = str(covariancePropagationMoon) + '\n' lines[150] = str(covariancePropagationSolarRadPressure) + '\n' lines[157] = str(noiseMatrixComputation) + '\n' lines[246] = str(fapDayFile) + '\n' with open("input/neptune.inp", 'w', encoding="utf-8") as file: file.writelines(lines) input_dict = { 'runId': runId, 'inputTypeStateVector': inputTypeStateVector, 'inputTypeCovarianceMatrix': inputTypeCovarianceMatrix, 'beginDate': beginDate, 'endDate': endDate, 'radiusX': radiusX, 'radiusY': radiusY, 'radiusZ': radiusZ, 'velocityX': velocityX, 'velocityY': velocityY, 'velocityZ': velocityZ, 'semiMajorAxis': semiMajorAxis, 'eccentricity': eccentricity, 'inclination': inclination, 'rightAscensionOfAscendingNode': rightAscensionOfAscendingNode, 'argumentOfPerigee': argumentOfPerigee, 'trueAnomaly': trueAnomaly, 'variancePositionX': variancePositionX, 'variancePositionY': variancePositionY, 'variancePositionZ': variancePositionZ, 'varianceVelocityX': varianceVelocityX, 'varianceVelocityY': varianceVelocityY, 'varianceVelocityZ': varianceVelocityZ, 'covMatrix2row': covMatrix2row, 'covMatrix3row': covMatrix3row, 'covMatrix4row': covMatrix4row, 'covMatrix5row': covMatrix5row, 'covMatrix6row': covMatrix6row, 'geopotential': geopotential, 'atmosphericDrag': atmosphericDrag, 'sunGravity': sunGravity, 'moonGravity': moonGravity, 'solarRadiationPressure': solarRadiationPressure, 'earthAlbedo': earthAlbedo, 'solidEarthTides': solidEarthTides, 'oceanTides': oceanTides, 'orbitalManeuvers': orbitalManeuvers, 'geopotentialModel': geopotentialModel, 'atmosphericModel': atmosphericModel, 'geopotentialHarmonicSwitch': geopotentialHarmonicSwitch, 'geopotentialHarmonic': geopotentialHarmonic, 'shadowModelSwitch': shadowModelSwitch, 'shadowBoundaryCorrection': shadowBoundaryCorrection, 'covariancePropagationSwitch': 
covariancePropagationSwitch, 'covariancePropagationDegGeopotential': covariancePropagationDegGeopotential, 'covariancePropagationAtmosDrag': covariancePropagationAtmosDrag, 'covariancePropagationSun' : covariancePropagationSun, 'covariancePropagationMoon': covariancePropagationMoon, 'covariancePropagationSolarRadPressure': covariancePropagationSolarRadPressure, 'noiseMatrixComputation': noiseMatrixComputation, 'fapDayFile': fapDayFile } subprocess.call("../bin/neptune-sa", stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL) output_dir = os.path.join("output",str(input_dict['runId'])) os.mkdir(output_dir) for filetype in ['.acc', '.csv', '.cvu', '.osc', '.vru']: filename = str(input_dict['runId']) + str(filetype) src = os.path.join("output", filename) dst = os.path.join("output",str(input_dict['runId'])) shutil.move(src,dst) filename = str(input_dict['runId']) + '.json' filepath = os.path.join("output",str(input_dict['runId']), filename) with open(filepath, 'w') as f: json.dump(input_dict, f) return runId # Override runID # a81d2b74-9cfb-4590-a5b8-84088184112e geopotential = 36 # 9783a265-62c7-4f40-9c40-0bd5463a24a2 geopotential = 6 def plot_rundId(runId): filename = str(runId) + '.json' input_json_path = os.path.join("output", str(runId), filename) with open(input_json_path) as json_file: input_json = json.load(json_file) print('Geopotential: ' + str(input_json['geopotential'])) ###################################################### # Plot the covariances ###################################################### filename = str(runId) + ".vru" output_file_path = os.path.join("output", str(runId), filename) # read file to pandas data frame data = pd.read_table( output_file_path, comment='#', header=None, sep='\s+', names=['date','time','mjd','rx','ry','rz','vx','vy','vz'], parse_dates=[[0,1]] ) data_labels = ['rx','ry','rz','vx','vy','vz'] data[data_labels] = data[data_labels].apply(np.sqrt) data[data_labels] = data[data_labels].multiply(1000.0) # strip MJD data = 
data[['date_time', 'rx', 'ry', 'rz', 'vx', 'vy', 'vz']] ###################################################### # Plot the kepler elements ###################################################### filename = str(runId) + ".osc" output_file_path = os.path.join("output", str(runId), filename) # now plot data.plot(x='date_time', subplots=True, sharex=True, title='$1\sigma$ errors (r in m, v in m/s)') plt.show() data = pd.read_table( output_file_path, comment='#', header=None, sep='\s+', names=['date','time','mjd','sma','ecc','inc','raan','aop','tran','mean'], parse_dates=[[0,1]] ) # strip MJD sma = data[['date_time', 'sma']] # now plot sma.plot( x='date_time', subplots=True, sharex=True, title='SMA (km, deg)', color='c' ) # set the #plt.xlim([dt(2016, 7, 21), dt(2016, 7, 23)]) plt.show() # strip MJD ecc = data[['date_time', 'ecc']] # now plot ecc.plot( x='date_time', subplots=True, sharex=True, title='ecc (km, deg)', color='r' ) # set the #plt.xlim([dt(2016, 7, 21), dt(2016, 7, 23)]) plt.show() # strip MJD inc = data[['date_time', 'inc']] # now plot inc.plot( x='date_time', subplots=True, sharex=True, title='inc (km, deg)', color='b' ) # set the #plt.xlim([dt(2016, 7, 21), dt(2016, 7, 23)]) plt.show() # strip MJD raan = data[['date_time', 'raan']] # now plot raan.plot( x='date_time', subplots=True, sharex=True, title='Raan (km, deg)', color='y' ) # set the #plt.xlim([dt(2016, 7, 21), dt(2016, 7, 23)]) plt.show() # strip MJD data = data[['date_time', 'aop']] data['aop'] = data['aop'].apply(lambda x: math.radians(x)) data['aop'] = np.unwrap(data['aop'].tolist()) # now plot data.plot( x='date_time', subplots=True, sharex=True, title='aop (km, deg)', color = 'k' ) # set the #plt.xlim([dt(2016, 7, 21), dt(2016, 7, 23)]) plt.show() def fap_day_modifier(f10Mod = 0, f3mMod = 0, ssnMod = 0, apMod = 0): with open("data/fap_day.dat", 'r') as f: lines = f.readlines() for i in range(2, len(lines)): splitLine = lines[i].split() f10 = int(splitLine[1]) + int(f10Mod) splitLine[1] = 
str(f10).zfill(3) f3m = int(splitLine[2]) + int(f3mMod) splitLine[2] = str(f3m).zfill(3) ssn = int(splitLine[3]) + int(ssnMod) splitLine[3] = str(ssn).zfill(3) ap = int(splitLine[4]) + int(apMod) splitLine[4] = str(ap).zfill(3) splitLine.append('\n') lines[i] = ' '.join(splitLine) with open("data/fap_day_modified.dat", 'w') as file: file.writelines(lines) fap_day_modifier() def get_rtn_matrix(state_vector): r = state_vector[0:3] v = state_vector[3:6] rxv = np.cross(r, v) vecRTN = np.empty([3,3],float) # process vector R vecRTN[0,:] = np.divide(r,np.linalg.norm(r)) # process vector W vecRTN[2,:] = np.divide(rxv,np.linalg.norm(rxv)) # process vector S vecRTN[1,:] = np.cross(vecRTN[2,:], vecRTN[0,:]) return vecRTN # Run propagation with a normal distribution on the f10.7 values numberIterations = 100 # Generate f10.7 modifier list using normal distribution f10ModList = np.random.normal(0.0, 20, numberIterations) apModList = np.random.normal(0.0, 7, numberIterations) # Initialise the runIdList runIdList = [] # Set variables (no trailing commas here: they would turn these date strings into tuples) noiseMatrixComputation = 0 covariancePropagationSwitch = 0 beginDate = '2016 07 20 00 31 50.00' endDate = '2016 07 27 00 31 50.00' # Create an initial "unmodified" reference propagation runIdList.append(propagate(beginDate = beginDate, endDate = endDate,noiseMatrixComputation=noiseMatrixComputation, covariancePropagationSwitch=covariancePropagationSwitch)) for i in range(0, len(f10ModList)): fap_day_modifier(f10Mod=f10ModList[i]) runIdList.append(propagate(beginDate = beginDate, endDate = endDate,noiseMatrixComputation=noiseMatrixComputation, covariancePropagationSwitch=covariancePropagationSwitch, fapDayFile='fap_day_modified.dat',)) plt.hist(f10ModList) plt.show() # x = [] # y = [] # z = [] # u = [] # v = [] # w = [] r = [] t = [] n = [] # Calculate required data from the reference propagation filename = str(runIdList[0]) + ".csv" output_file_path = os.path.join("output", str(runIdList[0]), filename) data = pd.read_table(
output_file_path, comment='#', header=None, sep='\s+', names=['date','time','mjd','x','y','z','u','v','w'], parse_dates=[[0,1]] ) stateVector = [ data.tail(1)['x'].values[0], data.tail(1)['y'].values[0], data.tail(1)['z'].values[0], data.tail(1)['u'].values[0], data.tail(1)['v'].values[0], data.tail(1)['w'].values[0]] rtnMatrix = get_rtn_matrix(state_vector=stateVector) rtnMatrix = np.array(rtnMatrix) stateVector = np.array([ data.tail(1)['x'].values[0], data.tail(1)['y'].values[0], data.tail(1)['z'].values[0] ]) RTN1 = np.dot(rtnMatrix, stateVector) # filename = str(runIdList[0]) + ".vru" # output_file_path = os.path.join("output", str(runIdList[0]), filename) # # read file to pandas data frame # data = pd.read_table( # output_file_path, # comment='#', # header=None, # sep='\s+', # names=['date','time','mjd','rx','ry','rz','vx','vy','vz'], parse_dates=[[0,1]] # ) # data_labels = ['rx','ry','rz','vx','vy','vz'] # data[data_labels] = data[data_labels].apply(np.sqrt) # #data[data_labels] = data[data_labels].multiply(1000.0) # # strip MJD # data = data[['date_time', 'rx', 'ry', 'rz', 'vx', 'vy', 'vz']] # covarianceVector = np.array([data.tail(1)['rx'].values[0], data.tail(1)['ry'].values[0], data.tail(1)['ry'].values[0]]) # covarianceVectorRTN = np.dot(rtnMatrix, covarianceVector) print("Number of Propagations: " + str(len(runIdList))) for i in range(1, len(runIdList)): filename = str(runIdList[i]) + ".csv" output_file_path = os.path.join("output", str(runIdList[i]), filename) data = pd.read_table( output_file_path, comment='#', header=None, sep='\s+', names=['date','time','mjd','x','y','z','u','v','w'], parse_dates=[[0,1]] ) stateVector = np.array([ data.tail(1)['x'].values[0], data.tail(1)['y'].values[0], data.tail(1)['z'].values[0] ]) RTN2 = np.dot(rtnMatrix, stateVector) r.append(RTN1[0]-RTN2[0]) t.append(RTN1[1]-RTN2[1]) n.append(RTN1[2]-RTN2[2]) plt.hist(r, bins=40) plt.xlabel("km") plt.title("Radial Standard Deviation") plt.axvline(0, color='k', 
linestyle='dashed', linewidth=2) #plt.axvline(covarianceVectorRTN[0], color='r', linestyle='dashed', linewidth=1) #plt.axvline(-covarianceVectorRTN[0], color='r', linestyle='dashed', linewidth=1) plt.show() plt.hist(t, bins=25) plt.xlabel("km") plt.title("Tangential Standard Deviation") plt.axvline(0, color='k', linestyle='dashed', linewidth=2) # plt.axvline(covarianceVectorRTN[1], color='r', linestyle='dashed', linewidth=1) # plt.axvline(-covarianceVectorRTN[1], color='r', linestyle='dashed', linewidth=1) plt.show() plt.hist(n, bins=25) plt.xlabel("km") plt.title("Normal Standard Deviation") plt.axvline(0, color='k', linestyle='dashed', linewidth=2) #plt.axvline(-covarianceVectorRTN[2], color='r', linestyle='dashed', linewidth=1) plt.show() # Override runID # a81d2b74-9cfb-4590-a5b8-84088184112e geopotential = 36 # 9783a265-62c7-4f40-9c40-0bd5463a24a2 geopotential = 6 for id in ['9783a265-62c7-4f40-9c40-0bd5463a24a2','a81d2b74-9cfb-4590-a5b8-84088184112e']: plot_rundId(id) filename = "fap_day.dat" output_file_path = os.path.join("data", filename) # read file to pandas data frame data = pd.read_table( output_file_path, comment='#', header=None, sep='\s+', names=['date', 'F10', 'F3M', 'SSN', 'Ap'], parse_dates=[1] ) print(data) # now plot data.plot(x='date', subplots=True, sharex=True, title='Space Weather') plt.show() # set the #plt.xlim([dt(2016, 7, 21), dt(2016, 7, 23)]) print(len(runIdList)) ```
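As a quick sanity check of the `get_rtn_matrix` construction used above: its rows should form a right-handed orthonormal basis (radial, along-track, cross-track). A small self-contained verification with `numpy` — the state vector here is just an illustrative value, not one of the propagated states:

```python
import numpy as np

def get_rtn_matrix(state_vector):
    # Same construction as above: row 0 along r (R), row 2 along r x v (N/W),
    # row 1 completes the right-handed triad (T/S).
    r = state_vector[0:3]
    v = state_vector[3:6]
    rxv = np.cross(r, v)
    vecRTN = np.empty([3, 3], float)
    vecRTN[0, :] = np.divide(r, np.linalg.norm(r))
    vecRTN[2, :] = np.divide(rxv, np.linalg.norm(rxv))
    vecRTN[1, :] = np.cross(vecRTN[2, :], vecRTN[0, :])
    return vecRTN

state = np.array([615.12, -7095.64, -678.67, 0.390, 0.742, -7.397])
M = get_rtn_matrix(state)

# Rotation-matrix checks: M @ M.T == I (orthonormal rows) and det(M) == +1 (right-handed),
# so applying M to a position difference expresses it in the RTN frame without distortion.
assert np.allclose(M @ M.T, np.eye(3))
assert np.isclose(np.linalg.det(M), 1.0)
```

This is why `np.dot(rtnMatrix, stateVector)` can be applied directly to the Monte Carlo end states above: it is a pure rotation into the radial/tangential/normal frame.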
# Continuous Control --- In this notebook, you will learn how to use the Unity ML-Agents environment for the second project of the [Deep Reinforcement Learning Nanodegree](https://www.udacity.com/course/deep-reinforcement-learning-nanodegree--nd893) program. ### 1. Start the Environment We begin by importing the necessary packages. If the code cell below returns an error, please revisit the project instructions to double-check that you have installed [Unity ML-Agents](https://github.com/Unity-Technologies/ml-agents/blob/master/docs/Installation.md) and [NumPy](http://www.numpy.org/). ``` import torch import numpy as np import pandas as pd from collections import deque from unityagents import UnityEnvironment import random import matplotlib.pyplot as plt %matplotlib inline from ddpg_agent import Agent ``` Next, we will start the environment! **_Before running the code cell below_**, change the `file_name` parameter to match the location of the Unity environment that you downloaded. - **Mac**: `"path/to/Reacher.app"` - **Windows** (x86): `"path/to/Reacher_Windows_x86/Reacher.exe"` - **Windows** (x86_64): `"path/to/Reacher_Windows_x86_64/Reacher.exe"` - **Linux** (x86): `"path/to/Reacher_Linux/Reacher.x86"` - **Linux** (x86_64): `"path/to/Reacher_Linux/Reacher.x86_64"` - **Linux** (x86, headless): `"path/to/Reacher_Linux_NoVis/Reacher.x86"` - **Linux** (x86_64, headless): `"path/to/Reacher_Linux_NoVis/Reacher.x86_64"` For instance, if you are using a Mac, then you downloaded `Reacher.app`. If this file is in the same folder as the notebook, then the line below should appear as follows: ``` env = UnityEnvironment(file_name="Reacher.app") ``` ``` env = UnityEnvironment(file_name="Reacher1.app") ``` Environments contain **_brains_** which are responsible for deciding the actions of their associated agents. Here we check for the first brain available, and set it as the default brain we will be controlling from Python. 
``` # get the default brain brain_name = env.brain_names[0] brain = env.brains[brain_name] ``` ### 2. Examine the State and Action Spaces In this environment, a double-jointed arm can move to target locations. A reward of `+0.1` is provided for each step that the agent's hand is in the goal location. Thus, the goal of your agent is to maintain its position at the target location for as many time steps as possible. The observation space consists of `33` variables corresponding to position, rotation, velocity, and angular velocities of the arm. Each action is a vector with four numbers, corresponding to torque applicable to two joints. Every entry in the action vector must be a number between `-1` and `1`. Run the code cell below to print some information about the environment. ``` # reset the environment env_info = env.reset(train_mode=True)[brain_name] # number of agents num_agents = len(env_info.agents) print('Number of agents:', num_agents) # size of each action action_size = brain.vector_action_space_size print('Size of each action:', action_size) # examine the state space states = env_info.vector_observations state_size = states.shape[1] print('There are {} agents. Each observes a state with length: {}'.format(states.shape[0], state_size)) print('The state for the first agent looks like:', states[0]) ``` ### 3. Take Random Actions in the Environment In the next code cell, you will learn how to use the Python API to control the agent and receive feedback from the environment. Once this cell is executed, you will watch the agent's performance, if it selects an action at random with each time step. A window should pop up that allows you to observe the agent, as it moves through the environment. Of course, as part of the project, you'll have to change the code so that the agent is able to use its experience to gradually choose better actions when interacting with the environment! 
```
env_info = env.reset(train_mode=False)[brain_name]      # reset the environment
states = env_info.vector_observations                   # get the current state (for each agent)
scores = np.zeros(num_agents)                           # initialize the score (for each agent)
while True:
    actions = np.random.randn(num_agents, action_size)  # select an action (for each agent)
    actions = np.clip(actions, -1, 1)                   # all actions between -1 and 1
    env_info = env.step(actions)[brain_name]            # send all actions to the environment
    next_states = env_info.vector_observations          # get next state (for each agent)
    rewards = env_info.rewards                          # get reward (for each agent)
    dones = env_info.local_done                         # see if episode finished
    scores += env_info.rewards                          # update the score (for each agent)
    states = next_states                                # roll over states to next time step
    if np.any(dones):                                   # exit loop if episode finished
        break
print('Total score (averaged over agents) this episode: {}'.format(np.mean(scores)))
```

### 4. It's Your Turn!

Now it's your turn to train your own agent to solve the environment!
When training the environment, set `train_mode=True`, so that the line for resetting the environment looks like the following: ```python env_info = env.reset(train_mode=True)[brain_name] ``` ``` agent = Agent(state_size=state_size, action_size=action_size, n_agents=num_agents, random_seed=42) def plot_scores(scores, rolling_window=10, save_fig=False): """Plot scores and optional rolling mean using specified window.""" fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(len(scores)), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.title(f'scores') rolling_mean = pd.Series(scores).rolling(rolling_window).mean() plt.plot(rolling_mean); if save_fig: plt.savefig(f'figures_scores.png', bbox_inches='tight', pad_inches=0) def ddpg(n_episodes=10000, max_t=1000, print_every=100): scores_deque = deque(maxlen=print_every) scores = [] for i_episode in range(1, n_episodes+1): env_info = env.reset(train_mode=True)[brain_name] states = env_info.vector_observations agent.reset() score = np.zeros(num_agents) for t in range(max_t): actions = agent.act(states) env_info = env.step(actions)[brain_name] next_states = env_info.vector_observations # get next state (for each agent) rewards = env_info.rewards # get reward (for each agent) dones = env_info.local_done # see if episode finished agent.step(states, actions, rewards, next_states, dones) states = next_states score += rewards if any(dones): break scores_deque.append(np.mean(score)) scores.append(np.mean(score)) print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_deque)), end="") torch.save(agent.actor_local.state_dict(), './weights/checkpoint_actor.pth') torch.save(agent.critic_local.state_dict(), './weights/checkpoint_critic.pth') if i_episode % print_every == 0: print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_deque))) plot_scores(scores) if np.mean(scores_deque) >= 30.0: print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode - 
print_every, np.mean(scores_deque))) torch.save(agent.actor_local.state_dict(), './weights/checkpoint_actor.pth') torch.save(agent.critic_local.state_dict(), './weights/checkpoint_critic.pth') break return scores scores = ddpg() fig = plt.figure() ax = fig.add_subplot(111) plt.plot(np.arange(1, len(scores)+1), scores) plt.ylabel('Score') plt.xlabel('Episode #') plt.show() plot_scores(scores) ``` When finished, you can close the environment. ``` env.close() ```
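The per-agent bookkeeping used in the loops above — accumulate each step's reward vector into a running score per agent, and stop as soon as any agent reports done — can be illustrated without the Unity environment. A self-contained sketch with `numpy`, using made-up reward data (the numbers are arbitrary):

```python
import numpy as np

num_agents = 3
scores = np.zeros(num_agents)  # one running score per agent

# Fake (rewards, dones) pairs for three steps; an agent finishes on the third step.
episode = [
    (np.array([0.50, 0.00, 0.25]), [False, False, False]),
    (np.array([0.25, 0.50, 0.25]), [False, False, False]),
    (np.array([0.00, 0.25, 0.25]), [False, True, False]),
]

for rewards, dones in episode:
    scores += rewards        # update the score (for each agent)
    if np.any(dones):        # exit loop if any agent finished
        break

print('Total score (averaged over agents):', np.mean(scores))  # -> 0.75
```

This averaged score is the same quantity that the `ddpg` loop above compares against the `30.0` solving threshold.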
<a href="https://colab.research.google.com/github/GMTAccount/Projets-scolaires/blob/main/Projet_Gomoku.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> <h1> List of known issues </h1> <ul> <li style="color:green;"> Re-center the first point (done) </li> <li> Problem with where values are placed </li> <li> Problem with the values placed on the matrix </li> <li> Problem with infinite values </li> <li> Utility function missing </li> </ul> ``` print ( "hello" , " test ") print("test2") #function 1: initialization method (creates the game board) V #function 2: player assignment V #function 3: board display + with coordinates V #function 4: validated move input + display of the move chosen by the AI (in progress) #function 5: menu V #function 5.5: input of the first stones (in progress) #function 6: timer V #function 7: minimax function V #function 8: TerminalTest returns 0 (draw), 1 (we won), 2 (opponent won), -1 (game not finished) V #function 9: Actions (lists all the possible actions) V #function 10: Utility (determines which action is best) (in progress) #function 11: decision/Result (applies the action) V #class Gomoku V #attributes: matrix of 0 1 2 #joueur: boolean #methods: display #input etc. #init timer #MAIN: menu #0: empty, 1: occupied by us, 2: occupied by the opponent import time def AfficherPlateau(plateau): rep ="" lettres= {"A" : 1 , "B" : 2 , "C" : 3, "D" : 4, "E" : 5, "F": 6 , "G" : 7 , "H" :8 , "I" :9 , "J" : 10 , "K" :11 , "L" :12 , "M" :13 , "N":14 , "O": 15 } rep += "\t " rep += " ".join(lettres) rep += "\n" for i in range(15): for j in range(15): if plateau[i][j]==0: if j==0: if i<9: rep += "\t " +str(i+1) + " . " else: rep += "\t" + str(i+1) +" . " else: rep += ". 
" elif plateau[i][j]==1: if j==0: if i<9: rep += "\t " + str(i+1) + " O " else: rep += "\t" + str(i+1) + " O " else: rep += "O " else: if j==0: if i<9: rep += "\t " + str(i+1) + " X " else: rep += "\t" + str(i+1) +" X " else: rep += "X " rep += "\n" print (rep) return rep plateau= [[ 0 for x in range (15)] for y in range(15)] def Jouable(ligne,col, plateau): #ok if (col < 0 or col > 14 or ligne >14 or ligne < 0): return 0 elif (plateau[ligne][col] == 0): return 1 else: return 0 def EstNul(plateau): #ok for i in range(15): for j in range(15): if(Jouable(i,j,plateau)==1): return False return True def TerminalTest(plateau): #ok for i in range(15): for j in range(15): if(j>= 4): if(pow(plateau[i][j]+plateau[i][j-1]+plateau[i][j-2]+plateau[i][j-3]+plateau[i][j-4],2) == 25): return True elif(i >= 4): if(pow(plateau[i][j]+plateau[i-1][j-1]+plateau[i-2][j-2]+plateau[i-3][j-3]+plateau[i-4][j-4],2) == 25): return True elif(i<11): if(pow(plateau[i][j]+plateau[i+1][j-1]+plateau[i+2][j-2]+plateau[i+3][j-3]+plateau[i+4][j-4],2) == 25): return True if(j< 11): if(pow(plateau[i][j]+plateau[i][j+1]+plateau[i][j+2]+plateau[i][j+3]+plateau[i][j+4],2) == 25): return True elif(i >= 4): if(pow(plateau[i][j]+plateau[i-1][j+1]+plateau[i-2][j+2]+plateau[i-3][j+3]+plateau[i-4][j+4],2) == 25): return True elif(i<11): if(pow(plateau[i][j]+plateau[i+1][j+1]+plateau[i+2][j+2]+plateau[i+3][j+3]+plateau[i+4][j+4],2) == 25): return True if(i >= 4): if(pow(plateau[i][j]+plateau[i-1][j]+plateau[i-2][j]+plateau[i-3][j]+plateau[i-4][j],2) == 25): return True if(i<11): if(pow(plateau[i][j]+plateau[i+1][j]+plateau[i+2][j]+plateau[i+3][j]+plateau[i+4][j],2) == 25): return True return False def Saisie(plateau, Joueur): #ok secu = False lettres= {"A" : 1 , "B" : 2 , "C" : 3 , "D" : 4, "E" : 5, "F": 6 , "G" : 7 , "H" :8 , "I" :9 , "J" : 10 , "K" :11 , "L" :12 , "M" :13 , "N":14 , "O": 15 } while secu == False : print("Veuillez saisir les coordonnées de la case que vous voulez jouer") print("(Séparez les axes 
par un point virgule, par exemple D;7)") saisie = input() try: saisieTab = saisie.split(";") if len (saisieTab) == 2 and int(saisieTab[1]) > 0 and int(saisieTab[1]) <= len(lettres) and "".join(saisieTab[0].split() ).upper() in lettres : secu = True saisiex = int(saisieTab[1]) saisiey = "".join(saisieTab[0].split() ).upper() if plateau[saisiex-1][lettres[saisiey]-1] == 1 or plateau[saisiex-1][lettres[saisiey]-1] == -1 : secu = False except: print("Saisie invalide, veuillez réessayer") return (lettres[saisiey]-1,saisiex-1) print('\nC\'est au tour du joueur', Joueur, ' de jouer !') print('\nChoisissez la colonne où vous voulez mettre votre pion ', end ='') col=int(input())-1 print('\nChoisissez la ligne ou vous voulez mettre votre pion', end='') ligne=int(input())-1 if(col < 0 or col > 14 or Jouable(ligne,col, plateau) == 0): while(col < 0 or col > 14 or Jouable(ligne, col, plateau) == 0): print('Vous ne pouvez pas choisir cette colonne, choisissez une colonne valide !') col = int(input()) if(ligne<0 or ligne>14 or Jouable(ligne,col,plateau)==0): while(ligne<0 or ligne>14 or Jouable(ligne,col,plateau)==0): print('Vous ne pouvez pas choisir cette ligne, choisissez une ligne valide !') ligne=int(input()) print('Vous avez joué la colonne', col+1, 'et la ligne', ligne+1, '\n') return col, ligne def MinMax(plateau,Joueur): Fin=False J=1 while(Fin==False): if(Joueur==J): col,ligne=Saisie(plateau,Joueur) #print(ligne,col) plateau[ligne][col]= 1 else: print('\nC\'est au tour de l\'IA de jouer') start = time.time() [ligne,col] = JeuxIA(plateau,3,-float("inf"),float("inf"),True)[1] print(ligne+1,col+1) end = time.time() print('Temps de réponse: {}s'.format(round(end - start, 7))) plateau[ligne][col]= -1 AfficherPlateau(plateau) if(TerminalTest(plateau)): print('\nLe joueur', J, 'a gagné ! \nVoulez-vous rejouer ? (y/n) ', end = '') rejouer=str(input) Fin=True elif(EstNul(plateau)): print('\nMatch nul, voulez-vous rejouer ? 
(y/n) ', end = '') rejouer=str(input()) Fin=True if(J==1): J=2 else: J=1 return rejouer def JeuxIA(M, profondeur,alpha,beta,maximizingPlayer): actions=Actions(M) if profondeur == 0 or TerminalTest(M) == True or EstNul(M) == True: cout = Utility(M,maximizingPlayer) return [cout,[0,0]] elif maximizingPlayer == True: maxEval = [-1000000,[0,0]] for i in range(len(actions)): evale = JeuxIA(Result(M,actions[i][0],actions[i][1],True),profondeur-1,alpha,beta,False) evale[1] = [actions[i][0],actions[i][1]] if evale[0] > maxEval[0]: maxEval[0] = evale[0] maxEval[1] = evale[1] alpha = max(alpha,evale[0]) if beta <= alpha : break return maxEval else: minEval = [1000000,[0,0]] for i in range(len(actions)): evale = JeuxIA(Result(M,actions[i][0],actions[i][1],False),profondeur-1,alpha,beta,True) evale[1] = [actions[i][0],actions[i][1]] if evale[0] < minEval[0]: minEval[0] = evale[0] minEval[1] = evale[1] beta = min(beta,evale[0]) if beta <= alpha: break return minEval def Actions(state): # ok ù actions=[] for a in range(15): for b in range(15): if Jouable(a,b,state)==0: if Jouable(a-1,b-1,state) == 1 and actions.count([a-1,b-1])== 0 : actions.append([a-1,b-1]) if Jouable(a-1,b,state) == 1 and actions.count([a-1,b]) == 0: actions.append([a-1,b]) if Jouable(a-1,b+1,state) == 1 and actions.count([a-1,b+1]) == 0: actions.append([a-1,b+1]) if Jouable(a,b-1,state) == 1 and actions.count([a,b-1]) == 0: actions.append([a,b-1]) if Jouable(a,b+1,state) == 1 and actions.count([a,b+1]) == 0: actions.append([a,b+1]) if Jouable(a+1,b-1,state) == 1 and actions.count([a+1,b-1]) == 0: actions.append([a+1,b-1]) if Jouable(a+1,b,state) == 1 and actions.count([a+1,b]) == 0: actions.append([a+1,b]) if Jouable(a+1,b+1,state) == 1 and actions.count([a+1,b+1]) == 0: actions.append([a+1,b+1]) if len(actions) == 0: actions.append([6,6]) return actions def Result(state, a, b, IA): newState = [[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]] for i in range(15): newState[i] = state[i][:] if IA == True : 
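# Illustrative aside (not part of the original code): in this program the
# AI's stones are encoded as -1 and the human's as 1, so passing IA=True to
# Result places an AI stone.  For example, on an empty 15x15 board:
#   Result(plateau, 7, 7, True)[7][7]  -> -1   (AI stone)
#   Result(plateau, 7, 7, False)[7][7] ->  1   (human stone)
# Result copies each row (state[i][:]) before writing, so the original board
# is left untouched -- a requirement for the minimax search in JeuxIA, which
# must explore candidate moves without mutating the real game state.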
newState[a][b] = -1 else : newState[a][b] = 1 return newState def Utility(state,J): somme = 0 if TerminalTest(state) == True: if J == True : return -float("inf") else : return float("inf") else : for i in range(1,14): for j in range(1,14): if(j>= 3): if (state[i][j]+state[i][j-1]+state[i][j-2] == -3) and (state[i][j-3] == 0 or state[i][j+1] == 0): somme += 500 elif(i >= 3): if(state[i][j]+state[i-1][j-1]+state[i-2][j-2] == -3) and (state[i-3][j-3] == 0 or state[i+1][j+1] == 0): somme += 1000 elif(i<9): if(state[i][j]+state[i+1][j-1]+state[i+2][j-2] == -3)and (state[i+3][j-3] == 0 or state[i-1][j+1] == 0): somme += 1000 if(j< 9): if(state[i][j]+state[i][j+1]+state[i][j+2] == -3)and (state[i][j+3] == 0 or state[i][j-1] == 0): somme += 500 elif(i >= 3): if(state[i][j]+state[i-1][j+1]+state[i-2][j+2] == -3)and (state[i-3][j+3] == 0 or state[i+1][j-1] == 0): somme += 1000 elif(i<9): if(state[i][j]+state[i+1][j+1]+state[i+2][j+2] == -3)and (state[i+3][j+3] == 0 or state[i-1][j-1] == 0): somme += 1000 if(i >= 3): if(state[i][j]+state[i-1][j]+state[i-2][j] == -3)and (state[i-3][j] == 0 or state[i+1][j] == 0): somme += 500 if(i<9): if(state[i][j]+state[i+1][j]+state[i+2][j] == -3)and (state[i+3][j] == 0 or state[i-1][j] == 0): somme += 500 for i in range(1,14): for j in range(1,14): if(j>= 3): if (state[i][j]+state[i][j-1]+state[i][j-2] == -3) and (state[i][j-3] == 0 and state[i][j+1] == 0): somme += 50000 elif(i >= 3): if(state[i][j]+state[i-1][j-1]+state[i-2][j-2] == -3) and (state[i-3][j-3] == 0 and state[i+1][j+1] == 0): somme += 100000 elif(i<9): if(state[i][j]+state[i+1][j-1]+state[i+2][j-2] == -3)and (state[i+3][j-3] == 0 and state[i-1][j+1] == 0): somme += 100000 if(j< 9): if(state[i][j]+state[i][j+1]+state[i][j+2] == -3)and (state[i][j+3] == 0 and state[i][j-1] == 0): somme += 50000 elif(i >= 3): if(state[i][j]+state[i-1][j+1]+state[i-2][j+2] == -3)and (state[i-3][j+3] == 0 and state[i+1][j-1] == 0): somme += 100000 elif(i<9): 
if(state[i][j]+state[i+1][j+1]+state[i+2][j+2] == -3)and (state[i+3][j+3] == 0 and state[i-1][j-1] == 0): somme += 100000 if(i >= 3): if(state[i][j]+state[i-1][j]+state[i-2][j] == -3)and (state[i-3][j] == 0 and state[i+1][j] == 0): somme += 50000 if(i<9): if(state[i][j]+state[i+1][j]+state[i+2][j] == -3)and (state[i+3][j] == 0 and state[i-1][j] == 0): somme += 50000 for i in range(1,14): for j in range(1,14): if(j>= 3): if (state[i][j]+state[i][j-1]+plateau[i][j-2] == 3) and (state[i][j-3] == 0 or state[i][j+1] == 0): somme -= 50000 elif(i >= 3): if(state[i][j]+state[i-1][j-1]+state[i-2][j-2] == 3) and (state[i-3][j-3] == 0 or state[i+1][j+1] == 0): somme -= 100000 elif(i<9): if(state[i][j]+state[i+1][j-1]+state[i+2][j-2] == 3)and (state[i+3][j-3] == 0 or state[i-1][j+1] == 0): somme -= 100000 if(j< 9): if(state[i][j]+state[i][j+1]+state[i][j+2] == 3)and (state[i][j+3] == 0 or state[i][j-1] == 0): somme -= 50000 elif(i >= 3): if(state[i][j]+state[i-1][j+1]+state[i-2][j+2] == 3)and (state[i-3][j+3] == 0 or state[i+1][j-1] == 0): somme -= 100000 elif(i<9): if(state[i][j]+state[i+1][j+1]+state[i+2][j+2] == 3)and (state[i+3][j+3] == 0 or state[i-1][j-1] == 0): somme -= 100000 if(i >= 3): if(state[i][j]+state[i-1][j]+state[i-2][j] == 3)and (state[i-3][j] == 0 or state[i+1][j] == 0): somme -= 50000 if(i<9): if(state[i][j]+state[i+1][j]+state[i+2][j] == 3)and (state[i+3][j] == 0 or state[i-1][j] == 0): somme -= 50000 for i in range(1,14): for j in range(1,14): if(j>= 3): if (state[i][j]+state[i][j-1]+plateau[i][j-2] == 3) and (state[i][j-3] == 0 and state[i][j+1] == 0): somme -= 500000 elif(i >= 3): if(state[i][j]+state[i-1][j-1]+state[i-2][j-2] == 3) and (state[i-3][j-3] == 0 and state[i+1][j+1] == 0): somme -= 1000000 elif(i<9): if(state[i][j]+state[i+1][j-1]+state[i+2][j-2] == 3)and (state[i+3][j-3] == 0 and state[i-1][j+1] == 0): somme -= 1000000 if(j< 9): if(state[i][j]+state[i][j+1]+state[i][j+2] == 3)and (state[i][j+3] == 0 and state[i][j-1] == 0): somme -= 500000 
elif(i >= 3): if(state[i][j]+state[i-1][j+1]+state[i-2][j+2] == 3)and (state[i-3][j+3] == 0 and state[i+1][j-1] == 0): somme -= 1000000 elif(i<9): if(state[i][j]+state[i+1][j+1]+state[i+2][j+2] == 3)and (state[i+3][j+3] == 0 and state[i-1][j-1] == 0): somme -= 1000000 if(i >= 3): if(state[i][j]+state[i-1][j]+state[i-2][j] == 3)and (state[i-3][j] == 0 and state[i+1][j] == 0): somme -= 500000 if(i<9): if(state[i][j]+state[i+1][j]+state[i+2][j] == 3)and (state[i+3][j] == 0 and state[i-1][j] == 0): somme -= 500000 for i in range(1,14): for j in range(1,14): if(j>= 2): if (state[i][j]+state[i][j-1] == 2) and (state[i][j-2] == 0 and state[i][j+1] == 0): somme -= 50000 elif(i >= 2): if(state[i][j]+state[i-1][j-1] == 2) and (state[i-2][j-2] == 0 and state[i+1][j+1] == 0): somme -= 100000 elif(i<10): if(state[i][j]+state[i+1][j-1] == 2)and (state[i+2][j-2] == 0 and state[i-1][j+1] == 0): somme -= 100000 if(j< 10): if(state[i][j]+state[i][j+1] == 2)and (state[i][j+2] == 0 and state[i][j-1] == 0): somme -= 50000 elif(i >= 2): if(state[i][j]+state[i-1][j+1] == 2)and (state[i-2][j+2] == 0 and state[i+1][j-1] == 0): somme -= 100000 elif(i<10): if(state[i][j]+state[i+1][j+1] == 2)and (state[i+2][j+2] == 0 and state[i-1][j-1] == 0): somme -= 100000 if(i >= 2): if(state[i][j]+state[i-1][j] == 2)and (state[i-2][j] == 0 and state[i+1][j] == 0): somme -= 50000 if(i<10): if(state[i][j]+state[i+1][j] == 2)and (state[i+2][j] == 0 and state[i-1][j] == 0): somme -= 50000 for i in range(1,14): for j in range(1,14): if(j>= 2): if (state[i][j]+state[i][j-1] == -2) and (state[i][j-2] == 0 and state[i][j+1] == 0): #print('2 cercles en ligne') somme += 5000 elif(i >= 2): if(state[i][j]+state[i-1][j-1] == -2) and (state[i-2][j-2] == 0 and state[i+1][j+1] == 0): #print('2 cercles en diagonales') somme += 10000 elif(i<10): if(state[i][j]+state[i+1][j-1] == -2)and (state[i+2][j-2] == 0 and state[i-1][j+1] == 0): #print('2 cercles en diagonales') somme += 10000 if(j< 10): if(state[i][j]+state[i][j+1] 
== -2)and (state[i][j+2] ==0 and state[i][j-1] == 0): #print('2 cercles en ligne') somme += 5000 elif(i >= 2): if(state[i][j]+state[i-1][j+1] == -2)and (state[i-2][j+2] == 0 and state[i+1][j-1] == 0): #print('2 cercles en diagonales') somme += 10000 elif(i<10): if(state[i][j]+state[i+1][j+1] == -2)and (state[i+2][j+2] == 0 and state[i-1][j-1] == 0): #print('2 cercles en diagonales') somme += 10000 if(i >= 2): if(state[i][j]+state[i-1][j] == -2)and (state[i-2][j] == 0 and state[i+1][j] == 0): #print('2 cercles en colonne') somme += 5000 if(i<10): if(state[i][j]+state[i+1][j] == -2)and (state[i+2][j] == 0 and state[i-1][j] == 0): #print('2 cercles en colonne') somme += 5000 for i in range(1,14): for j in range(1,14): if(j>= 2): if (state[i][j]+state[i][j-1] == -2) and (state[i][j-2] == 0 or state[i][j+1] == 0): #print('2 cercles en ligne') somme += 500 elif(i >= 2): if(state[i][j]+state[i-1][j-1] == -2) and (state[i-2][j-2] == 0 or state[i+1][j+1] == 0): #print('2 cercles en diagonales') somme += 1000 elif(i<10): if(state[i][j]+state[i+1][j-1] == -2)and (state[i+2][j-2] == 0 or state[i-1][j+1] == 0): #print('2 cercles en diagonales') somme += 1000 if(j< 10): if(state[i][j]+state[i][j+1] == -2)and (state[i][j+2] == 0 or state[i][j-1] == 0): #print('2 cercles en ligne') somme += 500 elif(i >= 2): if(state[i][j]+state[i-1][j+1] == -2)and (state[i-2][j+2] == 0 or state[i+1][j-1] == 0): #print('2 cercles en diagonales') somme += 1000 elif(i<10): if(state[i][j]+state[i+1][j+1] == -2)and (state[i+2][j+2] == 0 or state[i-1][j-1] == 0): #print('2 cercles en diagonales') somme += 1000 if(i >= 2): if(state[i][j]+state[i-1][j] == -2)and (state[i-2][j] == 0 or state[i+1][j] == 0): #print('2 cercles en colonne') somme += 500 if(i<10): if(state[i][j]+state[i+1][j] == -2)and (state[i+2][j] == 0 or state[i-1][j] == 0): #print('2 cercles en colonne') somme += 500 for i in range(1,14): for j in range(1,14): if(j>= 2): if (state[i][j]+state[i][j-1] == 2) and (state[i][j-2] == 0 or 
state[i][j+1] == 0): #print('2 cercles en ligne') somme -= 500 elif(i >= 2): if(state[i][j]+state[i-1][j-1] == 2) and (state[i-2][j-2] == 0 or state[i+1][j+1] == 0): #print('2 cercles en diagonales') somme -= 1000 elif(i<10): if(state[i][j]+state[i+1][j-1] == 2)and (state[i+2][j-2] == 0 or state[i-1][j+1] == 0): #print('2 cercles en diagonales') somme -= 1000 if(j< 10): if(state[i][j]+state[i][j+1] == 2)and (state[i][j+2] == 0 or state[i][j-1] == 0): #print('2 cercles en ligne') somme -= 500 elif(i >= 2): if(state[i][j]+state[i-1][j+1] == 2)and (state[i-2][j+2] == 0 or state[i+1][j-1] == 0): #print('2 cercles en diagonales') somme -= 1000 elif(i<10): if(state[i][j]+state[i+1][j+1] == 2)and (state[i+2][j+2] == 0 or state[i-1][j-1] == 0): #print('2 cercles en diagonales') somme -= 1000 if(i >= 2): if(state[i][j]+state[i-1][j] == 2)and (state[i-2][j] == 0 or state[i+1][j] == 0): #print('2 cercles en colonne') somme -= 500 if(i<10): if(state[i][j]+state[i+1][j] == 2)and (state[i+2][j] == 0 or state[i-1][j] == 0): #print('2 cercles en colonne') somme -= 500 return somme AfficherPlateau([[ 0 for x in range (15)] for y in range(15)]) rejouer = 'y' while(rejouer == 'y' or rejouer == 'Y'): print("\n\n***** Bonjour et bienvenue dans le jeux Gomoko ***") mat = [[ 0 for x in range (15)] for y in range(15)] AfficherPlateau(mat) Joueur=0 niveau=0 while(Joueur!=1 and Joueur!=2): saisieValide=False while(saisieValide==False): print("\nVeuillez choisir un joueur\n") print("\t1 - Joueur 1\n\t2 - Joueur 2") Joueur=input("Votre choix : ") try: Joueur=int(Joueur) saisieValide=(Joueur==1) or (Joueur==2) if(not saisieValide): print("Saisie invalide, veuillez réessayer") except: print("Saisie invalide, veuillez réessayer") if Joueur==1: col,ligne=Saisie(mat,Joueur) mat[ligne][col]= 1 AfficherPlateau(mat) rejouer=MinMax(mat,2) rejouer=input() ``` <h1> <em>Projet IA - Gomoku<em> </h1> <ul> <li style="color: green;">Fonction 1 : Méthode initialisation (creation plateau de jeu) V</li> <li 
style="color=green;">Fonction 2: Attribution Joueur V</li> <li style="color=green;">Fonction 3: Affichage du plateau + avec coordonées V</li> <li style="color=green;"> Fonction 4: Saisie sécurisée de placement + affichage placement chosii par l'IA (en cours)</li> <li style="color=green;">Fonction 5: menu V</li> <li style="color=green;">Fonction 5,5: Saisie 1ers pions (à compléter)</li> <li style="color=green;">Fonction 6: timer V</li> <li style="color=green;">Fonction 7: Fonction minimax</li> <li style="color=green;">Fonction 8: Terminal Test return 0 (égalité) 1(on a gagné) 2 (aversaire a gagné) -1 (partie non finie) V</li> <li style="color=green;">Fonction 9: Actions (récapitule toutes les acttions) (en cours)</li> <li style="color=green;">Fonction 10: utility (détermine quelle est la meilleure action)</li> <li style="color=green;">Fonction 11:décision,/result (applique l'action) V</li> <li style="color=green;">class Gomoko) V</li> </ul> ``` import numpy as np import time #FONCTION 1: INITIALISATION def init_Game(): mat = [[0]*15 for _ in range(15)] return mat #Fonction 3: AFFICHAGE PLATEAU def Affichage(plateau): print(" A B C D E F G H I J K L M N O") for i in range(15): for j in range(15): if plateau[i][j]==0: if j==0: if i<10: print(i, " . ",end="") else: print(i,". ",end="") else: print(". 
", end="") elif plateau[i][j]==1: if j==0: if i<10: print(i, " O ",end="") else: print(i,"O ",end="") else: print("O ", end="") else: if j==0: if i<10: print(i, " X ",end="") else: print(i,"X ",end="") else: print("X ", end="") print("") #TAB_ACTIONS_POSSIBLES 9 def Actions(plateau): tab_actions=[] nbr_cases=0 for i in range(15): for j in range(15): if plateau[i][j]==0: tab_actions.append([i,j]) nbr_cases+=1 return tab_actions #Fonction 5 : MENU def Menu(): print("Bonjour et bienvenue dans le Gomoko") saisieValide=False while(saisieValide==False): print("Veuillez choisir un joueur") print("1 - Joueur 1\n2 - Joueur 2") joueur=input("Votre choix : ") try: joueur=int(joueur) saisieValide=(joueur==1) or (joueur==2) if(not saisieValide): print("Saisie invalide, veuillez réessayer") except: print("Saisie invalide, veuillez réessayer") print() print("C'est bon") estTermine=False gomoko=Gomoko(joueur) t=-1 gomoko.Initialisation() #A modifier lorsque la méthode sera crée while(not estTermine): gomoko.Affichage() # A modifier si besoin est gomoko.Saisie() gomoko.MinMax() # Ajouter d'autres fonctions si y a lieu t=gomoko.TerminalTest() if(t!=-1): estTermine=True if(t==0): print("Et nous avons une égalité") elif(t==1): print("Et notre IA vous a battu.e") elif(t==2): print("Et c'est gagné") print("Merci d'avoir joué") #Fonction 6 : timer #L'idée, c'est qu'il soit appliqué à une autre méthode, via un @timer juste avant def timer(fonction): """ Méthode qui chronomètre le temps d'exécution d'une méthode Sera utilisé pour mesurer le temps de traitement du MinMax Parameters ---------- fonction : Fonction sur laquelle appliquer le timer Returns ------- None. 
""" def inner(*args,**kwargs): #Coeur de la méthode de mesure du temps t=time.time()#Temps en entrée f=fonction(*args,**kwargs) print(time.time()-t)#Affichage du temps finel (en secondes) return f return inner #Fonction 7 : MinMax #Celle-ci est divisée en 3 fonctions différentes, pour simplifier le traitement @timer def MaxValue(state): if TerminalTest(state): return Utility(state) v=-float("inf") for a in Actions(state): v=max(v,MinValue(application_result(state,a))) return v def MinValue(state): if TerminalTest(state): return Utility(state) v=float("inf") for a in Actions(state): v=min(v,MaxValue(application_result(state,a))) return v def MinMax(): #Réaliser un classement des différentes valeurs trouvées a=Actions(self.plateau) b=(MinValue(Result(action,a)) for action in a) dico=dict(a,b) #Attention, cette ligne pourrait crash dico = sorted(dico.items(), key=lambda x: x[1], reverse=True) actionValide=dico.keys()[0] application_result(self.plateau,actionValide[0],actionValide[1],self.joueur) #Fonction 8 : determination du gagnant COL_COUNT=15 ROW_COUNT=15 def gagnant(plateau, pion): for c in range(COL_COUNT - 4): for r in range(ROW_COUNT): if plateau[r][c] == pion and plateau[r][c+1] == pion and plateau[r][c+2] == pion and plateau[r][c+3] == pion and plateau[r][c+4] == pion: return True for c in range(COL_COUNT): for r in range(ROW_COUNT-4): if plateau[r][c] == pion and plateau[r+1][c] == pion and plateau[r+2][c] == pion and plateau[r+3][c] == pion and plateau[r+4][c] == pion: return True for c in range(COL_COUNT-4): for r in range(4,ROW_COUNT): if plateau[r][c] == pion and plateau[r-1][c+1] == pion and plateau[r-2][c+2] == pion and plateau[r-3][c+3] == pion and plateau[r-4][c+4] == pion: return True for c in range(COL_COUNT-4): for r in range(ROW_COUNT-4): if plateau[r][c] == pion and plateau[r+1][c+1] == pion and plateau[r+2][c+2] == pion and plateau[r+3][c+3] == pion and plateau[r+4][c+4] == pion: return True #FONCTION 11 RESULT/Application def 
application_result(plateau,i,j,joueur): plateau[i][j]=joueur return plateau class Gomoko () : def __init__ ( self): self.plateau = [[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0], [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0], [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0], [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0], [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0], [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0], [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0], [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0], [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0], [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0], [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0], [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0], [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0], [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0], [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]] self.joueur def __str__ (self): for i in range( len(self.plateau) ) : for j in range (len(self.plateau) ) : print(self.plateau[i][j], end = "") print( ) def Saisie (self) : lettre = {"A" : 1 , "B" : 2 , "C" : 3 , "D" : 4, "E" : 5, "F": 6 , "G" : 7 , "H" :8 , "I" :9 , "J" : 10 , "K" :11 , "L" :12 , "M" :13 , "N":14 , "O": 15 } secu3 = False while secu3 == False : secu1 = False while secu1 == False : saisiex =input("Saisir la ligne ou vous voulez jouer : (chiffre) ") try: saisiex = int(saisiex) if saisiex > 0 and saisiex <= 15 : secu1 = True if(not secu1): print("Saisie invalide, veuillez réessayer") except: print("Saisie invalide, veuillez réessayer") secu2 = False while secu2 == False : saisiey =input("Saisir la colonnes ou vous voulez jouer : (lettre)") try: if saisiey in lettre : secu2 = True if(not secu2): print("Saisie invalide, veuillez réessayer") except: print("Saisie invalide, veuillez réessayer") try: if self.plateau[saisiex-1][lettre[saisiey]-1] == 1 or self.plateau[saisiex-1][lettre[saisiey]-1] == 2 : print( "pas bien ") secu3 = False else : secu3 = True if(not secu3): print("Position non valide, veuillez réessayer") except: print("Saisie invalide, veuillez réessayer") def Start(joueur): if(joueur==1): #Placement du joueur 1 au centre du plateau self.plateau[8][8]=1 #Placement selon l'IA de notre point self.MinMax() # A modifier, si elle 
ne fait pas le placement #Saisie des valeurs lettre = {"A" : 1 , "B" : 2 , "C" : 3 , "D" : 4, "E" : 5, "L" :12 , "M" :13 , "N":14 , "O": 15 } secu3 = False while secu3 == False : secu1 = False while secu1 == False : saisiex =input("Saisir la ligne ou vous voulez jouer : (chiffre) ") try: saisiex = int(saisiex) if saisiex > 0 and saisiex <= 15 and (saisiex<5 or saisiex>11) : secu1 = True if(not secu1): print("Saisie invalide, veuillez réessayer") except: print("Saisie invalide, veuillez réessayer") secu2 = False while secu2 == False : saisiey =input("Saisir la colonnes ou vous voulez jouer : (lettre)") try: if saisiey in lettre: secu2 = True if(not secu2): print("Saisie invalide, veuillez réessayer") except: print("Saisie invalide, veuillez réessayer") try: if self.plateau[saisiex-1][lettre[saisiey]-1] == 1 or self.plateau[saisiex-1][lettre[saisiey]-1] == 2 : print( "pas bien ") secu3 = False else : secu3 = True if(not secu3): print("Position non valide, veuillez réessayer") except: print("Saisie invalide, veuillez réessayer") else: self.plateau[8][8]=2 Saisie() # A voir en fonction de ce qui est souhaité #Réaliser un classement des différentes valeurs trouvées a=Actions(self.plateau) cpt=0 for i in range(len(a)): if (a[i][0]>5 or a[i][0]<11) or (a[i][1]>5 or a[i][1]<11): a.pop(i) i-=1 b=(MinValue(Result(action,a)) for action in a) dico=dict(a,b) #Attention, cette ligne pourrait crash dico = sorted(dico.items(), key=lambda x: x[1], reverse=True) actionValide=dico.keys()[0] application_result(self.plateau,actionValide[0],actionValide[1],self.joueur) def MaxValue(self, state): if self.TerminalTest(state): return self.Utility(state) v=-float("inf") for a in self.Actions(state): v=max(v,self.MinValue(self.application_result(state,a[0],a[1],self.joueur))) return v def MinValue(self, state): if self.TerminalTest(state): return self.Utility(state) v=float("inf") for a in self.Actions(state): self.plateau , action[0] , action[1] 
v=min(v,self.MaxValue(self.application_result(state,a[0],a[1],self.joueur))) return v ``` ### VERSION 1 ``` import numpy as np import time def Menu(): #%% Accueil print("\n\nBonjour et bienvenue dans le Gomoko") saisieValide=False joueur = 0 while(saisieValide==False): print("\nVeuillez choisir un joueur\n") print("\t1 - Joueur 1\n\t2 - Joueur 2") joueur=input("Votre choix : ") try: joueur=int(joueur) saisieValide=(joueur==1) or (joueur==2) if(not saisieValide): print("Saisie invalide, veuillez réessayer") except: print("Saisie invalide, veuillez réessayer") print() print("C'est bon") #%% Debut jeux estTermine=False gomoko=Gomoko(joueur) t=-1 gomoko.Start(joueur) #A modifier lorsque la méthode sera crée while(not estTermine): gomoko.Affichage() # A modifier si besoin est gomoko.Saisie() gomoko.MinMax() # Ajouter d'autres fonctions si y a lieu t=gomoko.TerminalTest() if(t!=-1): estTermine=True if(t==0): print("Et nous avons une égalité") elif(t==1): print("Et notre IA vous a battu.e") elif(t==2): print("Et c'est gagné") print("Merci d'avoir joué") class Gomoko () : def __init__ ( self, joueur ): self.plateau = [[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0], [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0], [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0], [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0], [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0], [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0], [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0], [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0], [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0], [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0], [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0], [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0], [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0], [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0], [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]] self.joueur = joueur def __str__ (self): for i in range( len(self.plateau) ) : for j in range (len(self.plateau) ) : print(self.plateau[i][j], end = "") print( ) def application_result(self, plateau,i,j,joueur): plateau[i][j]=joueur return plateau def MaxValue(self, state): if self.TerminalTest(state): return self.Utility(state) v=-float("inf") for a in self.Actions(state): 
v=max(v,self.MinValue(self.application_result(state,a[0],a[1],self.joueur))) return v def MinValue(self, state): if self.TerminalTest(state): return self.Utility(state) v=float("inf") for a in self.Actions(state): v=min(v,self.MaxValue(self.application_result(state,a[0],a[1],self.joueur))) return v def MinMax( self): #Réaliser un classement des différentes valeurs trouvées a=self.Actions(self.plateau) b=(self.MinValue(self.application_result(self.plateau , action[0] , action[1] , self.joueur)) for action in a) print ( a) a2=[] for b in a: print(b) a2.append(str(b[0])+" "+str(b[1])) zip_iterator = zip(a2 ,b ) dico=dict(zip_iterator) dico = dict(sorted(dico.items(), key=lambda x: x[1], reverse=True)) print(dico.keys()) actionValide=dico.keys()[0].split(sep=" ") actionValide[0]=int(actionValide[0]) actionValide[1]=int(actionValide[1]) self.application_result(self.plateau,actionValide[0],actionValide[1],self.joueur) def Actions(self, plateau): tab_actions=[] nbr_cases=0 for i in range(15): for j in range(15): if plateau[i][j]==0: tab_actions.append([i,j]) nbr_cases+=1 return tab_actions def timer(self, fonction): """ Méthode qui chronomètre le temps d'exécution d'une méthode Sera utilisé pour mesurer le temps de traitement du MinMax Parameters ---------- fonction : Fonction sur laquelle appliquer le timer Returns ------- None. 
""" def inner(*args,**kwargs): #Coeur de la méthode de mesure du temps t=time.time()#Temps en entrée f=fonction(*args,**kwargs) print(time.time()-t)#Affichage du temps finel (en secondes) return f return inner def TerminalTest(self,pion): COL_COUNT=15 ROW_COUNT=15 for c in range(COL_COUNT - 4): for r in range(ROW_COUNT): if self.plateau[r][c] == pion and self.plateau[r][c+1] == pion and self.plateau[r][c+2] == pion and self.plateau[r][c+3] == pion and self.plateau[r][c+4] == pion: return True for c in range(COL_COUNT): for r in range(ROW_COUNT-4): if self.plateau[r][c] == pion and self.plateau[r+1][c] == pion and self.plateau[r+2][c] == pion and self.plateau[r+3][c] == pion and self.plateau[r+4][c] == pion: return True for c in range(COL_COUNT-4): for r in range(4,ROW_COUNT): if self.plateau[r][c] == pion and self.plateau[r-1][c+1] == pion and self.plateau[r-2][c+2] == pion and self.plateau[r-3][c+3] == pion and self.plateau[r-4][c+4] == pion: return True for c in range(COL_COUNT-4): for r in range(ROW_COUNT-4): if self.plateau[r][c] == pion and self.plateau[r+1][c+1] == pion and self.plateau[r+2][c+2] == pion and self.plateau[r+3][c+3] == pion and self.plateau[r+4][c+4] == pion: return True def Saisie (self) : lettre = {"A" : 1 , "B" : 2 , "C" : 3 , "D" : 4, "E" : 5, "F": 6 , "G" : 7 , "H" :8 , "I" :9 , "J" : 10 , "K" :11 , "L" :12 , "M" :13 , "N":14 , "O": 15 } secu3 = False while secu3 == False : secu1 = False while secu1 == False : saisiex =input("Saisir la ligne ou vous voulez jouer : (chiffre) ") try: saisiex = int(saisiex) if saisiex > 0 and saisiex <= 15 : secu1 = True if(not secu1): print("Saisie invalide, veuillez réessayer") except: print("Saisie invalide, veuillez réessayer") secu2 = False while secu2 == False : saisiey =input("Saisir la colonnes ou vous voulez jouer : (lettre)") try: if saisiey in lettre : secu2 = True if(not secu2): print("Saisie invalide, veuillez réessayer") except: print("Saisie invalide, veuillez réessayer") try: if 
self.plateau[saisiex-1][lettre[saisiey]-1] == 1 or self.plateau[saisiex-1][lettre[saisiey]-1] == 2 : print( "pas bien ") secu3 = False else : secu3 = True if(not secu3): print("Position non valide, veuillez réessayer") except: print("Saisie invalide, veuillez réessayer") #Fonction 5,5 : saisie des premiers points #Ira dans la classe Gomoko def Start(self, joueur): if(joueur==1): #Placement du joueur 1 au centre du plateau self.plateau[8][8]=1 #Placement selon l'IA de notre point self.MinMax() # A modifier, si elle ne fait pas le placement #Saisie des valeurs lettre = {"A" : 1 , "B" : 2 , "C" : 3 , "D" : 4, "E" : 5, "L" :12 , "M" :13 , "N":14 , "O": 15 } secu3 = False while secu3 == False : secu1 = False while secu1 == False : saisiex =input("Saisir la ligne ou vous voulez jouer : (chiffre) ") try: saisiex = int(saisiex) if saisiex > 0 and saisiex <= 15 and (saisiex<5 or saisiex>11) : secu1 = True if(not secu1): print("Saisie invalide, veuillez réessayer") except: print("Saisie invalide, veuillez réessayer") secu2 = False while secu2 == False : saisiey =input("Saisir la colonnes ou vous voulez jouer : (lettre)") try: if saisiey in lettre: secu2 = True if(not secu2): print("Saisie invalide, veuillez réessayer") except: print("Saisie invalide, veuillez réessayer") try: if self.plateau[saisiex-1][lettre[saisiey]-1] == 1 or self.plateau[saisiex-1][lettre[saisiey]-1] == 2 : print( "pas bien ") secu3 = False else : secu3 = True if(not secu3): print("Position non valide, veuillez réessayer") except: print("Saisie invalide, veuillez réessayer") else: self.plateau[8][8]=2 self.Saisie() # A voir en fonction de ce qui est souhaité #Réaliser un classement des différentes valeurs trouvées a=self.Actions(self.plateau) cpt=0 for i in range(len(a)): if (a[i][0]>5 or a[i][0]<11) or (a[i][1]>5 or a[i][1]<11): a.pop(i) i-=1 b=(self.MinValue(self.Result(action,a)) for action in a) dico=dict(a,b) #Attention, cette ligne pourrait crash dico = sorted(dico.items(), key=lambda x: x[1], 
            reverse=True)
        # note: dict_keys is not indexable in Python 3; the later versions use list(dico.keys())[0]
        actionValide = dico.keys()[0]
        self.application_result(self.plateau, actionValide[0], actionValide[1], self.joueur)


Menu()
```

### **TWO-PLAYER VERSION (no AI) --> for testing**

```
class Gomoko():

    def __init__(self):  # ok
        self.plateau = [[0 for x in range(15)] for y in range(15)]
        self.nbPions = 0
        self.lettres = {"A": 1, "B": 2, "C": 3, "D": 4, "E": 5, "F": 6, "G": 7, "H": 8,
                        "I": 9, "J": 10, "K": 11, "L": 12, "M": 13, "N": 14, "O": 15}

    def __str__(self):  # ok
        rep = ""
        rep += "\n\t "
        rep += " ".join(self.lettres)
        if self.nbPions % 2 == 0:
            rep += "\t\t\t Joueur 1 : O (noir) \n"
        elif self.nbPions % 2 != 0:
            rep += "\t\t\t Joueur 2 : X (blanc) \n"
        else:
            rep += "\n"
        for i in range(len(self.lettres)):
            for j in range(len(self.lettres)):
                if self.plateau[i][j] == 0:
                    if j == 0:
                        if i < 9:
                            rep += "\t " + str(i+1) + " . "
                        else:
                            rep += "\t" + str(i+1) + " . "
                    else:
                        rep += ". "
                elif self.plateau[i][j] == 1:
                    if j == 0:
                        if i < 9:
                            rep += "\t " + str(i+1) + " O "
                        else:
                            rep += "\t" + str(i+1) + " O "
                    else:
                        rep += "O "
                else:
                    if j == 0:
                        if i < 9:
                            rep += "\t " + str(i+1) + " X "
                        else:
                            rep += "\t" + str(i+1) + " X "
                    else:
                        rep += "X "
            rep += "\n"
        return rep

    def Saisie(self):  # ok
        secu3 = False
        while secu3 == False:
            secu1 = False
            while secu1 == False:
                saisiex = input("Saisir la ligne ou vous voulez jouer : (chiffre) ")
                try:
                    saisiex = int(saisiex)
                    if saisiex > 0 and saisiex <= len(self.lettres):
                        secu1 = True
                    if (not secu1):
                        print("Saisie invalide, veuillez réessayer")
                except:
                    print("Saisie invalide, veuillez réessayer")
            secu2 = False
            while secu2 == False:
                saisiey = input("Saisir la colonnes ou vous voulez jouer : (lettres)")
                try:
                    saisiey = "".join(saisiey.split()).upper()
                    if saisiey in self.lettres:
                        secu2 = True
                    if (not secu2):
                        print("Saisie invalide, veuillez réessayer")
                except:
                    print("Saisie invalide, veuillez réessayer")
            try:
                if self.plateau[saisiex-1][self.lettres[saisiey]-1] == 1 or self.plateau[saisiex-1][self.lettres[saisiey]-1] == 2:
                    print("pas bien")
                    secu3 = False
                else:
                    secu3 = True
                if (not secu3):
                    print("Position non valide, veuillez réessayer")
            except:
                print("Saisie invalide, veuillez réessayer")
        return (saisiex-1, self.lettres[saisiey]-1)

    def Appliquer(self, i, j):
        if self.nbPions % 2 == 0:
            joueur = 0
        else:
            joueur = 1
        self.plateau[i][j] = joueur + 1
        self.nbPions += 1
        print("nb pions ", self.nbPions)

    def Jouer(self):
        boucle = True
        while (boucle):
            # note: TerminalTest is not defined in this test version; see the win check in the versions below
            print("terminal ", self.TerminalTest())
            if self.nbPions % 2 == 0:
                print(gomoko)
                x, y = self.Saisie()
                self.Appliquer(x, y)
            elif self.nbPions % 2 != 0:
                print(gomoko)
                x, y = self.Saisie()
                self.Appliquer(x, y)
            else:
                print("Erreur")
            if self.TerminalTest() or self.nbPions >= 120:
                boucle = False
        print("fin de la partie")


gomoko = Gomoko()
gomoko.Jouer()
```

## version 2 (by romu)

```
def Menu():
    print("\n\n***** Bonjour et bienvenue dans le jeux Gomoko *****")
    saisieValide = False
    joueur = 0
    while (saisieValide == False):
        print("\nVeuillez choisir un joueur\n")
        print("\t1 - Joueur 1\n\t2 - Joueur 2")
        joueur = input("Votre choix : ")
        try:
            joueur = int(joueur)
            saisieValide = (joueur == 1) or (joueur == 2)
            if (not saisieValide):
                print("Saisie invalide, veuillez réessayer")
        except:
            print("Saisie invalide, veuillez réessayer")
    print("\nC'est bon")
    gomoko = Gomoko(joueur - 1)
    gomoko.Jouer()


class Gomoko():

    def __init__(self, joueur):  # ok
        self.plateau = [[0 for x in range(15)] for y in range(15)]
        self.nbPions = 0
        self.joueur = joueur
        self.lettres = {"A": 1, "B": 2, "C": 3, "D": 4, "E": 5, "F": 6, "G": 7, "H": 8,
                        "I": 9, "J": 10, "K": 11, "L": 12, "M": 13, "N": 14, "O": 15}

    def __str__(self):  # ok
        rep = "\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t Pions\t : " + str(self.nbPions)
        rep += "\n\t "
        rep += " ".join(self.lettres)
        if self.joueur == 0:
            rep += "\t\t\t joueur 1 : O (noir) \n"
        elif self.joueur == 1:
            rep += "\t\t\t joueur 2 : X (blanc) \n"
        else:
            rep += "\n"
        for i in range(len(self.lettres)):
            for j in range(len(self.lettres)):
                if self.plateau[i][j] == 0:
                    if j == 0:
                        if i < 9:
                            rep += "\t " + str(i+1) + " . "
                        else:
                            rep += "\t" + str(i+1) + " . "
                    else:
                        rep += ". "
                elif self.plateau[i][j] == 1:
                    if j == 0:
                        if i < 9:
                            rep += "\t " + str(i+1) + " O "
                        else:
                            rep += "\t" + str(i+1) + " O "
                    else:
                        rep += "O "
                else:
                    if j == 0:
                        if i < 9:
                            rep += "\t " + str(i+1) + " X "
                        else:
                            rep += "\t" + str(i+1) + " X "
                    else:
                        rep += "X "
            rep += "\n"
        return rep

    def MaxValue(self, state):
        if self.TerminalTest(state):
            return self.Utility(state)
        v = -float("inf")
        for a in self.Actions(state):
            v = max(v, self.MinValue(self.application_result(state.copy(), a[0], a[1], self.joueur)))
        return v

    def application_result(self, state, i, j, joueur):
        state[i][j] = joueur + 1
        return state

    def MinValue(self, state):
        if self.TerminalTest(state):
            return self.Utility(state)
        v = float("inf")
        for a in self.Actions(state):
            v = min(v, self.MaxValue(self.application_result(state.copy(), a[0], a[1], self.joueur)))
        return v

    # @timer
    def MinMax(self):
        # Rank the values found for each candidate action
        a = self.Actions(self.plateau.copy())
        print("actions ", a)
        print("taille actions ", len(a))
        t = self.plateau.copy()
        b = [self.MinValue(self.application_result(t, action[0], action[1], self.joueur)) for action in a]
        print("b = ", b)
        a2 = []
        for t in a:
            c = str(t[0]) + " " + str(t[1])
            a2.append(c)
        zip_iterator = zip(a2, b)
        dico = dict(zip_iterator)
        dico = dict(sorted(dico.items(), key=lambda x: x[1], reverse=True))
        print(dico)
        actionValide = list(dico.keys())[0].split(sep=" ")
        actionValide[0] = int(actionValide[0])
        actionValide[1] = int(actionValide[1])
        self.application_result(self.plateau, actionValide[0], actionValide[1], self.joueur)

    def Actions(self, state):
        tab_actions = []
        [tab_actions.append([i, j]) if state[i][j] == 0 else ""
         for i in range(len(self.lettres)) for j in range(len(self.lettres))]
        return tab_actions

    def Utility(self, state):
        if (self.TerminalTest(state)):
            return -1
        t = self.joueur
        if (t == 1):
            t = 2
        else:
            t = 1
        if (self.TerminalTest(state)):
            return 1
        return 0

    def TerminalTest(self, state):
        # horizontal five in a row
        for c in range(len(self.lettres) - 4):
            for r in range(len(self.lettres)):
                if state[r][c] == state[r][c+1] == state[r][c+2] == state[r][c+3] == state[r][c+4] and state[r][c] != 0:
                    return True
        # vertical five in a row
        for c in range(len(self.lettres)):
            for r in range(len(self.lettres) - 4):
                if state[r][c] == state[r+1][c] == state[r+2][c] == state[r+3][c] == state[r+4][c] and state[r][c] != 0:
                    return True
        # ascending diagonal
        for c in range(len(self.lettres) - 4):
            for r in range(4, len(self.lettres)):
                if state[r][c] == state[r-1][c+1] == state[r-2][c+2] == state[r-3][c+3] == state[r-4][c+4] and state[r][c] != 0:
                    return True
        # descending diagonal
        for c in range(len(self.lettres) - 4):
            for r in range(len(self.lettres) - 4):
                if state[r][c] == state[r+1][c+1] == state[r+2][c+2] == state[r+3][c+3] == state[r+4][c+4] and state[r][c] != 0:
                    return True
        return False

    def Saisie(self):  # ok
        secu = False
        while secu == False:
            saisie = input("\t Coordonnées (D;7) \t : ")
            try:
                saisieTab = saisie.split(";")
                if len(saisieTab) == 2 and int(saisieTab[1]) > 0 and int(saisieTab[1]) <= len(self.lettres) \
                        and "".join(saisieTab[0].split()).upper() in self.lettres:
                    secu = True
                    saisiex = int(saisieTab[1])
                    saisiey = "".join(saisieTab[0].split()).upper()
                    if self.plateau[saisiex-1][self.lettres[saisiey]-1] == 1 or self.plateau[saisiex-1][self.lettres[saisiey]-1] == 2:
                        secu = False
            except:
                print("Saisie invalide, veuillez réessayer")
        return (saisiex-1, self.lettres[saisiey]-1)

    def Appliquer(self, i, j):
        self.plateau[i][j] = self.joueur + 1
        self.nbPions += 1
        if self.joueur == 1:
            self.joueur = 0
        elif self.joueur == 0:
            self.joueur = 1

    """ Long Pro opening (draft, disabled)
    def JouerLongPro(self):
        ##
        ## If Player 1 starts
        ##
        if (self.joueur == 0):
            # Player 1 plays in the centre of the board ("human")
            self.Appliquer(8, 8)
            # Our AI places its stone ("IA")
            self.MinMax()  # to adjust if it does not place the stone
            # Read a move inside the 7x7 square ("human")
            secu = False
            while secu == False:
                print(" MERCI DE SAISIR DES VALEURS ENTRE 5 & 10 ET E & K")
                print(self)
                x, y = self.Saisie()
                if (x > 4 and x < 9 and y > 4 and y < 9):
                    secu = True
            self.Appliquer(x, y)
            # (normal game continues)
        else:
            ##
            ## If Player 2 starts
            ##
            # Player 2 plays in the centre of the board (IA)
            self.Appliquer(8, 8)
            print(self)
            # (human)
            x, y = self.Saisie()
            self.Appliquer(x, y)
            # Rank the values found for each candidate action (IA)
            a = self.Actions(self.plateau)
            cpt = 0
            for i in range(len(a)):
                if (a[i][0] > 5 or a[i][0] < 11) or (a[i][1] > 5 or a[i][1] < 11):
                    a.pop(i)
                    i -= 1
            b = (MinValue(Result(action, a)) for action in a)
            dico = dict(a, b)  # warning: this line could crash
            dico = sorted(dico.items(), key=lambda x: x[1], reverse=True)
            actionValide = dico.keys()[0]
            application_result(self.plateau, actionValide[0], actionValide[1], self.joueur)
    """

    def JouerADeux(self):
        boucle = True
        while (boucle):
            if self.joueur == 0:
                print(self)
                x, y = self.Saisie()
                self.Appliquer(x, y)
            elif self.joueur == 1:
                print(self)
                x, y = self.Saisie()
                self.Appliquer(x, y)
            else:
                print("Erreur")
            if self.TerminalTest(self.plateau) or self.nbPions >= 120:
                boucle = False
        print("\n\n************************************************************")
        print("***                  FIN DE LA PARTIE                    ***")
        print("***                                                      ***")
        if self.joueur == 0:
            print("***   Le joueur 1 à gagné la partie! (pion noir)         ***")
        elif self.joueur == 1:
            print("***   Le joueur 2 à gagné la partie! (pion blanc)        ***")
        else:
            print("***                ERREUR GAGNANT                        ***")
        print("***                                                      ***")
        print("***   Vous avez utilisé", self.nbPions, "pions.          ***")
        print("***                                                      ***")
        print("************************************************************")

    def JouerIA(self):
        boucle = True
        while (boucle):
            if self.joueur == 0:
                print(self)
                x, y = self.Saisie()
                self.Appliquer(x, y)
            elif self.joueur == 1:
                print(self)
                print(len(self.Actions(self.plateau)))
                self.MinMax()
            else:
                print("Erreur")
            if self.TerminalTest(self.plateau) or self.nbPions >= 120:
                boucle = True  # warning: should be False, otherwise the loop never ends
        print("fin de la partie")


gomoko = Gomoko(0)
gomoko.JouerADeux()
# Menu()


# -*- coding: utf-8 -*-
"""
Created on Fri May  6 17:18:09 2022

@author: Romuald
"""
import time


def Menu():
    print("\n\n***** Bonjour et bienvenue dans le jeux Gomoko *****")
    saisieValide = False
    joueur = 0
    while (saisieValide == False):
        print("\nVeuillez choisir un joueur\n")
        print("\t1 - Joueur 1\n\t2 - Joueur 2")
        joueur = input("Votre choix : ")
        try:
            joueur = int(joueur)
            saisieValide = (joueur == 1) or (joueur == 2)
            if (not saisieValide):
                print("Saisie invalide, veuillez réessayer")
        except:
            print("Saisie invalide, veuillez réessayer")
    print("\nC'est bon")
    gomoko = Gomoko(joueur - 1)
    gomoko.Jouer()


class Gomoko():

    def __init__(self, joueur):  # ok
        self.plateau = [[0 for x in range(15)] for y in range(15)]
        self.nbPions = 0
        self.joueur = joueur
        self.player_turn = 2
        # board reduced to 6 columns to keep the minimax search tractable
        self.lettres = {"A": 1, "B": 2, "C": 3, "D": 4, "E": 5, "F": 6}
        # , "G": 7, "H": 8, "I": 9, "J": 10, "K": 11
        # , "L": 12, "M": 13, "N": 14, "O": 15}

    def __str__(self):  # ok
        rep = "\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t Pions\t : " + str(self.nbPions)
        rep += "\n\t "
        rep += " ".join(self.lettres)
        if self.joueur == 0:
            rep += "\t\t\t joueur 1 : O (noir) \n"
        elif self.joueur == 1:
            rep += "\t\t\t joueur 2 : X (blanc) \n"
        else:
            rep += "\n"
        for i in range(len(self.lettres)):
            for j in range(len(self.lettres)):
                if self.plateau[i][j] == 0:
                    if j == 0:
                        if i < 9:
                            rep += "\t " + str(i+1) + " . "
                        else:
                            rep += "\t" + str(i+1) + " . "
                    else:
                        rep += ". "
                elif self.plateau[i][j] == 1:
                    if j == 0:
                        if i < 9:
                            rep += "\t " + str(i+1) + " O "
                        else:
                            rep += "\t" + str(i+1) + " O "
                    else:
                        rep += "O "
                else:
                    if j == 0:
                        if i < 9:
                            rep += "\t " + str(i+1) + " X "
                        else:
                            rep += "\t" + str(i+1) + " X "
                    else:
                        rep += "X "
            rep += "\n"
        return rep

    def MaxValue(self, state):
        if self.TerminalTest(state):
            return self.Utility(state)
        v = -float("inf")
        for a in self.Actions(state):
            v = max(v, self.MinValue(self.application_result(state.copy(), a[0], a[1], self.joueur)))
        return v

    def application_result(self, state, i, j, joueur):
        state[i][j] = joueur + 1
        return state

    def MinValue(self, state):
        if self.TerminalTest(state):
            return self.Utility(state)
        v = float("inf")
        for a in self.Actions(state):
            v = min(v, self.MaxValue(self.application_result(state.copy(), a[0], a[1], self.joueur)))
        return v

    # @timer
    def MinMax(self):
        # Rank the values found for each candidate action
        a = self.Actions(self.plateau.copy())
        print("actions ", a)
        print("taille actions ", len(a))
        t = self.plateau.copy()
        b = [self.MinValue(self.application_result(t, action[0], action[1], self.joueur)) for action in a]
        print("b = ", b)
        a2 = []
        for t in a:
            c = str(t[0]) + " " + str(t[1])
            a2.append(c)
        zip_iterator = zip(a2, b)
        dico = dict(zip_iterator)
        dico = dict(sorted(dico.items(), key=lambda x: x[1], reverse=True))
        print(dico)
        actionValide = list(dico.keys())[0].split(sep=" ")
        actionValide[0] = int(actionValide[0])
        actionValide[1] = int(actionValide[1])
        self.application_result(self.plateau, actionValide[0], actionValide[1], self.joueur)

    def Actions(self, state):
        tab_actions = []
        [tab_actions.append([i, j]) if state[i][j] == 0 else ""
         for i in range(len(self.lettres)) for j in range(len(self.lettres))]
        return tab_actions

    def Utility(self, state):
        if (self.TerminalTest(state)):
            return -1
        t = self.joueur
        if (t == 1):
            t = 2
        else:
            t = 1
        if (self.TerminalTest(state)):
            return 1
        return 0

    def is_end(self):  # adapted from an online example
        state = self.plateau
        for c in range(len(self.lettres) - 4):
            for r in range(len(self.lettres)):
                if state[r][c] == state[r][c+1] == state[r][c+2] == state[r][c+3] == state[r][c+4] and state[r][c] != 0:
                    return state[r][c]
        for c in range(len(self.lettres)):
            for r in range(len(self.lettres) - 4):
                if state[r][c] == state[r+1][c] == state[r+2][c] == state[r+3][c] == state[r+4][c] and state[r][c] != 0:
                    return state[r][c]
        for c in range(len(self.lettres) - 4):
            for r in range(4, len(self.lettres)):
                if state[r][c] == state[r-1][c+1] == state[r-2][c+2] == state[r-3][c+3] == state[r-4][c+4] and state[r][c] != 0:
                    return state[r][c]
        for c in range(len(self.lettres) - 4):
            for r in range(len(self.lettres) - 4):
                if state[r][c] == state[r+1][c+1] == state[r+2][c+2] == state[r+3][c+3] == state[r+4][c+4] and state[r][c] != 0:
                    return state[r][c]
        return

    def min(self):
        # Possible values for minv are:
        # -1 - win
        #  0 - a tie
        #  1 - loss
        # We're initially setting it to 2 as worse than the worst case:
        minv = 2
        qx = None
        qy = None
        result = self.is_end()
        if result == 0:
            return (-1, 0, 0)
        elif result == 1:
            return (1, 0, 0)
        elif result == False:
            return (0, 0, 0)
        for i in range(0, len(self.lettres)):
            for j in range(0, len(self.lettres)):
                if self.plateau[i][j] == 0:
                    self.plateau[i][j] = 1
                    (m, max_i, max_j) = self.max()
                    if m < minv:
                        minv = m
                        qx = i
                        qy = j
                    self.plateau[i][j] = 0
        return (minv, qx, qy)

    def play(self):
        while True:
            print(self)
            self.result = self.is_end()
            # Print the appropriate message if the game has ended
            if self.result != None:
                if self.result == 2:
                    print('The winner is X!')
                elif self.result == 1:
                    print('The winner is O!')
                elif self.result == '.':
                    print("It's a tie!")
                return
            # If it's the player's turn
            if self.player_turn == 2:
                while True:
                    start = time.time()
                    print("coucou")
                    (m, qx, qy) = self.min()
                    end = time.time()
                    print('Evaluation time: {}s'.format(round(end - start, 7)))
                    print('Recommended move: X = {}, Y = {}'.format(qx, qy))
                    px = int(input('Insert the X coordinate: '))
                    py = int(input('Insert the Y coordinate: '))
                    (qx, qy) = (px, py)
                    # note: is_valid and current_state are leftovers from the original
                    # tic-tac-toe example; this class only defines self.plateau
                    if self.is_valid(px, py):
                        self.current_state[px][py] = 2
                        self.player_turn = 1
                        break
                    else:
                        print('The move is not valid! Try again.')
            # If it's the AI's turn
            else:
                (m, px, py) = self.max()
                self.plateau[px][py] = 1
                self.player_turn = 2

    def max(self):
        # Possible values for maxv are:
        # -1 - loss
        #  0 - a tie
        #  1 - win
        # We're initially setting it to -2 as worse than the worst case:
        maxv = -2
        px = None
        py = None
        result = self.is_end()
        # If the game came to an end, the function needs to return
        # the evaluation function of the end. That can be:
        # -1 - loss, 0 - a tie, 1 - win
        if result == 0:
            return (-1, 0, 0)
        elif result == 1:
            return (1, 0, 0)
        elif result == False:
            return (0, 0, 0)
        for i in range(0, len(self.lettres)):
            print("coucou")
            for j in range(0, len(self.lettres)):
                if self.plateau[i][j] == 0:
                    # On the empty field player 'O' makes a move and calls Min.
                    # That's one branch of the game tree.
                    self.plateau[i][j] = 2
                    (m, min_i, min_j) = self.min()
                    # Fixing the maxv value if needed
                    if m > maxv:
                        maxv = m
                        px = i
                        py = j
                    # Setting the field back to empty
                    self.plateau[i][j] = 0
        return (maxv, px, py)

    def TerminalTest(self, state):
        for c in range(len(self.lettres) - 4):
            for r in range(len(self.lettres)):
                if state[r][c] == state[r][c+1] == state[r][c+2] == state[r][c+3] == state[r][c+4] and state[r][c] != 0:
                    return True
        for c in range(len(self.lettres)):
            for r in range(len(self.lettres) - 4):
                if state[r][c] == state[r+1][c] == state[r+2][c] == state[r+3][c] == state[r+4][c] and state[r][c] != 0:
                    return True
        for c in range(len(self.lettres) - 4):
            for r in range(4, len(self.lettres)):
                if state[r][c] == state[r-1][c+1] == state[r-2][c+2] == state[r-3][c+3] == state[r-4][c+4] and state[r][c] != 0:
                    return True
        for c in range(len(self.lettres) - 4):
            for r in range(len(self.lettres) - 4):
                if state[r][c] == state[r+1][c+1] == state[r+2][c+2] == state[r+3][c+3] == state[r+4][c+4] and state[r][c] != 0:
                    return True
        return False

    def Saisie(self):  # ok
        secu = False
        while secu == False:
            saisie = input("\t Coordonnées (D;7) \t : ")
            try:
                saisieTab = saisie.split(";")
                if len(saisieTab) == 2 and int(saisieTab[1]) > 0 and int(saisieTab[1]) <= len(self.lettres) \
                        and "".join(saisieTab[0].split()).upper() in self.lettres:
                    secu = True
                    saisiex = int(saisieTab[1])
                    saisiey = "".join(saisieTab[0].split()).upper()
                    if self.plateau[saisiex-1][self.lettres[saisiey]-1] == 1 or self.plateau[saisiex-1][self.lettres[saisiey]-1] == 2:
                        secu = False
            except:
                print("Saisie invalide, veuillez réessayer")
        return (saisiex-1, self.lettres[saisiey]-1)

    def Appliquer(self, i, j):
        self.plateau[i][j] = self.joueur + 1
        self.nbPions += 1
        if self.joueur == 1:
            self.joueur = 0
        elif self.joueur == 0:
            self.joueur = 1

    """ Long Pro opening (draft, disabled)
    def JouerLongPro(self):
        ##
        ## If Player 1 starts
        ##
        if (self.joueur == 0):
            # Player 1 plays in the centre of the board ("human")
            self.Appliquer(8, 8)
            # Our AI places its stone ("IA")
            self.MinMax()  # to adjust if it does not place the stone
            # Read a move inside the 7x7 square ("human")
            secu = False
            while secu == False:
                print(" MERCI DE SAISIR DES VALEURS ENTRE 5 & 10 ET E & K")
                print(self)
                x, y = self.Saisie()
                if (x > 4 and x < 9 and y > 4 and y < 9):
                    secu = True
            self.Appliquer(x, y)
            # (normal game continues)
        else:
            ##
            ## If Player 2 starts
            ##
            # Player 2 plays in the centre of the board (IA)
            self.Appliquer(8, 8)
            print(self)
            # (human)
            x, y = self.Saisie()
            self.Appliquer(x, y)
            # Rank the values found for each candidate action (IA)
            a = self.Actions(self.plateau)
            cpt = 0
            for i in range(len(a)):
                if (a[i][0] > 5 or a[i][0] < 11) or (a[i][1] > 5 or a[i][1] < 11):
                    a.pop(i)
                    i -= 1
            b = (MinValue(Result(action, a)) for action in a)
            dico = dict(a, b)  # warning: this line could crash
            dico = sorted(dico.items(), key=lambda x: x[1], reverse=True)
            actionValide = dico.keys()[0]
            application_result(self.plateau, actionValide[0], actionValide[1], self.joueur)
    """

    def JouerADeux(self):
        boucle = True
        while (boucle):
            if self.joueur == 0:
                print(self)
                x, y = self.Saisie()
                self.Appliquer(x, y)
            elif self.joueur == 1:
                print(self)
                x, y = self.Saisie()
                self.Appliquer(x, y)
            else:
                print("Erreur")
            if self.TerminalTest(self.plateau) or self.nbPions >= 120:
                boucle = False
        print("\n\n************************************************************")
        print("***                  FIN DE LA PARTIE                    ***")
        print("***                                                      ***")
        if self.joueur == 0:
            print("***   Le joueur 1 à gagné la partie! (pion noir)         ***")
        elif self.joueur == 1:
            print("***   Le joueur 2 à gagné la partie! (pion blanc)        ***")
        else:
            print("***                ERREUR GAGNANT                        ***")
        print("***                                                      ***")
        print("***   Vous avez utilisé", self.nbPions, "pions.          ***")
        print("***                                                      ***")
        print("************************************************************")

    def JouerIA(self):
        boucle = True
        while (boucle):
            if self.joueur == 0:
                print(self)
                x, y = self.Saisie()
                self.Appliquer(x, y)
            elif self.joueur == 1:
                print(self)
                print(len(self.Actions(self.plateau)))
                self.MinMax()
            else:
                print("Erreur")
            if self.TerminalTest(self.plateau) or self.nbPions >= 120:
                boucle = True  # warning: should be False, otherwise the loop never ends
        print("fin de la partie")


# gomoko = Gomoko(0)
# gomoko.JouerADeux()
g = Gomoko(3)
g.play()
# Menu()


import time


def Menu():
    print("\n\n***** Bonjour et bienvenue dans le jeux Gomoko *****")
    saisieValide = False
    joueur = 0
    while (saisieValide == False):
        print("\nVeuillez choisir un joueur\n")
        print("\t1 - Joueur 1\n\t2 - Joueur 2")
        joueur = input("Votre choix : ")
        try:
            joueur = int(joueur)
            saisieValide = (joueur == 1) or (joueur == 2)
            if (not saisieValide):
                print("Saisie invalide, veuillez réessayer")
        except:
            print("Saisie invalide, veuillez réessayer")
    gomoko = Gomoko(joueur)
    gomoko.Jouer()


def timer(fonction):
    """
    Decorator that times the execution of a function.
    Used to measure the processing time of MinMax.

    Parameters
    ----------
    fonction : function to which the timer is applied

    Returns
    -------
    None.
    """
    def inner(*args, **kwargs):
        # Core of the timing wrapper
        t = time.time()  # start time
        f = fonction(*args, **kwargs)
        print(time.time() - t)  # print the elapsed time (in seconds)
        return f
    return inner


def Duplicate(t2):
    # deep copy of a 2D board (list.copy() only copies the outer list)
    t = []
    for a in t2:
        k = []
        for b in a:
            if (b >= 3):
                print("Y a un 3")
            k.append(b)
        t.append(k)
    return t


class Gomoko():
    iteration = 0

    def __init__(self, joueur):  # ok
        self.plateau = [[0 for x in range(15)] for y in range(15)]
        self.nbPions = 0
        self.joueur = joueur
        self.lettres = {"A": 1, "B": 2, "C": 3, "D": 4, "E": 5, "F": 6, "G": 7, "H": 8,
                        "I": 9, "J": 10, "K": 11, "L": 12, "M": 13, "N": 14, "O": 15}
        # self.lettres = {"A": 1, "B": 2, "C": 3, "D": 4, "E": 5}

    def __str__(self):  # ok
        rep = "\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t Pions\t : " + str(self.nbPions)
        rep += "\n\t "
        rep += " ".join(self.lettres)
        if self.joueur == 0:
            rep += "\t\t\t joueur 1 : O (noir) \n"
        elif self.joueur == 1:
            rep += "\t\t\t joueur 2 : X (blanc) \n"
        else:
            rep += "\n"
        for i in range(len(self.lettres)):
            for j in range(len(self.lettres)):
                if self.plateau[i][j] == 0:
                    if j == 0:
                        if i < 9:
                            rep += "\t " + str(i+1) + " . "
                        else:
                            rep += "\t" + str(i+1) + " . "
                    else:
                        rep += ". "
                elif self.plateau[i][j] == 1:
                    if j == 0:
                        if i < 9:
                            rep += "\t " + str(i+1) + " O "
                        else:
                            rep += "\t" + str(i+1) + " O "
                    else:
                        rep += "O "
                else:
                    if j == 0:
                        if i < 9:
                            rep += "\t " + str(i+1) + " X "
                        else:
                            rep += "\t" + str(i+1) + " X "
                    else:
                        rep += "X "
            rep += "\n"
        return rep

    def MaxValue(self, state):
        print("Max")
        t = Duplicate(state)
        if self.TerminalTest(t) != 0:
            return self.Utility(t)
        v = -float("inf")
        k = 0
        for a in self.Actions(t, self.CasesPossibles(t)):
            v = max(v, self.MinValue(self.application_result(t, a[0], a[1], self.joueur)))
        return v

    def application_result(self, state, i, j, joueur):
        state[i][j] = joueur + 1
        return state

    def MinValue(self, state):
        print("Min")
        t = Duplicate(state)
        if self.TerminalTest(t) != 0:
            print("Ici2")
            return self.Utility(t)
        print("Ici")
        v = float("inf")
        if (self.joueur == 1):
            k = 0
        else:
            k = 1
        for a in self.Actions(t, self.CasesPossibles(t)):
            v = min(v, self.MaxValue(self.application_result(t, a[0], a[1], k)))
        return v

    @timer
    def MinMax(self):
        # Rank the values found for each candidate action
        # print("Plateau de base :", self.plateau)
        t = Duplicate(self.plateau)
        cas = self.CasesPossibles(t)
        a = self.Actions(t, cas)
        # print("actions ", a)
        # print("taille actions ", len(a))
        b = []
        # print("Actions : ", a)
        if (self.joueur == 1):
            k = 0
        else:
            k = 1
        for action in a:
            b.append(self.MinValue(self.application_result(t, action[0], action[1], k)))
        # print("b = ", b)
        a2 = []
        for t in a:
            c = str(t[0]) + " " + str(t[1])
            a2.append(c)
        zip_iterator = zip(a2, b)
        dico = dict(zip_iterator)
        dico = dict(sorted(dico.items(), key=lambda x: x[1], reverse=True))
        # print(dico)
        # print(dico.keys())
        actionValide = list(dico.keys())[0].split(sep=" ")
        actionValide[0] = int(actionValide[0])
        actionValide[1] = int(actionValide[1])
        self.application_result(self.plateau, actionValide[0], actionValide[1], self.joueur)

    def CasesPossibles(self, state):
        # keep only the cells within distance 2 of an occupied cell,
        # to prune the search space
        b = []
        for i in range(len(state)):
            for j in range(len(state)):
                cpt = 0
                for k in range(-2, 3):
                    for l in range(-2, 3):
                        if ((i+k >= 0 and i+k < len(state)) and (j+l >= 0 and j+l < len(state))):
                            if (state[i+k][j+l] != 0):
                                b.append([i, j])
                                cpt = 1
                                break
        return b

    def Actions(self, state, cas):
        tab_actions = []
        casO = []
        casI = []
        for a in cas:
            casO.append(a[0])
            casI.append(a[1])
        print(casO)
        print(casI)
        [tab_actions.append([i, j]) if state[i][j] == 0 else ""
         for i in casO for j in casI]
        return tab_actions

    def Utility2(self, state):
        if (self.TerminalTest(state) == self.joueur + 1):
            print("-1")
            return -1
        t = self.joueur + 1
        if (t == 1):
            t = 2
        else:
            t = 1
        if (self.TerminalTest(state) == t):
            print("1")
            return 1
        print("0")
        return 0

    def Utility(self, state):
        if (self.TerminalTest(state) == 1):
            print(-1)
            return -1
        elif (self.TerminalTest(state) == 2):
            print(1)
            return 1
        else:
            print(0)
            return 0

    def TerminalTest(self, state):
        # returns the winning player's value (1 or 2), or 0 if no five in a row
        for c in range(len(self.lettres) - 4):
            for r in range(len(self.lettres)):
                if state[r][c] == state[r][c+1] == state[r][c+2] == state[r][c+3] == state[r][c+4] and state[r][c] != 0:
                    return state[r][c]
        for c in range(len(self.lettres)):
            for r in range(len(self.lettres) - 4):
                if state[r][c] == state[r+1][c] == state[r+2][c] == state[r+3][c] == state[r+4][c] and state[r][c] != 0:
                    return state[r][c]
        for c in range(len(self.lettres) - 4):
            for r in range(4, len(self.lettres)):
                if state[r][c] == state[r-1][c+1] == state[r-2][c+2] == state[r-3][c+3] == state[r-4][c+4] and state[r][c] != 0:
                    return state[r][c]
        for c in range(len(self.lettres) - 4):
            for r in range(len(self.lettres) - 4):
                if state[r][c] == state[r+1][c+1] == state[r+2][c+2] == state[r+3][c+3] == state[r+4][c+4] and state[r][c] != 0:
                    return state[r][c]
        return 0

    def Saisie(self):  # ok
        secu = False
        while secu == False:
            saisie = input("\t Coordonnées (D;7) \t : ")
            try:
                saisieTab = saisie.split(";")
                if len(saisieTab) == 2 and int(saisieTab[1]) > 0 and int(saisieTab[1]) <= len(self.lettres) \
                        and "".join(saisieTab[0].split()).upper() in self.lettres:
                    secu = True
                    saisiex = int(saisieTab[1])
                    saisiey = "".join(saisieTab[0].split()).upper()
                    if self.plateau[saisiex-1][self.lettres[saisiey]-1] == 1 or self.plateau[saisiex-1][self.lettres[saisiey]-1] == 2:
                        secu = False
            except:
                print("Saisie invalide, veuillez réessayer")
        return (saisiex-1, self.lettres[saisiey]-1)

    def Appliquer(self, i, j):
        self.plateau[i][j] = self.joueur + 1
        self.nbPions += 1
        if self.joueur == 1:
            self.joueur = 0
        elif self.joueur == 0:
            self.joueur = 1

    """ Long Pro opening (draft, disabled)
    def JouerLongPro(self):
        ##
        ## If Player 1 starts
        ##
        if (self.joueur == 0):
            # Player 1 plays in the centre of the board ("human")
            self.Appliquer(8, 8)
            # Our AI places its stone ("IA")
            self.MinMax()  # to adjust if it does not place the stone
            # Read a move inside the 7x7 square ("human")
            secu = False
            while secu == False:
                print(" MERCI DE SAISIR DES VALEURS ENTRE 5 & 10 ET E & K")
                print(self)
                x, y = self.Saisie()
                if (x > 4 and x < 9 and y > 4 and y < 9):
                    secu = True
            self.Appliquer(x, y)
            # (normal game continues)
        else:
            ##
            ## If Player 2 starts
            ##
            # Player 2 plays in the centre of the board (IA)
            self.Appliquer(8, 8)
            print(self)
            # (human)
            x, y = self.Saisie()
            self.Appliquer(x, y)
            # Rank the values found for each candidate action (IA)
            a = self.Actions(self.plateau)
            cpt = 0
            for i in range(len(a)):
                if (a[i][0] > 5 or a[i][0] < 11) or (a[i][1] > 5 or a[i][1] < 11):
                    a.pop(i)
                    i -= 1
            b = (MinValue(Result(action, a)) for action in a)
            dico = dict(a, b)  # warning: this line could crash
            dico = sorted(dico.items(), key=lambda x: x[1], reverse=True)
            actionValide = dico.keys()[0]
            application_result(self.plateau, actionValide[0], actionValide[1], self.joueur)
    """

    def JouerADeux(self):
        boucle = True
        while (boucle):
            if self.joueur == 0:
                print(self)
                x, y = self.Saisie()
                self.Appliquer(x, y)
            elif self.joueur == 1:
                print(self)
                x, y = self.Saisie()
                self.Appliquer(x, y)
            else:
                print("Erreur")
            if self.TerminalTest(self.plateau) != 0 or self.nbPions >= 120:
                boucle = False
        print("\n\n************************************************************")
        print("***                  FIN DE LA PARTIE                    ***")
        print("***                                                      ***")
        if self.joueur == 0:
            print("***   Le joueur 1 à gagné la partie! (pion noir)         ***")
        elif self.joueur == 1:
            print("***   Le joueur 2 à gagné la partie! (pion blanc)        ***")
        else:
            print("***                ERREUR GAGNANT                        ***")
        print("***                                                      ***")
        print("***   Vous avez utilisé", self.nbPions, "pions.          ***")
        print("***                                                      ***")
        print("************************************************************")

    def JouerIA(self):
        boucle = True
        while (boucle):
            if self.joueur == 0:
                print(self)
                x, y = self.Saisie()
                self.Appliquer(x, y)
            elif self.joueur == 1:
                print(self)
                print(len(self.Actions(self.plateau, self.CasesPossibles(self.plateau))))
                self.MinMax()
                print("Ici")
                self.joueur = 0
            else:
                print("Erreur")
            if self.TerminalTest(self.plateau) or self.nbPions >= 120:
                boucle = True  # warning: should be False, otherwise the loop never ends
        print("fin de la partie")


gomoko = Gomoko(0)
gomoko.JouerIA()
# Menu()
```
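The winner detection in `TerminalTest` above scans rows, columns, and both diagonals for five equal, non-empty cells. A minimal standalone sketch of that same check (same board encoding as the classes above: 0 for empty, 1/2 for the two players), handy for quick verification:

```python
def five_in_a_row(state):
    """Return the winning player's value (1 or 2), or 0 if nobody has five in a row."""
    n = len(state)
    # direction vectors: horizontal, vertical, descending and ascending diagonals
    for dr, dc in ((0, 1), (1, 0), (1, 1), (-1, 1)):
        for r in range(n):
            for c in range(n):
                if state[r][c] == 0:
                    continue
                # only check lines whose fifth cell stays on the board
                if 0 <= r + 4 * dr < n and 0 <= c + 4 * dc < n:
                    if all(state[r + k * dr][c + k * dc] == state[r][c] for k in range(5)):
                        return state[r][c]
    return 0

board = [[0] * 15 for _ in range(15)]
for c in range(5):
    board[7][3 + c] = 2          # five 'X' stones across row 7
print(five_in_a_row(board))      # 2
```

Unlike the four separate loop nests in `TerminalTest`, this folds the four directions into one loop over direction vectors; the result is the same.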
# Introduction to Python part IV

(And a discussion of linear transformations)

## Activity 1: Discussion of linear transformations

* Orthogonality also plays a key role in understanding linear transformations. How can we understand linear transformations in terms of a composition of rotations and diagonal matrices? There are two specific matrix factorizations that arise this way — can you name them and describe the conditions under which they are applicable?
* What is a linear inverse problem? What conditions guarantee a solution?
* What is a pseudo-inverse? How is this related to an orthogonal projection? How is this related to the linear inverse problem?
* What is a weighted norm and what is a weighted pseudo-norm?

## Activity 2: Basic data analysis and manipulation

```
import numpy as np
```

### Exercise 1:

Arrays can be concatenated and stacked on top of one another, using NumPy's `vstack` and `hstack` functions for vertical and horizontal stacking, respectively.

```
A = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])
print('A = ')
print(A)

B = np.hstack([A, A])
print('B = ')
print(B)

C = np.vstack([A, A])
print('C = ')
print(C)
```

Write some additional code that slices the first and last columns of A, and stacks them into a 3x2 array. Make sure to print the results to verify your solution.

Note a 'gotcha' with array indexing is that singleton dimensions are dropped by default. That means `A[:, 0]` is a one-dimensional array, which won't stack as desired. To preserve singleton dimensions, the index itself can be a slice or array. For example, `A[:, :1]` returns a two-dimensional array with one singleton dimension (i.e. a column vector).

```
D = np.hstack((A[:, :1], A[:, -1:]))
print('D = ')
print(D)
```

An alternative way to achieve the same result is to use NumPy's `delete` function to remove the second column of A. Use the search function for the documentation on the `np.delete` function to find the syntax for constructing such an array.
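One possible solution sketch using `np.delete` (the exercise asks you to look up the syntax yourself, so treat this as a spoiler): `np.delete(arr, obj, axis)` removes the given index along the given axis.

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])

# Remove column index 1 (the second column) along axis=1,
# leaving the first and last columns side by side.
D = np.delete(A, 1, axis=1)
print(D)   # [[1 3]
           #  [4 6]
           #  [7 9]]
```

Because `np.delete` operates on whole rows or columns along the chosen axis, it never drops singleton dimensions, so the result stacks cleanly.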
### Exercise 2:

The patient data is longitudinal in the sense that each row represents a series of observations relating to one individual. This means that the change in inflammation over time is a meaningful concept. Let's find out how to calculate changes in the data contained in an array with NumPy.

The `np.diff` function takes an array and returns the differences between two successive values. Let's use it to examine the changes each day across the first week of patient 3 from our inflammation dataset.

```
patient3_week1 = data[3, :7]
print(patient3_week1)
```

Calling `np.diff(patient3_week1)` would do the following calculations

`[ 0 - 0, 2 - 0, 0 - 2, 4 - 0, 2 - 4, 2 - 2 ]`

and return the 6 difference values in a new array.

```
np.diff(patient3_week1)
```

Note that the array of differences is shorter by one element (length 6).

When calling `np.diff` with a multi-dimensional array, an `axis` argument may be passed to the function to specify which axis to process. When applying `np.diff` to our 2D inflammation array `data`, which axis would we specify?

Take the differences along the appropriate axis and compute a basic summary of the differences with our standard statistics above.

If the shape of an individual data file is (60, 40) (60 rows and 40 columns), what is the shape of the array after you run the `np.diff` function and why? How would you find the largest change in inflammation for each patient? Does it matter if the change in inflammation is an increase or a decrease?

## Summary of key points

Some of the key takeaways from this activity are the following:

* Import a library into a program using import libraryname.
* Use the numpy library to work with arrays in Python.
* The expression `array.shape` gives the shape of an array.
* Use `array[x, y]` to select a single element from a 2D array.
* Array indices start at 0, not 1.
* Use `low:high` to specify a slice that includes the indices from `low` to `high-1`.
* Use `# some kind of explanation` to add comments to programs. * Use `np.mean(array)`, `np.std(array)`, `np.quantile(array)`, `np.max(array)`, and `np.min(array)` to calculate simple statistics. * Use `sp.mode(array)` to compute additional statistics. * Use `np.mean(array, axis=0)` or `np.mean(array, axis=1)` to calculate statistics across the specified axis.
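The `np.diff` questions in Exercise 2 can be sketched with a small made-up array in place of the real inflammation file (the lesson loads `data` from a CSV, so the values here are illustrative only). Days run along axis 1, so day-to-day changes are differences along `axis=1`, and the result has one fewer column:

```python
import numpy as np

# Stand-in for the (60, 40) inflammation array: 3 patients, 5 days.
data = np.array([[0, 2, 0, 4, 2],
                 [1, 1, 3, 2, 5],
                 [0, 1, 2, 2, 1]])

changes = np.diff(data, axis=1)
print(changes.shape)                      # (3, 4) -- one fewer column

# Largest change per patient; take np.absolute first if a large
# *decrease* should count as a large change too.
print(np.max(changes, axis=1))            # [4 3 1]
print(np.max(np.absolute(changes), axis=1))
```

By the same reasoning, a (60, 40) file gives a (60, 39) array of differences: 40 daily readings yield only 39 successive changes.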
View more, visit my tutorial page: https://morvanzhou.github.io/tutorials/ My Youtube Channel: https://www.youtube.com/user/MorvanZhou Dependencies: * torch: 0.1.11 * torchvision * matplotlib ``` import torch import torch.nn as nn from torch.autograd import Variable import torch.utils.data as Data import torchvision import matplotlib.pyplot as plt %matplotlib inline torch.manual_seed(1) # reproducible # Hyper Parameters EPOCH = 1 # train the training data n times, to save time, we just train 1 epoch BATCH_SIZE = 50 LR = 0.001 # learning rate DOWNLOAD_MNIST = True # set to False if you have downloaded # Mnist digits dataset train_data = torchvision.datasets.MNIST( root='./mnist/', train=True, # this is training data transform=torchvision.transforms.ToTensor(), # Converts a PIL.Image or numpy.ndarray to # torch.FloatTensor of shape (C x H x W) and normalize in the range [0.0, 1.0] download=DOWNLOAD_MNIST, # download it if you don't have it ) # plot one example print(train_data.train_data.size()) # (60000, 28, 28) print(train_data.train_labels.size()) # (60000) plt.imshow(train_data.train_data[0].numpy(), cmap='gray') plt.title('%i' % train_data.train_labels[0]) plt.show() # Data Loader for easy mini-batch return in training, the image batch shape will be (50, 1, 28, 28) train_loader = Data.DataLoader(dataset=train_data, batch_size=BATCH_SIZE, shuffle=True) # convert test data into Variable, pick 2000 samples to speed up testing test_data = torchvision.datasets.MNIST(root='./mnist/', train=False) test_x = Variable(torch.unsqueeze(test_data.test_data, dim=1)).type(torch.FloatTensor)[:2000]/255. 
# shape from (2000, 28, 28) to (2000, 1, 28, 28), value in range (0, 1)
test_y = test_data.test_labels[:2000]


class CNN(nn.Module):
    def __init__(self):
        super(CNN, self).__init__()
        self.conv1 = nn.Sequential(       # input shape (1, 28, 28)
            nn.Conv2d(
                in_channels=1,            # input height
                out_channels=16,          # n_filters
                kernel_size=5,            # filter size
                stride=1,                 # filter movement/step
                padding=2,                # for same width and height after conv2d: padding=(kernel_size-1)/2 if stride=1
            ),                            # output shape (16, 28, 28)
            nn.ReLU(),                    # activation
            nn.MaxPool2d(kernel_size=2),  # choose max value in 2x2 area, output shape (16, 14, 14)
        )
        self.conv2 = nn.Sequential(       # input shape (16, 14, 14)
            nn.Conv2d(16, 32, 5, 1, 2),   # output shape (32, 14, 14)
            nn.ReLU(),                    # activation
            nn.MaxPool2d(2),              # output shape (32, 7, 7)
        )
        self.out = nn.Linear(32 * 7 * 7, 10)  # fully connected layer, output 10 classes

    def forward(self, x):
        x = self.conv1(x)
        x = self.conv2(x)
        x = x.view(x.size(0), -1)  # flatten the output of conv2 to (batch_size, 32 * 7 * 7)
        output = self.out(x)
        return output, x           # return x for visualization


cnn = CNN()
print(cnn)  # net architecture

optimizer = torch.optim.Adam(cnn.parameters(), lr=LR)  # optimize all cnn parameters
loss_func = nn.CrossEntropyLoss()  # the target label is not one-hotted

# following function (plot_with_labels) is for visualization, can be ignored if not interested
from matplotlib import cm
try:
    from sklearn.manifold import TSNE
    HAS_SK = True
except ImportError:
    HAS_SK = False
    print('Please install sklearn for layer visualization')

def plot_with_labels(lowDWeights, labels):
    plt.cla()
    X, Y = lowDWeights[:, 0], lowDWeights[:, 1]
    for x, y, s in zip(X, Y, labels):
        c = cm.rainbow(int(255 * s / 9))
        plt.text(x, y, s, backgroundcolor=c, fontsize=9)
    plt.xlim(X.min(), X.max())
    plt.ylim(Y.min(), Y.max())
    plt.title('Visualize last layer')
    plt.show()
    plt.pause(0.01)

plt.ion()
# training and testing
for epoch in range(EPOCH):
    for step, (x, y) in enumerate(train_loader):  # gives batch data, normalizes x when iterating train_loader
        b_x = Variable(x)  # batch x
        b_y = Variable(y)  # batch y

        output = cnn(b_x)[0]           # cnn output
        loss = loss_func(output, b_y)  # cross entropy loss
        optimizer.zero_grad()          # clear gradients for this training step
        loss.backward()                # backpropagation, compute gradients
        optimizer.step()               # apply gradients

        if step % 100 == 0:
            test_output, last_layer = cnn(test_x)
            pred_y = torch.max(test_output, 1)[1].data.squeeze()
            accuracy = (pred_y == test_y).sum().item() / float(test_y.size(0))
            print('Epoch: ', epoch, '| train loss: %.4f' % loss.item(), '| test accuracy: %.2f' % accuracy)
            if HAS_SK:
                # Visualization of trained flatten layer (T-SNE)
                tsne = TSNE(perplexity=30, n_components=2, init='pca', n_iter=5000)
                plot_only = 500
                low_dim_embs = tsne.fit_transform(last_layer.data.numpy()[:plot_only, :])
                labels = test_y.numpy()[:plot_only]
                plot_with_labels(low_dim_embs, labels)
plt.ioff()

# print 10 predictions from test data
test_output, _ = cnn(test_x[:10])
pred_y = torch.max(test_output, 1)[1].data.numpy().squeeze()
print(pred_y, 'prediction number')
print(test_y[:10].numpy(), 'real number')
```
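The shape comments in the network above follow the standard convolution output-size formula, out = (in + 2*padding - kernel) / stride + 1. A quick sketch in plain Python (the helper name `conv_out` is mine) reproduces the shapes in the comments:

```python
def conv_out(size, kernel, stride=1, padding=0):
    """Output spatial size of a conv/pool layer: floor((size + 2p - k) / s) + 1."""
    return (size + 2 * padding - kernel) // stride + 1

# conv1: 28x28 input, 5x5 kernel, stride 1, padding 2 keeps the size ("same" padding)
print(conv_out(28, kernel=5, stride=1, padding=2))  # 28
# a 2x2 max-pool with stride 2 halves the size
print(conv_out(28, kernel=2, stride=2))             # 14
# conv2 + pool: 14 -> 14 -> 7, so the flattened feature size is 32 * 7 * 7
print(32 * conv_out(conv_out(14, 5, 1, 2), 2, 2) ** 2)  # 1568
```

This is why the fully connected layer is declared as `nn.Linear(32 * 7 * 7, 10)`.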
``` import torch torch.cuda.is_available() # check if GPU is available ``` # Automatic differentiation using Autograd ``` x = torch.ones(2, 2, requires_grad=True) print(x) y = x + 2 print(y) z = y * y * 3 out = z.mean() print(z, out) a = torch.randn(2, 2) a = ((a * 3) / (a - 1)) print(a.requires_grad) a.requires_grad_(True) print(a.requires_grad) b = (a * a).sum() print(b.grad_fn) out.backward() print(x.grad) # d(out) / dx x = torch.randn(3, requires_grad=True) y = x * 2 while y.data.norm() < 1000: y = y * 2 print(x) print(y) ``` # Linear regression manually ``` import numpy as np ``` Try to learn the function $f(x) = 2x$ ``` X = np.array([1, 2, 3, 4], dtype=np.float32) Y = np.array([2, 4, 6, 8], dtype=np.float32) w = 0.0 # model prediction def forward(x): return w * x # MSE loss def loss(y, y_pred): return ((y - y_pred) ** 2).mean() # gradient def gradient(x, y, y_pred): return (np.dot(2*x, y_pred - y)).mean() print(f'Prediction before training: f(5) = {forward(5):.3f}') # training learning_rate = 0.01 n_iters = 20 for epoch in range(n_iters): y_pred = forward(X) l = loss(Y, y_pred) dw = gradient(X, Y, y_pred) w -= learning_rate * dw if epoch % 2 == 0: print(f'Epoch {epoch+1}: weights = {w:.3f}, loss = {l:.8f}') print(f'Prediction after training: f(5) = {forward(5):.3f}') ``` # Linear regression using pytorch (only gradient computation) ``` import torch X = torch.tensor([1, 2, 3, 4], dtype=torch.float32) Y = torch.tensor([2, 4, 6, 8], dtype=torch.float32) w = torch.tensor(0.0, dtype=torch.float32, requires_grad=True) # training learning_rate = 0.01 n_iters = 50 for epoch in range(n_iters): y_pred = forward(X) l = loss(Y, y_pred) l.backward() # compute dl/dw # update gradients with torch.no_grad(): w -= learning_rate * w.grad # zero gradients in-place (very imp!) 
        # otherwise gradients will accumulate over iterations
        w.grad.zero_()

    if epoch % 5 == 0:
        print(f'Epoch {epoch+1}: weights = {w:.3f}, loss = {l:.8f}')

print(f'Prediction after training: f(5) = {forward(5):.3f}')
```

**Observation**: Autograd needs more iterations here, but not because it is less exact: it computes the exact gradient of the mean squared error. The manual NumPy `gradient` function above actually returns the *sum* of the per-sample gradients (`np.dot` already reduces to a scalar, so the trailing `.mean()` has no effect), which on this four-point dataset is four times the true gradient. The manual version therefore takes effectively larger steps at the same learning rate.

# Linear regression using pytorch

```
import torch
import torch.nn as nn

X = torch.tensor([1, 2, 3, 4], dtype=torch.float32)
Y = torch.tensor([2, 4, 6, 8], dtype=torch.float32)

# pytorch expects data in format n_samples, n_features
# therefore we need to reshape the data
X = X.view(X.shape[0], 1)
Y = Y.view(Y.shape[0], 1)

print(X)
print(Y)

# define the model
class LinearRegression(nn.Module):
    def __init__(self, input_size, output_size):
        super(LinearRegression, self).__init__()
        self.linear = nn.Linear(input_size, output_size)

    def forward(self, x):
        return self.linear(x)

n_samples, n_features = X.shape
input_size = n_features
output_size = 1
model = LinearRegression(input_size, output_size)

learning_rate = 0.01

# define loss
criterion = nn.MSELoss()

# define optimizer
# model.parameters() are the weights
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)

# training loop
n_iters = 100
for epoch in range(n_iters):
    y_pred = model(X)         # forward pass
    l = criterion(y_pred, Y)  # compute loss
    l.backward()              # backward pass
    optimizer.step()          # update weights
    optimizer.zero_grad()

    w, b = model.parameters()
    if epoch % 5 == 0:
        print(f'Epoch {epoch+1}: weights = {w[0][0]:.3f}, loss = {l:.8f}')

X_test = torch.tensor([5], dtype=torch.float32)  # test point must also be a tensor
print(f'Prediction after training: f(5) = {model(X_test).item():.3f}')

model(X_test).detach()

for parameter in model.parameters():
    print(parameter)

print(model)
```
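As a sanity check on the autograd section above: `out.backward()` reports `x.grad` of 4.5 for every entry, since out = mean(3(x+2)^2) gives d(out)/dx_i = 6(x_i+2)/4 = 4.5 at x_i = 1. A finite-difference check in plain Python (helper names are mine) recovers the same value:

```python
def out(xs):
    # out = mean of 3 * (x + 2)^2 over the entries, as in the autograd example
    return sum(3 * (x + 2) ** 2 for x in xs) / len(xs)

xs = [1.0, 1.0, 1.0, 1.0]
h = 1e-6

# Numerical partial derivative with respect to the first entry
bumped = [xs[0] + h] + xs[1:]
grad_0 = (out(bumped) - out(xs)) / h
print(round(grad_0, 3))  # 4.5, matching x.grad from autograd
```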
# **Welcome to the Python Workshop! Here is your starter code**

```
%matplotlib inline
```

### Try out hello world below! It's easy I promise.

```
print("Hello World")
```

## What are we analyzing?

We're going to be looking at the sales numbers over a 5 year period of batmobiles for Wayne Enterprises. We can see that data below here, press ctrl+enter in the code block to run it!

```
import pandas as pd

sales = [455, 495, 516, 570, 575]
years = [1, 2, 3, 4, 5]

df = pd.DataFrame({'Total Sales($)': sales, 'Year': years})
df
```

## Let's do some simple math!

In the blocks below, try your hand at calculating the mean, variance and standard deviation. Feel free to comment out the code and see how easy it is to work in python.

```
def calcMean(data):
    totalSum = sum(data)
    numTotal = len(data)
    mean = totalSum / numTotal
    return mean

calcMean(sales)

def calcVariance(data):
    mean = calcMean(data)
    var = sum(pow(x - mean, 2) for x in data) / len(data)
    return var

foundVar = calcVariance(sales)
print(foundVar)

import math

def calcStd(data):
    std = math.sqrt(calcVariance(data))
    return std

print(calcStd(sales))
```

## We can make it easier than that!

The above was the beauty of python, easy to work with and follow along. But we can make the calculations easier than that! Enter numpy!

```
import numpy as np

np.mean(sales)
np.var(sales)
np.std(sales)
```

## What about linear regression?

Refresher: https://www.statisticshowto.com/probability-and-statistics/regression-analysis/find-a-linear-regression-equation/

Little bit more dicey right? Yeah it sucks.

## Here's the quick and easy alternative...
``` linregressModel = np.polyfit(years, sales, 1) slope = linregressModel[0] intercept = linregressModel[1] print("slope: %f intercept: %f" % (linregressModel[0], linregressModel[1])) modelPredictions = np.polyval(linregressModel, years) absError = modelPredictions - sales SE = np.square(absError) # squared errors MSE = np.mean(SE) # mean squared errors RMSE = np.sqrt(MSE) # Root Mean Squared Error, RMSE Rsquared = 1.0 - (np.var(absError) / np.var(sales)) print('RMSE:', RMSE) print('R-squared:', Rsquared) import matplotlib.pyplot as plt x = np.array(years) y = np.array(sales) plt.plot(x, y, 'o', label='original data') plt.plot(x, intercept + slope*x, 'r', label='fitted line') plt.xlabel('Years', fontsize=15) plt.ylabel('Sales', fontsize=15) plt.legend() plt.show() ``` ## What if we wanted to predict the sales in the future? Well that's easy! We can just use the linear regression model we have above to predict for year 6 for example. Feel free to change the yearToPredict! ``` yearToPredict = 6 predict = np.poly1d(linregressModel) predictedSale = predict(yearToPredict) x_new = np.array([yearToPredict]) y_new = np.array([predictedSale]) print(predictedSale) xFull = np.append(x, x_new) plt.plot(x_new, y_new, 'X', label='new data') plt.plot(x, y, 'o', label='original data') plt.plot(xFull, intercept + slope*xFull, 'g', label='fitted line') plt.xlabel('Years', fontsize=15) plt.ylabel('Sales', fontsize=15) plt.legend() plt.show() ``` ## What about a real life scenario? In real life, data is often a lot larger than just 5 sets of data. They could be in excel files, databases or you have to go hunting for them. Let's look at the sample dataset below and see what magic we can do with that with everything we learned above! Begin by reading in the sampleDataset.csv file :) ``` dfFull = pd.read_csv("sampleDataset.csv") #read in csv file ``` It's always good practice to explore what kind of data we're working with. 
Sure you can open the file and explore it for yourself, but it is often the case that datasets are huge and impractical to inspect manually. Thankfully, we have the pandas library to explore the data for us and give us an idea of what we're working with.

```
dfFull.info()
```

Looks like it's some kind of housing dataset...

## What about common statistics?

Well we can just use the .describe() command and it'll give us this beautiful chart of all the meaningful stats you'd ever need to get into data analysis with just one line of code for the entire dataset!

```
dfFull.describe()
```

Now let's actually look at the data! .head() gives us the first 5 rows but you can easily view more than that by adding a number in the brackets!

```
dfFull.head()
```

## Enter Linear Regression (again)!

Now that we have the data, let's just analyze it like we did earlier with the smaller data! Let's apply this to analyze the relationship between price and the sqft_living.

```
xData = np.array(dfFull['sqft_living'])
yData = np.array(dfFull['price'])

linregressModelFull = np.polyfit(xData, yData, 1)
slope = linregressModelFull[0]
intercept = linregressModelFull[1]
print("slope: %f intercept: %f" % (linregressModelFull[0], linregressModelFull[1]))

modelPredictions = np.polyval(linregressModelFull, xData)
absError = modelPredictions - yData

SE = np.square(absError)  # squared errors
MSE = np.mean(SE)         # mean squared errors
RMSE = np.sqrt(MSE)       # Root Mean Squared Error, RMSE
Rsquared = 1.0 - (np.var(absError) / np.var(yData))
print('RMSE:', RMSE)
print('R-squared:', Rsquared)

plt.plot(xData, yData, 'o', label='original data')
plt.plot(xData, intercept + slope*xData, 'r', label='fitted line')
plt.xlabel('Sqft Living', fontsize=15)
plt.ylabel('Price', fontsize=15)
plt.legend()
plt.show()

sqftToPredict = 20000
predict = np.poly1d(linregressModelFull)
predictedVal = predict(sqftToPredict)

x_new = np.array([sqftToPredict])
y_new = np.array([predictedVal])
print(predictedVal)

xFull = np.append(xData, x_new)

plt.plot(x_new, y_new, 'X', label='new data')
plt.plot(xData, yData, 'o', label='original data')
plt.plot(xFull, intercept + slope*xFull, 'g', label='fitted line')
plt.xlabel('Sqft Living', fontsize=15)
plt.ylabel('Price', fontsize=15)
plt.legend()
plt.show()
```

## Report Building

Now that you have all this work done, you might want to show it to someone or share it with other people. All you have to do is go to File in the top left corner and click Download as .pdf!
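The slope and intercept that `np.polyfit(..., 1)` returns match the textbook least-squares formulas from the refresher link above: slope = cov(x, y) / var(x), intercept = mean(y) - slope * mean(x). A small check on exact data (pure Python, helper name mine):

```python
def linfit(xs, ys):
    """Closed-form least-squares line: slope = cov(x, y) / var(x)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

# Perfectly linear data: y = 2x + 1
slope, intercept = linfit([1, 2, 3, 4], [3, 5, 7, 9])
print(slope, intercept)  # 2.0 1.0
```

On noisy data like the housing set, `np.polyfit` computes exactly this fit, just more efficiently and for arbitrary polynomial degrees.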
```
import cv2
import matplotlib.pyplot as plt
import time
import cProfile
import numpy as np
```

# Walker detection with openCV

## Open video and get video info

```
video_capture = cv2.VideoCapture('resources/TestWalker.mp4')

# From https://www.learnopencv.com/how-to-find-frame-rate-or-frames-per-second-fps-in-opencv-python-cpp/
# Find OpenCV version
(major_ver, minor_ver, subminor_ver) = (cv2.__version__).split('.')
print(major_ver, minor_ver, subminor_ver)

# With webcam get(CV_CAP_PROP_FPS) does not work.
# Let's see for ourselves.
if int(major_ver) < 3:
    fps = video_capture.get(cv2.cv.CV_CAP_PROP_FPS)
    print("Frames per second using video.get(cv2.cv.CV_CAP_PROP_FPS): {0}".format(fps))
else:
    fps = video_capture.get(cv2.CAP_PROP_FPS)
    print("Frames per second using video.get(cv2.CAP_PROP_FPS): {0}".format(fps))

# Number of frames to capture
num_frames = 120
print("Capturing {0} frames".format(num_frames))

# Start time
start = time.time()

# Grab a few frames
for i in range(0, num_frames):
    ret, frame = video_capture.read()

# End time
end = time.time()

# Time elapsed
seconds = end - start
print("Time taken: {0} seconds".format(seconds))

# Calculate frames per second
fps = num_frames / seconds
print("Estimated frames per second: {0}".format(fps))

# cProfile.runctx('video_capture.read()', globals(), locals(), 'profile.prof')
# use snakeviz to read the output of the profiling
```

## Track walker using difference between frames

Following http://www.codeproject.com/Articles/10248/Motion-Detection-Algorithms

```
def getSmallGrayFrame(video):
    ret, frame = video.read()
    if not ret:
        return ret, frame
    frameSmall = frame[::4, ::-4]
    gray = cv2.cvtColor(frameSmall, cv2.COLOR_BGR2GRAY)
    return ret, gray

# cv2.startWindowThread()
count = 0
for x in range(200):
    count = count + 1
    print(count)
    ret1, gray1 = getSmallGrayFrame(video_capture)
    ret2, gray2 = getSmallGrayFrame(video_capture)
    if not ret1 or not ret2:
        break

    diff = cv2.absdiff(gray1, gray2)
    print(np.amax(diff), np.amin(diff))

    diffThresh = cv2.threshold(diff, 15, 255, cv2.THRESH_BINARY)
    kernel = np.ones((3, 3), np.uint8)
    erosion = cv2.erode(diffThresh[1], kernel, iterations=1)
    dilation = cv2.dilate(erosion, kernel, iterations=1)

    color1 = cv2.cvtColor(gray1, cv2.COLOR_GRAY2RGB)
    colorDil = cv2.cvtColor(dilation, cv2.COLOR_GRAY2RGB)
    colorDil[:, :, 1:2] = colorDil[:, :, 1:2] * 0
    total = cv2.add(color1, colorDil)

    cv2.imshow('Video', total)
    cv2.imwrite('resources/frame{}.png'.format(x), total)

    if cv2.waitKey(1) & 0xFF == ord('q'):  # Need the cv2.waitKey to update plot
        break

# To close the windows: http://stackoverflow.com/questions/6116564/destroywindow-does-not-close-window-on-mac-using-python-and-opencv#15058451
cv2.waitKey(1000)
cv2.waitKey(1)
cv2.destroyAllWindows()
cv2.waitKey(1)
```
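The difference-and-threshold pipeline above can be sketched without OpenCV, using plain NumPy on two hypothetical tiny grayscale frames (the frames and helper name below are mine, for illustration only):

```python
import numpy as np

def motion_mask(frame1, frame2, thresh=15):
    """Absolute difference of two grayscale frames, binarized like cv2.absdiff + cv2.threshold."""
    diff = np.abs(frame1.astype(np.int16) - frame2.astype(np.int16))
    return np.where(diff > thresh, 255, 0).astype(np.uint8)

# Two hypothetical 4x4 grayscale frames: a bright "walker" pixel moves one step right.
f1 = np.zeros((4, 4), dtype=np.uint8); f1[1, 1] = 200
f2 = np.zeros((4, 4), dtype=np.uint8); f2[1, 2] = 200

mask = motion_mask(f1, f2)
print(np.argwhere(mask == 255))  # [[1 1] [1 2]]: where the walker left and where it arrived
```

The erosion and dilation steps in the notebook then clean up this binary mask before it is overlaid on the frame.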
# Face Detection

Face detection, as the name suggests, means finding faces in an image. It is a classic object-detection problem in computer vision. Classic algorithms such as Viola-Jones have long been built into OpenCV and were for a long time the default way to detect faces with it. OpenCV's newly released version 4.5.4, however, ships a brand-new neural-network-based face detector. This notebook demonstrates how to use it.

## Setup

First, load the required packages and check the OpenCV version.

If you have not installed OpenCV yet, you can install it with:

```bash
pip install opencv-python
```

```
import cv2
from PIL import Image

print(f"OpenCV 4.5.4 or later is required. Current version: {cv2.__version__}")
```

Please download the model file from the address below and place it in the current directory.

Model download: https://github.com/ShiqiYu/libfacedetection.train/tree/master/tasks/task1/onnx

The current directory is:

```
!pwd
```

## Building the detector

The detector is built with `FaceDetectorYN_create`, which takes three required arguments:

- `model`: path to the ONNX model
- `config`: configuration (can be left empty when using ONNX)
- `input_size`: size of the input image. If the size is unknown at construction time, it can be set before running detection.

```
face_detector = cv2.FaceDetectorYN_create("yunet.onnx", "", (0, 0))
print("Detector built.")
```

## Running detection

Once the detector is built, faces can be detected with the `detect` method. Note that if the input size was not specified at construction time, set it with `setInputSize` before calling `detect`.

```
# Read the image to detect. Image credit: @anyataylorjoy on Instagram
image = cv2.imread("queen.jpg")

# Get the image size and configure the detector
height, width, _ = image.shape
face_detector.setInputSize((width, height))

# Run detection
result, faces = face_detector.detect(image)
print("Detection finished.")
```

## Drawing the results

First, print the raw detection results.

```
print(faces)
```

The result is a nested array. The outer dimension is the number of detected faces. Each detection contains 15 numbers with the following meaning:

| Index | Meaning |
| --- | --- |
| 0 | face box x |
| 1 | face box y |
| 2 | face box width |
| 3 | face box height |
| 4 | left eye pupil x |
| 5 | left eye pupil y |
| 6 | right eye pupil x |
| 7 | right eye pupil y |
| 8 | nose tip x |
| 9 | nose tip y |
| 10 | left mouth corner x |
| 11 | left mouth corner y |
| 12 | right mouth corner x |
| 13 | right mouth corner y |
| 14 | face confidence score |

Next, draw these coordinates on the image one by one.

### Drawing the face box

OpenCV provides the `rectangle` and `circle` functions for drawing boxes and dots on an image. Start by drawing the face box with `rectangle`.

```
# Take the first detection and cast the coordinates to integers for drawing.
face = faces[0].astype(int)

# Get the position and size of the face box.
x, y, w, h = face[:4]

# Draw the box on the image.
image_with_marks = cv2.rectangle(image, (x, y), (x+w, y+h), (255, 255, 255))

# Show the result
display(Image.fromarray(cv2.cvtColor(image_with_marks, cv2.COLOR_BGR2RGB)))
```

### Drawing the facial landmarks

```
# Draw the pupils
left_eye_x, left_eye_y, right_eye_x, right_eye_y = face[4:8]
cv2.circle(image_with_marks, (left_eye_x, left_eye_y), 2, (0, 255, 0), -1)
cv2.circle(image_with_marks, (right_eye_x, right_eye_y), 2, (0, 255, 0), -1)

# Draw the nose tip
nose_x, nose_y = face[8:10]
cv2.circle(image_with_marks, (nose_x, nose_y), 2, (0, 255, 0), -1)

# Draw the mouth corners
mouth_left_x, mouth_left_y, mouth_right_x, mouth_right_y = face[10:14]
cv2.circle(image_with_marks, (mouth_left_x, mouth_left_y), 2, (0, 255, 0), -1)
cv2.circle(image_with_marks, (mouth_right_x, mouth_right_y), 2, (0, 255, 0), -1)

# Show the result
display(Image.fromarray(cv2.cvtColor(image_with_marks, cv2.COLOR_BGR2RGB)))
```

## Performance test

A face detector is likely to be used in real-time scenarios, where performance cannot be ignored. The code below measures how fast the new detector runs on the current machine.

```
tm = cv2.TickMeter()
for _ in range(1000):
    tm.start()
    _ = face_detector.detect(image)
    tm.stop()

print(f"Detection speed: {tm.getFPS():.0f} FPS")
```

## Summary

The face detector shipped with OpenCV 4.5.4 uses a neural-network-based approach. Compared with the earlier Viola-Jones-based approach, it additionally provides the positions of the facial landmarks. It is worth considering as the default face-detection method.
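The 15-number layout in the table above can be unpacked into named fields. A minimal sketch (pure Python; the helper name and the sample row are mine, for illustration):

```python
def parse_detection(row):
    """Split one 15-element FaceDetectorYN detection row into box, landmarks and score."""
    keys = ["left_eye", "right_eye", "nose", "mouth_left", "mouth_right"]
    return {
        "box": tuple(row[0:4]),  # x, y, width, height
        "landmarks": {k: (row[4 + 2*i], row[5 + 2*i]) for i, k in enumerate(keys)},
        "score": row[14],
    }

# Hypothetical detection row, in the index order documented above.
face = parse_detection([10, 20, 100, 120, 40, 50, 80, 50, 60, 70, 45, 90, 75, 90, 0.93])
print(face["box"], face["landmarks"]["nose"], face["score"])  # (10, 20, 100, 120) (60, 70) 0.93
```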
``` # This Python 3 environment comes with many helpful analytics libraries installed # It is defined by the kaggle/python docker image: https://github.com/kaggle/docker-python # For example, here's several helpful packages to load in import numpy as np # linear algebra import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv) from fastai.vision import * from fastai import * import os from collections import defaultdict ``` ### Set up paths ``` train_pd = pd.read_csv('/root/.fastai/data/severstal/train.csv') train_pd.head(5) path = Path('/root/.fastai/data/severstal') path.ls() train_images = get_image_files(path/'train_images') train_images[:3] ``` ### Check maximum size of images ``` def check_img_max_size(folder): max_height = 0 max_width = 0 for train_image in train_images: img = open_image(train_image) if max_height < img.shape[1]: max_height = img.shape[1] if max_width < img.shape[2]: max_width = img.shape[2] return max_height, max_width def show_image(images, index): img_f = images[index] print(type(img_f)) img = open_image(img_f) print(img) img.show(figsize=(5,5)) mask_path = Path('/kaggle/mask') if not os.path.exists(mask_path): os.makedirs(str(mask_path)) def convert_encoded_to_array(encoded_pixels): pos_array = [] len_array = [] splits = encoded_pixels.split() pos_array = [int(n) - 1 for i, n in enumerate(splits) if i % 2 == 0] len_array = [int(n) for i, n in enumerate(splits) if i % 2 == 1] return pos_array, len_array def convert_to_pair(pos_array, rows): return [(p % rows, p // rows) for p in pos_array] def create_positions(single_pos, size): return [i for i in range(single_pos, single_pos + size)] def create_positions_pairs(single_pos, size, row_size): return convert_to_pair(create_positions(single_pos, size), row_size) def convert_to_mask(encoded_pixels, row_size, col_size, category): pos_array, len_array = convert_encoded_to_array(encoded_pixels) mask = np.zeros([row_size, col_size]) for(p, l) in zip(pos_array, len_array): for row, col 
in create_positions_pairs(p, l, row_size): mask[row][col] = category return mask def save_to_image(masked, image_name): im = PIL.Image.fromarray(masked) im = im.convert("L") image_name = re.sub(r'(.+)\.jpg', r'\1', image_name) + ".png" real_path = mask_path/image_name im.save(real_path) return real_path def open_single_image(path): img = open_image(path) img.show(figsize=(20,20)) def get_y_fn(x): return mask_path/(x.stem + '.png') def group_by(train_images, train_pd): tran_dict = {image.name:[] for image in train_images} pattern = re.compile('(.+)_(\d+)') for index, image_path in train_pd.iterrows(): m = pattern.match(image_path['ImageId_ClassId']) file_name = m.group(1) category = m.group(2) tran_dict[file_name].append((int(category), image_path['EncodedPixels'])) return tran_dict def display_image_with_mask(img_name): full_image = path/'train_images'/img_name print(full_image) open_single_image(full_image) mask_image = get_y_fn(full_image) mask = open_mask(mask_image) print(full_image) mask.show(figsize=(20, 20), alpha=0.5) grouped_categories_mask = group_by(train_images, train_pd) ``` ### Create mask files and save these to kaggle/mask/ ``` image_height = 256 image_width = 1600 for image_name, cat_list in grouped_categories_mask.items(): masked = np.zeros([image_height, image_width]) for cat_mask in cat_list: encoded_pixels = cat_mask[1] if pd.notna(cat_mask[1]): masked += convert_to_mask(encoded_pixels, image_height, image_width, cat_mask[0]) if np.amax(masked) > 4: print(f'Check {image_name} for max category {np.amax(masked)}') save_to_image(masked, image_name) ``` ### Prepare Transforms ``` def limited_dihedral_affine(k:partial(uniform_int,0,3)): "Randomly flip `x` image based on `k`." 
x = -1 if k&1 else 1 y = -1 if k&2 else 1 if k&4: return [[0, x, 0.], [y, 0, 0], [0, 0, 1.]] return [[x, 0, 0.], [0, y, 0], [0, 0, 1.]] dihedral_affine = TfmAffine(limited_dihedral_affine) def get_extra_transforms(max_rotate:float=3., max_zoom:float=1.1, max_lighting:float=0.2, max_warp:float=0.2, p_affine:float=0.75, p_lighting:float=0.75, xtra_tfms:Optional[Collection[Transform]]=None)->Collection[Transform]: "Utility func to easily create a list of flip, rotate, `zoom`, warp, lighting transforms." p_lightings = [p_lighting, p_lighting + 0.2, p_lighting + 0.4, p_lighting + 0.6, p_lighting + 0.7] max_lightings = [max_lighting, max_lighting + 0.2, max_lighting + 0.4, max_lighting + 0.6, max_lighting + 0.7] res = [rand_crop(), dihedral_affine(), symmetric_warp(magnitude=(-max_warp,max_warp), p=p_affine), rotate(degrees=(-max_rotate,max_rotate), p=p_affine), rand_zoom(scale=(1., max_zoom), p=p_affine)] res.extend([brightness(change=(0.5*(1-mp[0]), 0.5*(1+mp[0])), p=mp[1]) for mp in zip(max_lightings, p_lightings)]) res.extend([contrast(scale=(1-mp[0], 1/(1-mp[0])), p=mp[1]) for mp in zip(max_lightings, p_lightings)]) # train , valid return (res, [crop_pad()]) ``` ### Prepare data bunch ``` train_images = (path/'train_images').ls() src_size = np.array(open_image(str(train_images[0])).shape[1:]) valid_pct = 0.10 codes = array(['0', '1', '2', '3', '4']) src = (SegmentationItemList.from_folder(path/'train_images') .split_by_rand_pct(valid_pct=valid_pct) .label_from_func(get_y_fn, classes=codes)) bs = 4 size = src_size//2 data = (src.transform(get_extra_transforms(), size=size, tfm_y=True) .add_test(ImageList.from_folder(path/'test_images'), tfms=None, tfm_y=False) .databunch(bs=bs) .normalize(imagenet_stats)) ``` ### Create learner and training Starting with low resolution training ##### Some metrics functions ``` name2id = {v:k for k,v in enumerate(codes)} void_code = name2id['0'] def acc_camvid(input, target): target = target.squeeze(1) mask = target != void_code 
argmax = (input.argmax(dim=1)) comparison = argmax[mask]==target[mask] return torch.tensor(0.) if comparison.numel() == 0 else comparison.float().mean() def acc_camvid_with_zero_check(input, target): target = target.squeeze(1) argmax = (input.argmax(dim=1)) batch_size = input.shape[0] total = torch.empty([batch_size]) for b in range(batch_size): if(torch.sum(argmax[b]).item() == 0.0 and torch.sum(target[b]).item() == 0.0): total[b] = 1 else: mask = target[b] != void_code comparison = argmax[b][mask]==target[b][mask] total[b] = torch.tensor(0.) if comparison.numel() == 0 else comparison.float().mean() return total.mean() def calc_dice_coefficients(argmax, target, cats): def calc_dice_coefficient(seg, gt, cat: int): mask_seg = seg == cat mask_gt = gt == cat sum_seg = torch.sum(mask_seg.float()) sum_gt = torch.sum(mask_gt.float()) if sum_seg + sum_gt == 0: return torch.tensor(1.0) return (torch.sum((seg[gt == cat] / cat).float()) * 2.0) / (sum_seg + sum_gt) total_avg = torch.empty([len(cats)]) for i, c in enumerate(cats): total_avg[i] = calc_dice_coefficient(argmax, target, c) return total_avg.mean() def dice_coefficient(input, target): target = target.squeeze(1) argmax = (input.argmax(dim=1)) batch_size = input.shape[0] cats = [1, 2, 3, 4] total = torch.empty([batch_size]) for b in range(batch_size): total[b] = calc_dice_coefficients(argmax[b], target[b], cats) return total.mean() def calc_dice_coefficients_2(argmax, target, cats): def calc_dice_coefficient(seg, gt, cat: int): mask_seg = seg == cat mask_gt = gt == cat sum_seg = torch.sum(mask_seg.float()) sum_gt = torch.sum(mask_gt.float()) return (torch.sum((seg[gt == cat] / cat).float())), (sum_seg + sum_gt) total_avg = torch.empty([len(cats), 2]) for i, c in enumerate(cats): total_avg[i][0], total_avg[i][1] = calc_dice_coefficient(argmax, target, c) total_sum = total_avg.sum(axis=0) if (total_sum[1] == 0.0): return torch.tensor(1.0) return total_sum[0] * 2.0 / total_sum[1] def dice_coefficient_2(input, target): 
target = target.squeeze(1) argmax = (input.argmax(dim=1)) batch_size = input.shape[0] cats = [1, 2, 3, 4] total = torch.empty([batch_size]) for b in range(batch_size): total[b] = calc_dice_coefficients_2(argmax[b], target[b], cats) return total.mean() def accuracy_simple(input, target): target = target.squeeze(1) return (input.argmax(dim=1)==target).float().mean() def dice_coeff(pred, target): smooth = 1. num = pred.size(0) m1 = pred.view(num, -1) # Flatten m2 = target.view(num, -1) # Flatten intersection = (m1 * m2).sum() return (2. * intersection + smooth) / (m1.sum() + m2.sum() + smooth) ``` ##### The main training function ``` from fastai import callbacks def train_learner(learn, slice_lr, epochs=10, pct_start=0.8, best_model_name='best_model', patience_early_stop=4, patience_reduce_lr = 3): learn.fit_one_cycle(epochs, slice_lr, pct_start=pct_start, callbacks=[callbacks.SaveModelCallback(learn, monitor='dice_coefficient',mode='max', name=best_model_name), callbacks.EarlyStoppingCallback(learn=learn, monitor='dice_coefficient', patience=patience_early_stop), callbacks.ReduceLROnPlateauCallback(learn=learn, monitor='dice_coefficient', patience=patience_reduce_lr), callbacks.TerminateOnNaNCallback()]) metrics=accuracy_simple, acc_camvid_with_zero_check, dice_coefficient, dice_coefficient_2 wd=1e-2 learn = unet_learner(data, models.resnet34, metrics=metrics, wd=wd, bottle=True) learn.loss_func = CrossEntropyFlat(axis=1, weight=torch.tensor([1.5, .5, .5, .5, .5]).cuda()) learn.loss_func learn.model_dir = Path('/kaggle/model') learn = to_fp16(learn, loss_scale=4.0) lr_find(learn, num_it=300) learn.recorder.plot() lr=1e-04 train_learner(learn, slice(lr), epochs=12, pct_start=0.8, best_model_name='bestmodel-frozen-1', patience_early_stop=4, patience_reduce_lr = 3) learn.save('stage-1') learn.load('bestmodel'); learn.export(file='/kaggle/model/export-1.pkl') learn.unfreeze() lrs = slice(lr/100,lr) train_learner(learn, lrs, epochs=10, pct_start=0.8, 
best_model_name='bestmodel-unfrozen-1', patience_early_stop=4, patience_reduce_lr = 3) learn.save('stage-2'); learn.load('bestmodel-unfrozen-1-mini'); learn.save('stage-2'); learn.export(file='/kaggle/model/export-2.pkl') ``` ### Go Large ``` src = (SegmentationItemList.from_folder(path/'train_images') .split_by_rand_pct(valid_pct=valid_pct) .label_from_func(get_y_fn, classes=codes)) data = (src.transform(get_extra_transforms(), size=src_size, tfm_y=True) .add_test(ImageList.from_folder(path/'test_images'), tfms=None, tfm_y=False) .databunch(bs=bs) .normalize(imagenet_stats)) learn = unet_learner(data, models.resnet34, metrics=metrics, wd=wd, bottle=True) learn.model_dir = Path('/kaggle/model') learn.loss_func = CrossEntropyFlat(axis=1, weight=torch.tensor([2.0, .5, .5, .5, .5]).cuda()) learn = to_fp16(learn, loss_scale=4.0) learn.load('stage-2'); lr_find(learn, num_it=400) learn.recorder.plot() lr=1e-05 train_learner(learn, slice(lr), epochs=10, pct_start=0.8, best_model_name='bestmodel-frozen-3', patience_early_stop=4, patience_reduce_lr = 3) learn.save('stage-3'); learn.load('bestmodel-3'); learn.export(file='/kaggle/model/export-3.pkl') learn.unfreeze() lrs = slice(lr/1000,lr/10) train_learner(learn, lrs, epochs=10, pct_start=0.8, best_model_name='bestmodel-4', patience_early_stop=4, patience_reduce_lr = 3) learn.save('stage-4'); learn.load('bestmodel-4'); learn.export(file='/kaggle/model/export-4.pkl') !pwd !cp /kaggle/model/export.pkl /opt/fastai/fastai-exercises/nbs_gil from IPython.display import FileLink FileLink(r'export-4.pkl') ``` ### Inference ``` learn=None gc.collect() test_images = (path/'test_images').ls() !mv /kaggle/model/export-4.pkl /kaggle/model/export.pkl inference_learn = load_learner('/kaggle/model/') def predict(img_path): pred_class, pred_idx, outputs = inference_learn.predict(open_image(str(img_path))) return pred_class, pred_idx, outputs def encode_classes(pred_class_data): pixels = np.concatenate([[0], 
torch.transpose(pred_class_data.squeeze(), 0, 1).flatten(), [0]]) classes_dict = {1: [], 2: [], 3: [], 4: []} count = 0 previous = pixels[0] for i, val in enumerate(pixels): if val != previous: if previous in classes_dict: classes_dict[previous].append((i - count, count)) count = 0 previous = val count += 1 return classes_dict def convert_classes_to_text(classes_dict, clazz): return ' '.join([f'{v[0]} {v[1]}' for v in classes_dict[clazz]]) image_to_predict = train_images[16].name display_image_with_mask(image_to_predict) pred_class, pred_idx, outputs = predict(path/f'train_images/{image_to_predict}') pred_class torch.transpose(pred_class.data.squeeze(), 0, 1).shape ``` #### Checking encoding methods ``` encoded_all = encode_classes(pred_class.data) print(convert_classes_to_text(encoded_all, 3)) image_name = train_images[16] print(get_y_fn(image_name)) img = open_mask(get_y_fn(image_name)) img_data = img.data print(convert_classes_to_text(encode_classes(img_data), 3)) img_data.shape ``` ### Loop through the test images and create submission csv ``` import time start_time = time.time() defect_classes = [1, 2, 3, 4] with open('submission.csv', 'w') as submission_file: submission_file.write('ImageId_ClassId,EncodedPixels\n') for i, test_image in enumerate(test_images): pred_class, pred_idx, outputs = predict(test_image) encoded_all = encode_classes(pred_class.data) for defect_class in defect_classes: submission_file.write(f'{test_image.name}_{defect_class},{convert_classes_to_text(encoded_all, defect_class)}\n') if i % 5 == 0: print(f'Processed {i} images\r', end='') print(f"--- {time.time() - start_time} seconds ---") ``` ### Alternative prediction methods ``` preds,y = learn.get_preds(ds_type=DatasetType.Test, with_loss=False) preds.shape pred_class_data = preds.argmax(dim=1) len((path/'test_images').ls()) data.test_ds.x ```
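The `EncodedPixels` strings handled by `convert_encoded_to_array` and `encode_classes` above are column-major run-length encodings: 1-based "start length" pairs over the image flattened down the columns. A minimal decode sketch (NumPy; the helper name and toy string are mine):

```python
import numpy as np

def rle_decode(encoded, rows, cols):
    """Decode 'start length ...' pairs (1-based, column-major) into a binary mask."""
    flat = np.zeros(rows * cols, dtype=np.uint8)
    nums = [int(n) for n in encoded.split()]
    for start, length in zip(nums[0::2], nums[1::2]):
        flat[start - 1:start - 1 + length] = 1
    # Pixels run down the columns first, hence the Fortran order.
    return flat.reshape((rows, cols), order="F")

# Toy 3x3 example: pixels 1-3 fill the first column, pixels 7-8 the top of the third.
mask = rle_decode("1 3 7 2", rows=3, cols=3)
print(mask)
```

The `torch.transpose(...).flatten()` in `encode_classes` serves the same purpose in reverse: it reorders the row-major prediction tensor into the column-major pixel order the competition format expects.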
<a href="https://colab.research.google.com/github/Eurus-Holmes/PyTorch-Tutorials/blob/master/Training_a__Classifier.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

```
%matplotlib inline
```

Training a Classifier
=====================

This is it. You have seen how to define neural networks, compute loss and make updates to the weights of the network.

Now you might be thinking,

What about data?
----------------

Generally, when you have to deal with image, text, audio or video data, you can use standard python packages that load data into a numpy array. Then you can convert this array into a ``torch.*Tensor``.

- For images, packages such as Pillow and OpenCV are useful
- For audio, packages such as scipy and librosa
- For text, either raw Python or Cython based loading, or NLTK and SpaCy are useful

Specifically for vision, we have created a package called ``torchvision``, that has data loaders for common datasets such as Imagenet, CIFAR10, MNIST, etc. and data transformers for images, viz., ``torchvision.datasets`` and ``torch.utils.data.DataLoader``.

This provides a huge convenience and avoids writing boilerplate code.

For this tutorial, we will use the CIFAR10 dataset. It has the classes: ‘airplane’, ‘automobile’, ‘bird’, ‘cat’, ‘deer’, ‘dog’, ‘frog’, ‘horse’, ‘ship’, ‘truck’. The images in CIFAR-10 are of size 3x32x32, i.e. 3-channel color images of 32x32 pixels in size.

Training an image classifier
----------------------------

We will do the following steps in order:

1. Load and normalize the CIFAR10 training and test datasets using ``torchvision``
2. Define a Convolutional Neural Network
3. Define a loss function
4. Train the network on the training data
5. Test the network on the test data

# 1. Loading and normalizing CIFAR10

Using ``torchvision``, it’s extremely easy to load CIFAR10.
```
import torch
import torchvision
import torchvision.transforms as transforms
```

The output of torchvision datasets is PILImage images in the range [0, 1]. We transform them to Tensors of normalized range [-1, 1].

```
transform = transforms.Compose(
    [transforms.ToTensor(),
     transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])

trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
                                        download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
                                          shuffle=True, num_workers=2)

testset = torchvision.datasets.CIFAR10(root='./data', train=False,
                                       download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=4,
                                         shuffle=False, num_workers=2)

classes = ('plane', 'car', 'bird', 'cat',
           'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
```

Let us show some of the training images, for fun.

```
import matplotlib.pyplot as plt
import numpy as np

# functions to show an image
def imshow(img):
    img = img / 2 + 0.5     # unnormalize
    npimg = img.numpy()
    plt.imshow(np.transpose(npimg, (1, 2, 0)))

# get some random training images
dataiter = iter(trainloader)
images, labels = next(dataiter)

# show images
imshow(torchvision.utils.make_grid(images))
# print labels
print(' '.join('%5s' % classes[labels[j]] for j in range(4)))
```

# 2. Define a Convolutional Neural Network
----
Copy the neural network from the Neural Networks section before and modify it to take 3-channel images (instead of the 1-channel images it was defined for).
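When adapting the network, the flattened size fed to the first fully-connected layer has to match the output of the conv/pool stack: the network defined next uses `nn.Linear(16 * 5 * 5, 120)`. That number comes from tracing the spatial size of a 32x32 image through the layers; a quick pure-Python check of the arithmetic (helper names are just for this example):

```python
# Trace the spatial size of a 32x32 CIFAR image through two
# conv(5x5) + maxpool(2x2) stages to see where 16 * 5 * 5 comes from.

def conv_out(size, kernel, stride=1, padding=0):
    return (size + 2 * padding - kernel) // stride + 1

def pool_out(size, kernel=2, stride=2):
    return (size - kernel) // stride + 1

s = 32                 # input image is 3x32x32
s = conv_out(s, 5)     # conv1 with a 5x5 kernel -> 28
s = pool_out(s)        # 2x2 max pool            -> 14
s = conv_out(s, 5)     # conv2 with a 5x5 kernel -> 10
s = pool_out(s)        # 2x2 max pool            -> 5
print(s, 16 * s * s)   # 5 400 -- matching nn.Linear(16 * 5 * 5, 120)
```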
```
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(-1, 16 * 5 * 5)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

net = Net()
```

# 3. Define a Loss function and optimizer
----
Let's use a Classification Cross-Entropy loss and SGD with momentum.

```
import torch.optim as optim

criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
```

# 4. Train the network
----
This is when things start to get interesting. We simply have to loop over our data iterator, feed the inputs to the network, and optimize.

```
for epoch in range(2):  # loop over the dataset multiple times

    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):
        # get the inputs
        inputs, labels = data

        # zero the parameter gradients
        optimizer.zero_grad()

        # forward + backward + optimize
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        # print statistics
        running_loss += loss.item()
        if i % 2000 == 1999:    # print every 2000 mini-batches
            print('[%d, %5d] loss: %.3f' %
                  (epoch + 1, i + 1, running_loss / 2000))
            running_loss = 0.0

print('Finished Training')
```

# 5. Test the network on the test data
----
We have trained the network for 2 passes over the training dataset. But we need to check if the network has learnt anything at all.

We will check this by predicting the class label that the neural network outputs, and checking it against the ground-truth. If the prediction is correct, we add the sample to the list of correct predictions.

Okay, first step. Let us display an image from the test set to get familiar.
```
dataiter = iter(testloader)
images, labels = next(dataiter)

# print images
imshow(torchvision.utils.make_grid(images))
print('GroundTruth: ', ' '.join('%5s' % classes[labels[j]] for j in range(4)))
```

Okay, now let us see what the neural network thinks these examples above are:

```
outputs = net(images)
```

The outputs are energies for the 10 classes. The higher the energy for a class, the more the network thinks that the image is of that particular class. So, let's get the index of the highest energy:

```
_, predicted = torch.max(outputs, 1)

print('Predicted: ', ' '.join('%5s' % classes[predicted[j]] for j in range(4)))
```

The results seem pretty good. Let us look at how the network performs on the whole dataset.

```
correct = 0
total = 0
with torch.no_grad():
    for data in testloader:
        images, labels = data
        outputs = net(images)
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()

print('Accuracy of the network on the 10000 test images: %d %%' % (
    100 * correct / total))
```

That looks waaay better than chance, which is 10% accuracy (randomly picking a class out of 10 classes). Seems like the network learnt something. Hmmm, what are the classes that performed well, and the classes that did not perform well:

```
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
with torch.no_grad():
    for data in testloader:
        images, labels = data
        outputs = net(images)
        _, predicted = torch.max(outputs, 1)
        c = (predicted == labels).squeeze()
        for i in range(4):
            label = labels[i]
            class_correct[label] += c[i].item()
            class_total[label] += 1

for i in range(10):
    print('Accuracy of %5s : %2d %%' % (
        classes[i], 100 * class_correct[i] / class_total[i]))
```

Okay, so what next? How do we run these neural networks on the GPU?

Training on GPU
----------------
Just like how you transfer a Tensor on to the GPU, you transfer the neural net onto the GPU.
Let's first define our device as the first visible cuda device if we have CUDA available:

```
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# Assume that we are on a CUDA machine, then this should print a CUDA device:
print(device)
```

The rest of this section assumes that `device` is a CUDA device.

Then these methods will recursively go over all modules and convert their parameters and buffers to CUDA tensors: `net.to(device)`

Remember that you will have to send the inputs and targets at every step to the GPU too: `inputs, labels = inputs.to(device), labels.to(device)`

Why don't I notice a MASSIVE speedup compared to CPU? Because your network is really small.

**Exercise:** Try increasing the width of your network (argument 2 of the first ``nn.Conv2d``, and argument 1 of the second ``nn.Conv2d`` – they need to be the same number), and see what kind of speedup you get.

**Goals achieved**:

- Understanding PyTorch's Tensor library and neural networks at a high level.
- Train a small neural network to classify images

Training on multiple GPUs
-------------------------
If you want to see even more MASSIVE speedup using all of your GPUs, please check out :doc:`data_parallel_tutorial`.

Where do I go next?
-------------------

- `Train neural nets to play video games`
- `Train a state-of-the-art ResNet network on imagenet`
- `Train a face generator using Generative Adversarial Networks`
- `Train a word-level language model using Recurrent LSTM networks`
- `More examples`
- `More tutorials`
- `Discuss PyTorch on the Forums`
- `Chat with other users on Slack`
# Direct Outcome Prediction Model
Also known as standardization

```
%matplotlib inline
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import GradientBoostingRegressor

from causallib.datasets import load_smoking_weight
from causallib.estimation import Standardization, StratifiedStandardization
from causallib.evaluation import OutcomeEvaluator
```

#### Data: The effect of quitting smoking on weight loss.
The data example is taken from the [Hernan and Robins Causal Inference Book](https://www.hsph.harvard.edu/miguel-hernan/causal-inference-book/)

```
data = load_smoking_weight()
data.X.join(data.a).join(data.y).head()
```

## "Standard" Standardization
A single model is trained with the treatment assignment as an additional feature. During inference, the model assigns a treatment value to all samples, thus predicting the potential outcome of all samples.

```
std = Standardization(LinearRegression())
std.fit(data.X, data.a, data.y)
```

##### Outcome Prediction
The model can be used to predict individual outcomes: the potential outcome under each intervention.

```
ind_outcomes = std.estimate_individual_outcome(data.X, data.a)
ind_outcomes.head()
```

The model can also be used to predict population outcomes by aggregating the individual outcome predictions (e.g., mean or median), via the `agg_func` argument, which defaults to `'mean'`.

```
median_pop_outcomes = std.estimate_population_outcome(data.X, data.a, agg_func="median")
median_pop_outcomes.rename("median", inplace=True)

mean_pop_outcomes = std.estimate_population_outcome(data.X, data.a, agg_func="mean")
mean_pop_outcomes.rename("mean", inplace=True)

pop_outcomes = mean_pop_outcomes.to_frame().join(median_pop_outcomes)
pop_outcomes
```

##### Effect Estimation
Similarly, effect estimation can be done on either the individual or the population level, depending on the outcomes provided.
Population level effect using population outcomes:

```
std.estimate_effect(mean_pop_outcomes[1], mean_pop_outcomes[0])
```

Population level effect using individual outcomes, but asking for aggregation (the default behaviour):

```
std.estimate_effect(ind_outcomes[1], ind_outcomes[0], agg="population")
```

Individual level effect using individual outcomes: since we're using a binary treatment with linear regression in a standard model, the difference is the same for all individuals, and is equal to the coefficient of the treatment variable.

```
print(std.learner.coef_[0])
std.estimate_effect(ind_outcomes[1], ind_outcomes[0], agg="individual").head()
```

Multiple types of effect are also supported:

```
std.estimate_effect(ind_outcomes[1], ind_outcomes[0], agg="individual",
                    effect_types=["diff", "ratio"]).head()
```

### Treatment one-hot encoded
For multi-treatment cases, where treatments are coded as 0, 1, 2, ... but have no ordinal interpretation, it is possible to make the model encode the treatment assignment vector as a one-hot matrix.

```
std = Standardization(LinearRegression(), encode_treatment=True)
std.fit(data.X, data.a, data.y)
pop_outcomes = std.estimate_population_outcome(data.X, data.a, agg_func="mean")
std.estimate_effect(mean_pop_outcomes[1], mean_pop_outcomes[0])
```

## Stratified Standardization
While standardization can be viewed as a **"completely pooled"** estimator, as it includes both treatment groups together, Stratified Standardization can be viewed as a **"completely unpooled"** one, as it completely stratifies the dataset by treatment values and learns a different model for each treatment group.

```
std = StratifiedStandardization(LinearRegression())
std.fit(data.X, data.a, data.y)
```

Checking the core `learner`, we can see that it actually holds two models, indexed by treatment value:

```
std.learner
```

We can apply the same analysis as above.
```
pop_outcomes = std.estimate_population_outcome(data.X, data.a, agg_func="mean")
std.estimate_effect(mean_pop_outcomes[1], mean_pop_outcomes[0])
```

We can see that internally, when asked for some potential outcome, the model simply applies the model trained on the group with that treatment:

```
potential_outcome = std.estimate_individual_outcome(data.X, data.a)[1]
direct_prediction = std.learner[1].predict(data.X)

(potential_outcome == direct_prediction).all()
```

#### Providing a complex scheme of learners
When a single learner is supplied to the stratified standardization above, the model simply duplicates it for each treatment value. However, it is possible to specify a different model for each treatment value explicitly. For example, in cases where the treated are more complex than the untreated (because of, say, the background of those who chose treatment), they can be given a more expressive model:

```
learner = {0: LinearRegression(),
           1: GradientBoostingRegressor()}
std = StratifiedStandardization(learner)
std.fit(data.X, data.a, data.y)
std.learner

ind_outcomes = std.estimate_individual_outcome(data.X, data.a)
ind_outcomes.head()

std.estimate_effect(ind_outcomes[1], ind_outcomes[0])
```

## Evaluation
#### Simple evaluation

```
plots = ["common_support", "continuous_accuracy"]

evaluator = OutcomeEvaluator(std)
evaluator._regression_metrics.pop("msle")  # We have negative values and this metric applies a log transform
results = evaluator.evaluate_simple(data.X, data.a, data.y, plots=plots)
```

Scores are shown for each treatment group separately and also combined:

```
results.scores
```

#### Thorough evaluation

```
plots=["common_support", "continuous_accuracy", "residuals"]

evaluator = OutcomeEvaluator(Standardization(LinearRegression()))
results = evaluator.evaluate_cv(data.X, data.a, data.y, plots=plots)

results.scores

results.models
```
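The outcome-model logic behind "standard" standardization can be sketched without causallib: fit one outcome model with the treatment as an extra feature, then predict both potential outcomes for every sample and average. A NumPy-only illustration on synthetic data (the variable names, data-generating process, and effect size are all made up for the example, and plain least squares stands in for `LinearRegression`):

```python
import numpy as np

# Sketch of "standard" standardization on synthetic data: one linear
# outcome model with the treatment as an extra feature, then predict
# Y(1) and Y(0) for everyone. The simulated treatment effect is 2.0.
rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 2))
a = (rng.random(n) < 1 / (1 + np.exp(-X[:, 0]))).astype(float)  # confounded assignment
y = 2.0 * a + X @ np.array([1.0, -0.5]) + rng.normal(scale=0.1, size=n)

# Fit y ~ [1, X, a] by least squares.
design = np.column_stack([np.ones(n), X, a])
beta, *_ = np.linalg.lstsq(design, y, rcond=None)

# Predict potential outcomes by forcing a=1 and a=0 for every sample.
y1 = np.column_stack([np.ones(n), X, np.ones(n)]) @ beta
y0 = np.column_stack([np.ones(n), X, np.zeros(n)]) @ beta
ate = y1.mean() - y0.mean()
print(round(ate, 2))  # close to 2.0, the simulated effect
```

Because the outcome model here is linear with a single treatment coefficient, the individual-level differences `y1 - y0` are all identical, exactly as noted for the causallib linear model above.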
Getting rid of bottom bands - Jessica's run (run01)
===================================================

Run01 Jessica's runs (360x360x90, her bathymetry and stratification initial files)
--------------------------------------------------------------
Initial stratifications, Depths 162, 315, 705 m, Across-shelf slice 40; T, NO3, S, and velocity plots Run01 and run03 from 180x180x35_BodyForcing_6Tr_LinProfiles

```
#KRM
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.colors as mcolors
from math import *
import scipy.io
import scipy as spy

%matplotlib inline

from netCDF4 import Dataset
import pylab as pl

#'''
#NAME
#    Custom Colormaps for Matplotlib
#PURPOSE
#    This program shows how to implement make_cmap which is a function that
#    generates a colorbar. If you want to look at different color schemes,
#    check out https://kuler.adobe.com/create.
#PROGRAMMER(S)
#    Chris Slocum
#REVISION HISTORY
#    20130411 -- Initial version created
#    20140313 -- Small changes made and code posted online
#    20140320 -- Added the ability to set the position of each color
#'''

def make_cmap(colors, position=None, bit=False):
    #'''
    #make_cmap takes a list of tuples which contain RGB values. The RGB
    #values may either be in 8-bit [0 to 255] (in which case bit must be set to
    #True when called) or arithmetic [0 to 1] (default). make_cmap returns
    #a cmap with equally spaced colors.
    #Arrange your tuples so that the first color is the lowest value for the
    #colorbar and the last is the highest.
    #position contains values from 0 to 1 to dictate the location of each color.
    #'''
    import matplotlib as mpl
    import numpy as np
    import sys  # needed for the sys.exit calls below

    bit_rgb = np.linspace(0,1,256)
    if position is None:
        position = np.linspace(0,1,len(colors))
    else:
        if len(position) != len(colors):
            sys.exit("position length must be the same as colors")
        elif position[0] != 0 or position[-1] != 1:
            sys.exit("position must start with 0 and end with 1")
    if bit:
        for i in range(len(colors)):
            colors[i] = (bit_rgb[colors[i][0]],
                         bit_rgb[colors[i][1]],
                         bit_rgb[colors[i][2]])
    cdict = {'red':[], 'green':[], 'blue':[]}
    for pos, color in zip(position, colors):
        cdict['red'].append((pos, color[0], color[0]))
        cdict['green'].append((pos, color[1], color[1]))
        cdict['blue'].append((pos, color[2], color[2]))

    cmap = mpl.colors.LinearSegmentedColormap('my_colormap',cdict,256)
    return cmap

def unstagger(ugrid, vgrid):
    """Interpolate u and v component values to values at grid cell centres.

    The shapes of the returned arrays are 1 less than those of the input arrays
    in the y and x dimensions.

    :arg ugrid: u velocity component values with axes (..., y, x)
    :type ugrid: :py:class:`numpy.ndarray`

    :arg vgrid: v velocity component values with axes (..., y, x)
    :type vgrid: :py:class:`numpy.ndarray`

    :returns u, v: u and v component values at grid cell centres
    :rtype: 2-tuple of :py:class:`numpy.ndarray`
    """
    u = np.add(ugrid[..., :-1], ugrid[..., 1:]) / 2
    v = np.add(vgrid[..., :-1, :], vgrid[..., 1:, :]) / 2
    return u[..., 1:, :], v[..., 1:]

# Get field from MITgcm netCDF output
# :statefile : string with /path/to/state.0000000000.t001.nc
# :fieldname : string with the variable name as written on the netCDF file ('Temp', 'S', 'Eta', etc.)
def getField(statefile, fieldname):
    StateOut = Dataset(statefile)
    Fld = StateOut.variables[fieldname][:]
    shFld = np.shape(Fld)
    if len(shFld) == 2:
        Fld2 = np.reshape(Fld,(shFld[0],shFld[1])) # reshape to pcolor order
        return Fld2
    elif len(shFld) == 3:
        Fld2 = np.zeros((shFld[0],shFld[1],shFld[2]))
        Fld2 = np.reshape(Fld,(shFld[0],shFld[1],shFld[2])) # reshape to pcolor order
        return Fld2
    elif len(shFld) == 4:
        Fld2 = np.zeros((shFld[0],shFld[1],shFld[2],shFld[3]))
        Fld2 = np.reshape(Fld,(shFld[0],shFld[1],shFld[2],shFld[3])) # reshape to pcolor order
        return Fld2
    else:
        print(' Check size of field ')
```

Inquire variable from NetCDF - RUN01

```
filenameb='/ocean/kramosmu/MITgcm/CanyonUpwelling/360x360x90_BodyForcing_1Tr/run01/mnc_0001/state.0000000000.t001.nc'
StateOutb = Dataset(filenameb)

for dimobj in StateOutb.variables.values():
    print(dimobj)

filename2b='/ocean/kramosmu/MITgcm/CanyonUpwelling/360x360x90_BodyForcing_1Tr/run01/mnc_0001/grid.t001.nc'
GridOutb = Dataset(filename2b)

for dimobj in GridOutb.variables.values():
    print(dimobj)

filename3b='/ocean/kramosmu/MITgcm/CanyonUpwelling/360x360x90_BodyForcing_1Tr/run01/mnc_0001/ptracers.0000000000.t001.nc'
PtracersOutb = Dataset(filename3b)

for dimobj in PtracersOutb.variables.values():
    print(dimobj)

# General input
nx = 360
ny = 360
nz = 90
nta = 10 # t dimension size run 04 and 05 (output every 2 hr for 4.5 days)
ntc = 10 # t dimension size run 06 (output every half-day for 4.5 days)

z = StateOutb.variables['Z']
print(z[:])

Time = StateOutb.variables['T']
print(Time[:])

xc = getField(filenameb, 'XC') # x coords tracer cells
yc = getField(filenameb, 'YC') # y coords tracer cells

print(z[65])

#bathy = getField(filename2, 'Depth')

#plt.rcParams.update({'font.size': 14})
#fig = plt.figure(figsize=(20,15))
#CS = plt.contour(xc,yc,bathy,30,colors='k' )
#plt.clabel(CS,
#           inline=1,
#           fmt='%1.1f',
#           fontsize=14)
#plt.plot(xc[:,:],yc[:,:],linewidth=0.75, linestyle='-', color='0.75')
#plt.xlabel('m',fontsize=14)
#plt.ylabel('m',fontsize=14)
#plt.title('Bathymetry (m) 180x180',fontsize=16)
#plt.show
```

Depth 705 m
============

```
zlev = 65 # 65 corresponds to 710m
timesc = [0,1,2,3,4] # These correspond to 1,2,4,6,8,10 days

ugridb = getField(filenameb,'U')
vgridb = getField(filenameb,'V')

print(np.shape(ugridb))
print(np.shape(vgridb))
```

Get mask from T field (not the best, I know)

```
tempb = 
getField(filenameb, 'Temp')
temp0b = np.ma.masked_values(tempb, 0)
MASKb = np.ma.getmask(temp0b)

#### T controls for plot ####
plt.rcParams.update({'font.size':13})
colorsTemp = [(245.0/255.0,245/255.0,245./255.0), (255/255.0,20/255.0,0)] #(khaki 1246/255.0,143./255.0 ,orangered2)
posTemp = [0, 1]

NumLev = 30 # number of levels for contour

#### PLOT ####
plt.rcParams.update({'font.size':14})
kk=1
fig45=plt.figure(figsize=(18,48))

for tt in timesc :
    ### Temperature run01
    plt.subplot(6,2,kk)
    ax = plt.gca()
    ax.set_facecolor((205/255.0, 201/255.0, 201/255.0))  # set_axis_bgcolor was removed in newer matplotlib
    plt.contourf(xc,yc,temp0b[tt,zlev,:,:],NumLev,cmap=make_cmap(colorsTemp, position=posTemp))
    plt.ticklabel_format(style='sci', axis='x', scilimits=(0,0))
    plt.ticklabel_format(style='sci', axis='y', scilimits=(0,0))
    plt.xlabel('m')
    plt.ylabel('m')
    cb = plt.colorbar()
    cb.set_label(r'$^{\circ}$C',position=(1, 0),rotation=0)
    plt.title(" depth=%1.1f m,%1.1f days " % (z[zlev],tt))
    kk=kk+1
```

NO3 PLOTS

```
#### NO3 controls for plot ####
NO3b = getField(filename3b, 'NO3')
NO3Maskb = np.ma.array(NO3b,mask=MASKb)

colorsNO3 = [(245.0/255.0,245/255.0,245./255.0), (0./255.0,139.0/255.0,69.0/255.0)] #(white-ish, forest green)
posNO3 = [0, 1]

NumLev = 30 # number of levels for contour

#### PLOT ####
plt.rcParams.update({'font.size':14})
kk=1
fig45=plt.figure(figsize=(18,48))

for tt in timesc :
    ### Temperature run06
    plt.subplot(6,2,kk)
    ax = plt.gca()
    ax.set_facecolor((205/255.0, 201/255.0, 201/255.0))  # set_axis_bgcolor was removed in newer matplotlib
    plt.contourf(xc,yc,NO3Maskb[tt,zlev,:,:],NumLev,cmap=make_cmap(colorsNO3, position=posNO3))
    plt.ticklabel_format(style='sci', axis='x', scilimits=(0,0))
    plt.ticklabel_format(style='sci', axis='y', scilimits=(0,0))
    plt.xlabel('m')
    plt.ylabel('m')
    cb = plt.colorbar()
    cb.set_label(r'$Mol/m^3$',position=(1, 0),rotation=0)
    plt.title(" depth=%1.1f m,%1.1f hr " % (z[zlev],tt))
    kk=kk+1
```

Velocity plots

```
#### PLOT ####
plt.rcParams.update({'font.size':14})
kk=1
fig45=plt.figure(figsize=(18,48))

for tt in timesc :
    ### Speed and
    ### vel vectors, run01
    plt.subplot(6,2,kk)
    ax = plt.gca()
    ax.set_facecolor((205/255.0, 201/255.0, 201/255.0))  # set_axis_bgcolor was removed in newer matplotlib

    u2,v2 = unstagger(ugridb[tt,zlev,:,:-1],vgridb[tt,zlev,:-1,:])
    umaskb=np.ma.array(u2,mask=MASKb[tt,zlev,:-1,:-1])
    vmaskb=np.ma.array(v2,mask=MASKb[tt,zlev,:-1,:-1])

    y_slice = yc[:] #np.arange(0, ny-1)
    x_slice = xc[:] #np.arange(0, nx-1)

    arrow_step = 6
    y_slice_a = y_slice[::arrow_step,::arrow_step]
    x_slice_a = x_slice[::arrow_step,::arrow_step]

    Usliceb = umaskb[::arrow_step,::arrow_step]
    Vsliceb = vmaskb[::arrow_step,::arrow_step]

    #print(np.shape(Uslice))
    #print(np.shape(Vslice))
    #print(np.shape(x_slice_a))
    #print(np.shape(y_slice_a))

    spdb = np.sqrt(umaskb**2 + vmaskb**2)

    pos = [0, 1] # to keep white color on zero
    colorsSpd = [(245.0/255.0,245/255.0,245./255.0), (71./255.0,60.0/255.0,139.0/255.0)] #(white-ish, Slate blue 4)

    plt.contourf(xc[:-1,:-1],yc[:-1,:-1],spdb,NumLev,cmap=make_cmap(colorsSpd, position=pos))
    cb = plt.colorbar()
    cb.set_label('m/s', position=(1, 0),rotation=0)
    plt.quiver(y_slice_a,x_slice_a,Usliceb,Vsliceb,pivot='middle')
    plt.xlabel('m')
    plt.ylabel('m')
    kk=kk+1
```

Line plots across-shelf slice at x = 39.37 km (for T)

```
plt.rcParams.update({'font.size':14})
alongshpos = 40
kk=1
fig45=plt.figure(figsize=(27,10))

for ii in timesc:
    posTemp = [0, 1]
    NumLev = 30
    plt.subplot(1,3,kk)
    ax=plt.gca()
    plt.plot(yc[:,0],temp0b[ii,zlev,:,alongshpos],linewidth = 2)
    plt.ylabel('Temperature ($^{\circ}C$)')
    plt.xlabel('m')
    plt.title("z=%1.1f m, x=%1.1f m " % (z[zlev],xc[1,alongshpos]))
    plt.legend(('1 day','2 days','3 days','4 days','5 days'),loc=3)
    kk=2
```

Plot depth vs salinity/temperature

```
z = StateOutb.variables['Z']
print(z[:])

zl = GridOutb.variables['Zl']
print(zl[:])

zp1 = GridOutb.variables['Zp1']
print(zp1[:])

zu = GridOutb.variables['Zu']
print(zu[:])

depth= GridOutb.variables['Depth']

drc = GridOutb.variables['drC']
print(drc[:])

drf = GridOutb.variables['drF']
print(drf[:])

T = getField(filenameb, 'Temp')
S = getField(filenameb,'S')
fig46 = plt.figure(figsize=(10,10)) plt.plot(T[0,:,200,180],z[:],'ro') ```
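The `unstagger` helper defined above simply averages adjacent staggered values onto cell centres and then trims one row and one column. A toy check on small arrays (independent of the MITgcm output files) makes the shape change concrete:

```python
import numpy as np

# unstagger as defined above: average adjacent u values in x and
# adjacent v values in y, then trim to a common (y-1, x-1) grid.
def unstagger(ugrid, vgrid):
    u = np.add(ugrid[..., :-1], ugrid[..., 1:]) / 2
    v = np.add(vgrid[..., :-1, :], vgrid[..., 1:, :]) / 2
    return u[..., 1:, :], v[..., 1:]

ugrid = np.array([[0., 2., 4.],
                  [0., 2., 4.],
                  [0., 2., 4.]])
vgrid = np.array([[1., 1., 1.],
                  [3., 3., 3.],
                  [5., 5., 5.]])
u, v = unstagger(ugrid, vgrid)
print(u)                  # [[1. 3.] [1. 3.]] -- x-averaged, first row trimmed
print(v)                  # [[2. 2.] [4. 4.]] -- y-averaged, first column trimmed
print(u.shape, v.shape)   # (2, 2) (2, 2) -- one less in both y and x
```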
# Classifying Fashion-MNIST

Now it's your turn to build and train a neural network. You'll be using the [Fashion-MNIST dataset](https://github.com/zalandoresearch/fashion-mnist), a drop-in replacement for the MNIST dataset. MNIST is actually quite trivial with neural networks where you can easily achieve better than 97% accuracy. Fashion-MNIST is a set of 28x28 greyscale images of clothes. It's more complex than MNIST, so it's a better representation of the actual performance of your network, and a better representation of datasets you'll use in the real world.

<img src='assets/fashion-mnist-sprite.png' width=500px>

In this notebook, you'll build your own neural network. For the most part, you could just copy and paste the code from Part 3, but you wouldn't be learning. It's important for you to write the code yourself and get it to work. Feel free to consult the previous notebooks though as you work through this.

First off, let's load the dataset through torchvision.

```
import torch
from torchvision import datasets, transforms
import helper

# Define a transform to normalize the data
transform = transforms.Compose([transforms.ToTensor(),
                                transforms.Normalize((0.5,), (0.5,))])

# Download and load the training data
trainset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)

# Download and load the test data
testset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=False, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=64, shuffle=True)
```

Here we can see one of the images.

```
image, label = next(iter(trainloader))
helper.imshow(image[0,:]);
```

## Building the network

Here you should define your network. As with MNIST, each image is 28x28 which is a total of 784 pixels, and there are 10 classes. You should include at least one hidden layer.
We suggest you use ReLU activations for the layers and to return the logits or log-softmax from the forward pass. It's up to you how many layers you add and the size of those layers.

```
# TODO: Define your network architecture here
from torch import nn
import torch.nn.functional as F

# model = nn.Sequential(nn.Linear(784, 256),
#                       nn.ReLU(),
#                       nn.Linear(256, 64),
#                       nn.ReLU(),
#                       nn.Linear(64, 10),
#                       # nn.LogSoftmax(dim = 1),
#                       )

class Network(nn.Module):
    def __init__(self):
        super().__init__()
        self.hidden1 = nn.Linear(784, 256)
        self.hidden2 = nn.Linear(256, 128)
        self.hidden3 = nn.Linear(128, 64)
        self.output = nn.Linear(64, 10)

    def forward(self, x):
        x = F.relu(self.hidden1(x))
        x = F.relu(self.hidden2(x))
        x = F.relu(self.hidden3(x))
        x = F.log_softmax(self.output(x), dim = 1)
        return x
```

# Train the network

Now you should create your network and train it. First you'll want to define [the criterion](http://pytorch.org/docs/master/nn.html#loss-functions) (something like `nn.CrossEntropyLoss`) and [the optimizer](http://pytorch.org/docs/master/optim.html) (typically `optim.SGD` or `optim.Adam`).

Then write the training code. Remember the training pass is a fairly straightforward process:

* Make a forward pass through the network to get the logits
* Use the logits to calculate the loss
* Perform a backward pass through the network with `loss.backward()` to calculate the gradients
* Take a step with the optimizer to update the weights

By adjusting the hyperparameters (hidden units, learning rate, etc), you should be able to get the training loss below 0.4.
```
# TODO: Create the network, define the criterion and optimizer
from torch import optim

model = Network()
# The model returns log-probabilities (log_softmax), so the natural
# pairing is the negative log-likelihood loss
criterion = nn.NLLLoss()
optimizer = optim.Adam(model.parameters(), lr = 0.01)

# TODO: Train the network here
epochs = 5
for i in range(epochs):
    running_loss = 0
    for images, labels in trainloader:
        images = images.view(images.shape[0], -1)
        output = model(images)
        loss = criterion(output, labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
    else:
        print(f"The training loss of {i+1}th epoch is: {running_loss}")

%matplotlib inline
%config InlineBackend.figure_format = 'retina'

import helper

# Test out your network!
dataiter = iter(testloader)
images, labels = next(dataiter)
img = images[0]
# Convert 2D image to 1D vector
img = img.resize_(1, 784)

# TODO: Calculate the class probabilities (softmax) for img
ps = torch.exp(model(img))
print(ps)

# Plot the image and probabilities
helper.view_classify(img.resize_(1, 28, 28), ps, version='Fashion')
```
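A forward pass that ends in `log_softmax` can be paired with either `nn.NLLLoss` or `nn.CrossEntropyLoss`: cross-entropy applies log-softmax internally, and log-softmax is idempotent (its outputs already exponentiate-and-sum to 1), so a second application changes nothing. A quick NumPy check of that identity (illustrative, independent of the notebook):

```python
import numpy as np

# log_softmax is idempotent: applying it to its own output returns the
# same values, because exp(log_softmax(z)) already sums to 1, which makes
# the second normalisation term exactly zero.
def log_softmax(z):
    z = z - z.max()                       # shift for numerical stability
    return z - np.log(np.exp(z).sum())

logits = np.array([2.0, 0.5, -1.0])
once = log_softmax(logits)
twice = log_softmax(once)
print(np.allclose(once, twice))           # True -- second application is a no-op
print(round(np.exp(once).sum(), 6))       # 1.0  -- valid log-probabilities
```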
<a href="https://colab.research.google.com/github/HenrryCordovillo/Redes_Neuronales_con_Python/blob/main/Ejercicio_3%2C_Perceptr%C3%B3n.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

```
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
import matplotlib.pyplot as plt

# load the 4 input combinations of the AND gate
datos_entrenamiento = np.array([[0,0],[0,1],[1,0],[1,1]], "float32")

# and these are the expected outputs, in the same order
datos_etiquetas = np.array([[0],[0],[0],[1]], "float32")

x = datos_entrenamiento[:,0]
y = datos_entrenamiento[:,1]
colors = datos_etiquetas

plt.scatter(x,y,s=100,c=colors)
plt.xlabel("x axis")
plt.ylabel("y axis")
plt.title("Data to classify")
plt.show()

# Model for AND
modelo = Sequential()
modelo.add(Dense(1, input_dim=2, activation='relu'))
modelo.compile(loss='mean_squared_error', optimizer='adam', metrics=['binary_accuracy'])
modelo.fit(datos_entrenamiento, datos_etiquetas, epochs=800)

# AND
scores = modelo.evaluate(datos_entrenamiento, datos_etiquetas)
print("\n%s: %.2f%%" % (modelo.metrics_names[1], scores[1]*100))
print(modelo.predict(datos_entrenamiento).round())

# load the 4 input combinations of the OR gate
datos_entrenamiento = np.array([[0,0],[0,1],[1,0],[1,1]], "float32")

# and these are the expected outputs, in the same order
datos_etiquetas = np.array([[0],[1],[1],[1]], "float32")

x = datos_entrenamiento[:,0]
y = datos_entrenamiento[:,1]
colors = datos_etiquetas

plt.scatter(x,y,s=100,c=colors)
plt.xlabel("x axis")
plt.ylabel("y axis")
plt.title("Data to classify")
plt.show()

# Model for OR
modelo = Sequential()
modelo.add(Dense(1, input_dim=2, activation='relu'))
modelo.compile(loss='mean_squared_error', optimizer='adam', metrics=['binary_accuracy'])
modelo.fit(datos_entrenamiento, datos_etiquetas, epochs=800)

# OR
scores = modelo.evaluate(datos_entrenamiento,
datos_etiquetas)
print("\n%s: %.2f%%" % (modelo.metrics_names[1], scores[1]*100))
print(modelo.predict(datos_entrenamiento).round())

# load the 4 input combinations of the XOR gate
datos_entrenamiento = np.array([[0,0],[0,1],[1,0],[1,1]], "float32")

# and these are the expected outputs, in the same order
datos_etiquetas = np.array([[0],[1],[1],[0]], "float32")

x = datos_entrenamiento[:,0]
y = datos_entrenamiento[:,1]
colors = datos_etiquetas

plt.scatter(x,y,s=100,c=colors)
plt.xlabel("x axis")
plt.ylabel("y axis")
plt.title("Data to classify")
plt.show()

# Model for XOR -- note that a single neuron cannot separate XOR,
# so accuracy is not expected to reach 100% here
modelo = Sequential()
modelo.add(Dense(1, input_dim=2, activation='relu'))
modelo.compile(loss='mean_squared_error', optimizer='adam', metrics=['binary_accuracy'])
modelo.fit(datos_entrenamiento, datos_etiquetas, epochs=800)

# XOR
scores = modelo.evaluate(datos_entrenamiento, datos_etiquetas)
print("\n%s: %.2f%%" % (modelo.metrics_names[1], scores[1]*100))
print(modelo.predict(datos_entrenamiento).round())
```
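Unlike AND and OR, XOR is not linearly separable, so the single-neuron model in the last experiment cannot classify all four points correctly no matter how long it trains. A brute-force check with NumPy makes this concrete (independent of Keras; the weight grid and the `> 0` threshold are illustrative choices):

```python
import numpy as np

# A single linear threshold unit predicts 1 when w1*x1 + w2*x2 + b > 0.
# Grid-searching the weights shows AND and OR reach 4/4 correct, while
# XOR never exceeds 3/4 -- it is not linearly separable.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
gates = {
    'AND': np.array([0, 0, 0, 1]),
    'OR':  np.array([0, 1, 1, 1]),
    'XOR': np.array([0, 1, 1, 0]),
}

grid = np.linspace(-2, 2, 21)  # candidate values for w1, w2 and b
best = {}
for name, target in gates.items():
    best[name] = 0
    for w1 in grid:
        for w2 in grid:
            for b in grid:
                pred = (X @ np.array([w1, w2]) + b > 0).astype(int)
                best[name] = max(best[name], int((pred == target).sum()))

print(best)  # {'AND': 4, 'OR': 4, 'XOR': 3}
```

Adding a hidden layer (i.e., moving from a perceptron to a multi-layer network) is what makes XOR learnable.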
## Observations and Insights

## Dependencies and starter code

```
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats as st
import numpy as np

# Study data files
mouse_metadata = "data/Mouse_metadata.csv"
study_results = "data/Study_results.csv"

# Read the mouse data and the study results
mouse_metadata = pd.read_csv(mouse_metadata)
study_results = pd.read_csv(study_results)

# Combine the data into a single dataset
combined_df = pd.merge(mouse_metadata, study_results, how='outer', on='Mouse ID')
combined_df.head()
```

## Summary statistics

```
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
group_by_regimen = combined_df.groupby("Drug Regimen")

mean = group_by_regimen['Tumor Volume (mm3)'].mean()
median = group_by_regimen['Tumor Volume (mm3)'].median()
var = group_by_regimen['Tumor Volume (mm3)'].var()
sd = group_by_regimen['Tumor Volume (mm3)'].std()
sem = group_by_regimen['Tumor Volume (mm3)'].sem()

summary_statistics = pd.DataFrame({"Mean": mean,
                                   "Median": median,
                                   "Variance": var,
                                   "Standard Deviation": sd,
                                   "SEM": sem})
summary_statistics
```

## Bar plots

```
# Generate a bar plot showing number of data points for each treatment regimen using pandas
combined_df['Drug Regimen'].value_counts().plot(kind='bar')

# Generate a bar plot showing number of data points for each treatment regimen using pyplot
val_counts = combined_df['Drug Regimen'].value_counts()
plt.bar(val_counts.index.values, val_counts.values)
plt.xticks(rotation=90)
```

## Pie plots

```
# Generate a pie plot showing the distribution of female versus male mice using pandas
combined_df['Sex'].value_counts().plot(kind='pie')

# Generate a pie plot showing the distribution of female versus male mice using pyplot
sex_val_counts = combined_df['Sex'].value_counts()
plt.pie(sex_val_counts.values, labels = sex_val_counts.index.values)
```

## Quartiles, outliers and boxplots

```
#
Calculate the final tumor volume of each mouse across four of the most promising treatment regimens. Calculate the IQR and quantitatively determine if there are any potential outliers. outlier_index = combined_df.set_index('Drug Regimen') top4 = outlier_index.loc[['Capomulin', 'Infubinol', 'Ceftamin', 'Ketapril'], ['Tumor Volume (mm3)']] quartiles = top4['Tumor Volume (mm3)'].quantile([.25,.5,.75]) lowerq = quartiles[0.25] upperq = quartiles[0.75] iqr = upperq-lowerq print(f"The lower quartile of Tumor Volume (mm3) is: {lowerq}") print(f"The upper quartile of Tumor Volume (mm3) is: {upperq}") print(f"The interquartile range of Tumor Volume (mm3) is: {iqr}") print(f"The the median of Tumor Volume (mm3) is: {quartiles[0.5]} ") lower_bound = lowerq - (1.5*iqr) upper_bound = upperq + (1.5*iqr) print(f"Values below {lower_bound} could be outliers.") print(f"Values above {upper_bound} could be outliers.") tumor_outlier = top4.loc[(top4['Tumor Volume (mm3)'] < lower_bound) | (top4['Tumor Volume (mm3)'] > upper_bound)] tumor_outlier # Generate a box plot of the final tumor volume of each mouse across four regimens of interest cap = outlier_index.loc['Capomulin','Tumor Volume (mm3)'] ram = outlier_index.loc['Ramicane','Tumor Volume (mm3)'] inf = outlier_index.loc['Infubinol','Tumor Volume (mm3)'] cef = outlier_index.loc['Ceftamin','Tumor Volume (mm3)'] var = [cap, ram,inf, cef] names = ['Capomulin','Ramicane','Infubinol','Ceftamin'] fig1, ax1 = plt.subplots() ax1.set_title('Top 4 Drug Regimens') ax1.set_ylabel('Tumor Volume (mm3)') ax1.boxplot(var) x_axis = (np.arange(len(var))) + 1 tick_locations = [value for value in x_axis] plt.xticks(tick_locations, names, rotation = 'horizontal') plt.show() ``` ## Line and scatter plots ``` # Generate a line plot of time point versus tumor volume for a mouse treated with Capomulin cap_table = combined_df.loc[combined_df['Drug Regimen'] == 'Capomulin'] mouse = cap_table.loc[cap_table['Mouse ID'] == 's185'] plt.plot(mouse['Timepoint'], 
mouse['Tumor Volume (mm3)']) # Generate a scatter plot of mouse weight versus average tumor volume for the Capomulin regimen average = cap_table.groupby(['Mouse ID']).mean() plt.scatter(average['Weight (g)'],average['Tumor Volume (mm3)']) # Calculate the correlation coefficient and linear regression model for mouse weight and average tumor volume for the Capomulin regimen reg_line = st.linregress(average['Weight (g)'],average['Tumor Volume (mm3)']) y_value = average['Weight (g)']*reg_line[0]+reg_line[1] plt.scatter(average['Weight (g)'],average['Tumor Volume (mm3)']) plt.plot(average['Weight (g)'], y_value, color = 'green') ```
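The slope and intercept returned by the regression call above (`reg_line[0]` and `reg_line[1]`) can be sanity-checked on synthetic data with numpy alone. A minimal sketch, in which the toy weights and the exact relation `volume = 2 * weight + 5` are made-up values chosen so the fit is easy to verify:

```python
import numpy as np

# Hypothetical data with a known linear relationship: volume = 2 * weight + 5
weight = np.array([15.0, 17.0, 20.0, 22.0, 25.0])
volume = 2.0 * weight + 5.0

# np.polyfit with deg=1 returns [slope, intercept], the same quantities
# exposed by scipy.stats.linregress as reg_line[0] and reg_line[1]
slope, intercept = np.polyfit(weight, volume, 1)

# Pearson correlation coefficient; exactly +1 for an exact increasing linear relation
r = np.corrcoef(weight, volume)[0, 1]

print(round(slope, 6), round(intercept, 6), round(r, 6))  # → 2.0 5.0 1.0
```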
# Accessing data in a DataSet

After a measurement is completed, all the acquired data and metadata around it are accessible via a `DataSet` object. This notebook presents the useful methods and properties of the `DataSet` object which enable convenient access to the data, parameter information, and more.

For a general overview of the `DataSet` class, refer to [DataSet class walkthrough](DataSet-class-walkthrough.ipynb).

## Preparation: a DataSet from a dummy Measurement

In order to obtain a `DataSet` object, we are going to run a `Measurement` storing some dummy data (see the [Dataset Context Manager](Dataset%20Context%20Manager.ipynb) notebook for more details).

```
import tempfile
import os

import numpy as np

import qcodes
from qcodes import initialise_or_create_database_at, \
    load_or_create_experiment, Measurement, Parameter, \
    Station
from qcodes.dataset.plotting import plot_dataset

db_path = os.path.join(tempfile.gettempdir(), 'data_access_example.db')
initialise_or_create_database_at(db_path)

exp = load_or_create_experiment(experiment_name='greco', sample_name='draco')

x = Parameter(name='x', label='Voltage', unit='V',
              set_cmd=None, get_cmd=None)
t = Parameter(name='t', label='Time', unit='s',
              set_cmd=None, get_cmd=None)
y = Parameter(name='y', label='Voltage', unit='V',
              set_cmd=None, get_cmd=None)
y2 = Parameter(name='y2', label='Current', unit='A',
               set_cmd=None, get_cmd=None)
q = Parameter(name='q', label='Qredibility', unit='$',
              set_cmd=None, get_cmd=None)

meas = Measurement(exp=exp, name='fresco')
meas.register_parameter(x)
meas.register_parameter(t)
meas.register_parameter(y, setpoints=(x, t))
meas.register_parameter(y2, setpoints=(x, t))
meas.register_parameter(q)  # a standalone parameter

x_vals = np.linspace(-4, 5, 50)
t_vals = np.linspace(-500, 1500, 25)

with meas.run() as datasaver:
    for xv in x_vals:
        for tv in t_vals:
            yv = np.sin(2*np.pi*xv)*np.cos(2*np.pi*0.001*tv) + 0.001*tv
            y2v = np.sin(2*np.pi*xv)*np.cos(2*np.pi*0.001*tv + 0.5*np.pi) - 0.001*tv
            datasaver.add_result((x, xv), (t, tv), (y, yv), (y2, y2v))

    q_val = np.max(yv) - np.min(y2v)  # a meaningless value
    datasaver.add_result((q, q_val))

dataset = datasaver.dataset
```

For the sake of demonstrating what kind of data we've produced, let's use `plot_dataset` to make some default plots of the data.

```
plot_dataset(dataset)
```

## DataSet identification

Before we dive into what's in the `DataSet`, let's briefly note how a `DataSet` is identified.

```
dataset.captured_run_id
dataset.exp_name
dataset.sample_name
dataset.name
```

## Parameters in the DataSet

In this section we are getting information about the parameters stored in the given `DataSet`.

> Why is that important? Let's jump into *data*!

As it turns out, just "arrays of numbers" are not enough to reason about a given `DataSet`. Even coming up with a reasonable default plot, which is what `plot_dataset` does, requires information on the `DataSet`'s parameters. In this notebook, we first have a detailed look at what is stored about parameters and how to work with this information. After that, we will cover data access methods.

### Run description

Every dataset comes with a "description" (aka "run description"):

```
dataset.description
```

The description, an instance of `RunDescriber`, is intended to describe the details of a dataset. In future releases of QCoDeS it will likely be expanded. At the moment, it only contains an `InterDependencies_` object under its `interdeps` attribute, which stores all the information about the parameters of the `DataSet`. Let's look into this `InterDependencies_` object.

### Interdependencies

The `Interdependencies_` object inside the run description contains information about all the parameters that are stored in the `DataSet`. The subsections below explain how the individual information about the parameters, as well as their relationships, is captured in the `Interdependencies_` object.
```
interdeps = dataset.description.interdeps
interdeps
```

#### Dependencies, inferences, standalones

Information about every parameter is stored in the form of `ParamSpecBase` objects, and the relationships between parameters are captured via the `dependencies`, `inferences`, and `standalones` attributes. For example, the dataset that we are inspecting contains no inferences, one standalone parameter `q`, and two dependent parameters `y` and `y2`, which both depend on the independent `x` and `t` parameters:

```
interdeps.inferences
interdeps.standalones
interdeps.dependencies
```

`dependencies` is a dictionary of `ParamSpecBase` objects. The keys are dependent parameters (those which depend on other parameters), and the corresponding values in the dictionary are tuples of the independent parameters that the dependent parameter in the key depends on. Colloquially, each key-value pair of the `dependencies` dictionary is sometimes referred to as a "parameter tree".

`inferences` follows the same structure as `dependencies`.

`standalones` is a set, an unordered collection of `ParamSpecBase` objects representing "standalone" parameters, the ones which do not depend on other parameters and which no other parameter depends on.
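The shape of these three attributes can be mimicked with plain Python containers; here is a minimal sketch in which strings stand in for `ParamSpecBase` objects (an illustration of the structure only, not the qcodes API):

```python
# Plain-Python stand-in for the Interdependencies_ attributes described above;
# strings take the place of ParamSpecBase objects
dependencies = {
    'y': ('x', 't'),   # dependent parameter -> tuple of its independent parameters
    'y2': ('x', 't'),
}
inferences = {}        # this example dataset has no inferences
standalones = {'q'}    # a set: no dependencies in either direction

# Each key-value pair of `dependencies` is one "parameter tree"
for dependent, independents in dependencies.items():
    print(f"{dependent} depends on {', '.join(independents)}")
# → y depends on x, t
# → y2 depends on x, t
```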
#### ParamSpecBase objects

A `ParamSpecBase` object contains all the necessary information about a given parameter, for example, its `name` and `unit`:

```
ps = list(interdeps.dependencies.keys())[0]
print(f'Parameter {ps.name!r} is in {ps.unit!r}')
```

The `paramspecs` property returns a tuple of `ParamSpecBase`s for all the parameters contained in the `Interdependencies_` object:

```
interdeps.paramspecs
```

Here's a trivial example of iterating through the dependent parameters of the `Interdependencies_` object and extracting information about them from the `ParamSpecBase` objects:

```
for d in interdeps.dependencies.keys():
    print(f'Parameter {d.name!r} ({d.label}, {d.unit}) depends on:')
    for i in interdeps.dependencies[d]:
        print(f'- {i.name!r} ({i.label}, {i.unit})')
```

#### Other useful methods and properties

The `Interdependencies_` object has a few useful properties and methods which make it easy to work with it and with other `Interdependencies_` and `ParamSpecBase` objects. For example, `non_dependencies` returns a tuple of all dependent parameters together with standalone parameters:

```
interdeps.non_dependencies
```

The `what_depends_on` method allows you to find which parameters depend on a given parameter:

```
t_ps = interdeps.paramspecs[2]
t_deps = interdeps.what_depends_on(t_ps)

print(f'Following parameters depend on {t_ps.name!r} ({t_ps.label}, {t_ps.unit}):')
for t_dep in t_deps:
    print(f'- {t_dep.name!r} ({t_dep.label}, {t_dep.unit})')
```

### Shortcuts to important parameters

For the frequently needed groups of parameters, the `DataSet` object itself provides convenient methods and properties. For example, use the `dependent_parameters` property to get only the dependent parameters of a given `DataSet`:

```
dataset.dependent_parameters
```

This is equivalent to:

```
tuple(dataset.description.interdeps.dependencies.keys())
```

### Note on inferences

Inferences between parameters are a feature that has not been used yet within QCoDeS. The initial concepts around `DataSet` included it in order to link parameters that are not directly dependent on each other in the way that "dependencies" are. It is very likely that "inferences" will eventually be deprecated and removed.

### Note on ParamSpec's

> `ParamSpec`s originate from QCoDeS versions prior to `0.2.0` and for now are kept for backwards compatibility. `ParamSpec`s are completely superseded by the `InterDependencies_`/`ParamSpecBase` bundle and will likely be deprecated in future versions of QCoDeS, together with the `DataSet` methods/properties that return `ParamSpec` objects.

In addition to the `Interdependencies_` object, `DataSet` also holds `ParamSpec` objects (not to be confused with the `ParamSpecBase` objects from above). Similar to the `Interdependencies_` object, the `ParamSpec` objects hold information about parameters and their interdependencies, but in a different way: for a given parameter, the `ParamSpec` object itself contains the names of the parameters that it depends on, while for the `InterDependencies_`/`ParamSpecBase` bundle this information is stored only in the `InterDependencies_` object.
`DataSet` exposes the `paramspecs` property and the `get_parameters()` method, both of which return `ParamSpec` objects for all the parameters of the dataset, and are not recommended for use:

```
dataset.paramspecs
dataset.get_parameters()
dataset.parameters
```

To give an example of what it takes to work with `ParamSpec` objects as opposed to the `Interdependencies_` object, here's a function that one needs to write in order to find the standalone `ParamSpec`s from a given list of `ParamSpec`s:

```
def get_standalone_parameters(paramspecs):
    all_independents = set(spec.name
                           for spec in paramspecs
                           if len(spec.depends_on_) == 0)
    used_independents = set(d
                            for spec in paramspecs
                            for d in spec.depends_on_)
    standalones = all_independents.difference(used_independents)
    return tuple(ps for ps in paramspecs if ps.name in standalones)

all_parameters = dataset.get_parameters()
standalone_parameters = get_standalone_parameters(all_parameters)
standalone_parameters
```

## Getting data from DataSet

In this section, methods for retrieving the actual data from the `DataSet` are discussed.

### `get_parameter_data` - the powerhorse

`DataSet` provides one main method of accessing data - `get_parameter_data`. It returns data for groups of a dependent parameter and its independent parameters in the form of a nested dictionary of `numpy` arrays:

```
dataset.get_parameter_data()
```

#### Avoid excessive calls to loading data

Note that this call actually reads the data of the `DataSet`, and in the case of a `DataSet` with a lot of data it can take a noticeable amount of time. Hence, it is recommended to limit the number of times the same data gets loaded in order to speed up the user's code.

#### Loading data of selected parameters

Sometimes data needs to be loaded for only a particular parameter or parameters. For example, let's assume that after inspecting the `InterDependencies_` object from `dataset.description.interdeps`, we concluded that we want to load the data of the `q` parameter and the `y2` parameter.
In order to do that, we just pass the names of these parameters, or their `ParamSpecBase`s, to the `get_parameter_data` call:

```
q_param_spec = list(interdeps.standalones)[0]
q_param_spec

y2_param_spec = interdeps.non_dependencies[-1]
y2_param_spec

dataset.get_parameter_data(q_param_spec, y2_param_spec)
```

### `get_data_as_pandas_dataframe` - for `pandas` fans

`DataSet` also provides a method for accessing data as `pandas` objects - `get_data_as_pandas_dataframe`. It returns data for groups of a dependent parameter and its independent parameters in the form of a dictionary of `pandas.DataFrame`s:

```
dfs = dataset.get_data_as_pandas_dataframe()

# For the sake of making this article more readable,
# we will print the contents of the `dfs` dictionary
# manually by calling `.head()` on each of the DataFrames
for parameter_name, df in dfs.items():
    print(f"DataFrame for parameter {parameter_name}")
    print("-----------------------------")
    print(f"{df.head()!r}")
    print("")
```

Similar to `get_parameter_data`, `get_data_as_pandas_dataframe` also supports retrieving data for given parameter(s), as well as `start`/`stop` arguments. `get_data_as_pandas_dataframe` is implemented on top of `get_parameter_data`, hence the performance considerations mentioned above for `get_parameter_data` apply to `get_data_as_pandas_dataframe` as well.

For more details on `get_data_as_pandas_dataframe`, refer to the [Working with pandas and xarray article](Working-With-Pandas-and-XArray.ipynb).

### Data extraction into "other" formats

If the user desires to export a QCoDeS `DataSet` into a format that is not readily supported by `DataSet` methods, we recommend using `get_data_as_pandas_dataframe` first, and then converting the resulting `DataFrame`s into the desired format. This is because the `pandas` package already implements conversion of a `DataFrame` to various popular formats, including comma-separated text files (`.csv`), HDF (`.hdf5`), xarray, Excel (`.xls`, `.xlsx`), and more; refer to the [Working with pandas and xarray article](Working-With-Pandas-and-XArray.ipynb) and the [`pandas` documentation](https://pandas.pydata.org/pandas-docs/stable/reference/frame.html#serialization-io-conversion) for more information.

Nevertheless, `DataSet` also provides the following convenient methods:

* `DataSet.write_data_to_text_file`

Refer to the docstrings of those methods for more information on how to use them.

### Not recommended data access methods

The following three methods of accessing data in a dataset are not recommended for use, and will be deprecated soon:

* `DataSet.get_data`
* `DataSet.get_values`
* `DataSet.get_setpoints`
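As a concrete illustration of the recommended export route (DataFrame first, then pandas' own serializers), here is a hedged sketch that writes a toy `DataFrame` to CSV. The column and index names are made-up placeholders shaped like the per-parameter frames described above, not actual output from this dataset:

```python
import io

import pandas as pd

# Hypothetical stand-in for one of the DataFrames returned by
# get_data_as_pandas_dataframe(): setpoints as index, dependent data as a column
df = pd.DataFrame({'y': [0.1, 0.2, 0.3]},
                  index=pd.Index([-4.0, 0.0, 4.0], name='x'))

# pandas handles the serialization; a StringIO target keeps the sketch file-free
buf = io.StringIO()
df.to_csv(buf)
print(buf.getvalue())
```

The same `df.to_excel(...)`, `df.to_hdf(...)`, or `df.to_xarray()` calls cover the other formats mentioned above.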
# Inverse Analysis of Turbidites by Machine Learning Technique

# Preprocessing of training and test data sets

```
import numpy as np
import os
import ipdb

def connect_dataset(dist_start, dist_end, file_list, outputdir,
                    topodx=5, offset=5000, gclass_num=4, test_data_num=100):
    """
    Connect multiple raw data files to produce the training and test data sets
    """

    # Define start and end points in the data sets
    prox = np.round((dist_start + offset) / topodx).astype(np.int32)
    dist = np.round((dist_end + offset) / topodx).astype(np.int32)

    H = np.zeros([0, (dist - prox) * gclass_num])
    icond = np.zeros([0, gclass_num + 3])

    # Read files and combine them
    for i in range(len(file_list)):
        H_temp = np.loadtxt(file_list[i] + '/H1.txt', delimiter=',')[:, prox:dist]
        for j in range(2, gclass_num + 1):
            H_next = np.loadtxt(file_list[i] + '/H{}.txt'.format(j),
                                delimiter=',')[:, prox:dist]
            H_temp = np.concatenate([H_temp, H_next], axis=1)
        icond_temp = np.loadtxt(file_list[i] + '/initial_conditions.txt',
                                delimiter=',')
        if icond_temp.shape[0] != H_temp.shape[0]:
            icond_temp = icond_temp[:-1, :]
        H = np.concatenate((H, H_temp), axis=0)
        icond = np.concatenate((icond, icond_temp), axis=0)

    # Detect the maximum and minimum values in data sets
    max_x = np.max(H)
    min_x = np.min(H)
    icond_max = np.max(icond, axis=0)
    icond_min = np.min(icond, axis=0)

    # Split data for test and training sets
    H_train = H[0:-test_data_num, :]
    H_test = H[H.shape[0] - test_data_num:, :]
    icond_train = icond[0:-test_data_num, :]
    icond_test = icond[H.shape[0] - test_data_num:, :]

    # Save data sets
    if not os.path.exists(outputdir):
        os.mkdir(outputdir)
    np.save(os.path.join(outputdir, 'H_train.npy'), H_train)
    np.save(os.path.join(outputdir, 'H_test.npy'), H_test)
    np.save(os.path.join(outputdir, 'icond_train.npy'), icond_train)
    np.save(os.path.join(outputdir, 'icond_test.npy'), icond_test)
    np.save(os.path.join(outputdir, 'icond_min.npy'), icond_min)
    np.save(os.path.join(outputdir, 'icond_max.npy'), icond_max)
    np.save(os.path.join(outputdir, 'x_minmax.npy'), [min_x, max_x])

if __name__ == "__main__":
    # dist_end = 30000
    original_data_dir = "/home/naruse/public/naruse/TC_training_data_4"
    # parent_dir = "/home/naruse/antidune/Documents/PythonScripts/DeepLearningTurbidite/20201018_30km"
    parent_prefix = "/home/naruse/public/naruse/DeepLearningTurbidite/distance"
    if not os.path.exists(parent_prefix):
        os.mkdir(parent_prefix)

    output_dir = []
    test_distance = [1, 2, 3, 4, 5, 10, 15, 20, 25, 30]
    dist_start = [0]
    # test_distance = [95]
    for i in range(len(test_distance)):
        parent_dir = os.path.join(parent_prefix, str(test_distance[i]))
        if not os.path.exists(parent_dir):
            os.mkdir(parent_dir)
        output_dir.append(os.path.join(parent_dir, "data"))

    file_list = []
    for j in range(1, 23):
        dirname = os.path.join(original_data_dir,
                               "TCModel_for_ML{0:02d}".format(j), "output")
        if os.path.exists(dirname):
            file_list.append(dirname)

    # connect_dataset(dist_start, dist_end, file_list, outputdir, test_data_num=300)
    for k in range(len(test_distance)):
        connect_dataset(dist_start[0] * 1000,
                        (test_distance[k] + dist_start[0]) * 1000,
                        file_list, output_dir[k], test_data_num=300)
```

# Common settings for plotting

```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# settings for plotting
linewidth = 0.5
linestyle = ['-', '--', ':', '-.']
linecolor = ["r", "g", "b", "c", "m", "y", "k"]
lc_id = 0
params = {'legend.fontsize': 5,
          'legend.handlelength': 1.,
          'legend.frameon': False,
          'font.size': 7,
          'font.family': ['sans-serif'],
          'font.sans-serif': ['Arial'],
          'legend.labelspacing': 0.5,
          'legend.handletextpad': 0.5,
          'legend.markerscale': 1.,
          }
plt.rcParams.update(params)
```

# Check basic properties of training data sets

```
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

thick_file = '/home/naruse/public/naruse/DeepLearningTurbidite/fulldata/95/data/H_test.npy'
gclass_num = 4
dx = 5.0
gclass_value = np.array([1.5, 2.5, 3.5, 4.5])
gclass_name = []
for i in range(gclass_num):
    gclass_name.append('{}$\phi$'.format(gclass_value[i]))

H_test = np.load(thick_file)  # data sets for values of volume-per-unit-area of all grain size classes
num_grids = int(H_test.shape[1] / gclass_num)
num_data = H_test.shape[0]

# split data sets for every grain size class
volume_unit_area = np.empty([gclass_num, num_data, num_grids])  # volume-per-unit-area for each grain size class
for i in range(gclass_num):
    volume_unit_area[i, :, :] = H_test[:, i*num_grids:(i+1)*num_grids]
thickness = np.sum(volume_unit_area, axis=0)  # total thickness

# Calculate longitudinal variation of mean grain size
mean_grain_size = np.zeros([num_data, num_grids])
significant_thick = np.where(thickness > 0.01)
for i in range(gclass_num):
    mean_grain_size[significant_thick] += gclass_value[i] * volume_unit_area[i][significant_thick]
mean_grain_size[significant_thick] /= thickness[significant_thick]

# Calculate mean and standard deviation of thickness and maximum reach of beds
mean_max_thick = np.average(np.max(thickness, axis=1))
std_max_thick = np.std(np.max(thickness, axis=1), ddof=1)
x = np.tile(np.arange(0, num_grids * dx, dx), num_data).reshape(num_data, num_grids)
x[thickness < 0.01] = 0
mean_max_reach = np.average(np.max(x, axis=1))
std_max_reach = np.std(np.max(x, axis=1), ddof=1)
print('Mean of maximum thickness of beds: {} m'.format(mean_max_thick))
print('Standard deviation of maximum thickness of beds: {} m'.format(std_max_thick))
print('Mean of maximum reach of bed (> 1cm): {}'.format(mean_max_reach))
print('Standard deviation of maximum reach of bed (> 1cm): {}'.format(std_max_reach))

# plot data sets
xrange = np.array([0, 50000])
xrange_grid = (xrange / dx).astype(np.int32)
x = np.arange(xrange[0], xrange[1], dx)
start_id = 6
num_beds = 4

# settings for plotting
linewidth = 0.5
linestyle = ['-', '--', ':', '-.']
linecolor = ["r", "g", "b", "c", "m", "y", "k"]
lc_id = 0
params = {'legend.fontsize': 5,
          'legend.handlelength': 3,
          'legend.frameon': False,
          'font.size': 7,
          'font.family': ['sans-serif'],
          'font.sans-serif': ['Arial'],
          }
plt.rcParams.update(params)

# Plot results
fig, ax = plt.subplots(2, 1, figsize=(8/2.54, 8/2.54))
plt.subplots_adjust(bottom=0.3, wspace=0.4)
for i in range(start_id, start_id + num_beds):
    ax[0].plot(x / 1000, thickness[i, xrange_grid[0]:xrange_grid[1]],
               lw=linewidth, linestyle=linestyle[(i - start_id) % 4],
               color=linecolor[lc_id % 7],
               label='bed {}'.format(i - start_id + 1))
    lc_id += 1
ax[0].set_xlabel('Distance (km)', fontsize=7)
ax[0].set_ylabel('Thickness (m)', fontsize=7)
ax[0].legend()
ylim = ax[0].get_ylim()
xlim = ax[0].get_xlim()
ax[0].text(xlim[0] - 0.1 * xlim[1], ylim[0] + (ylim[1] - ylim[0])*1.05,
           'a.', fontweight='bold', fontsize=9)

# for k in range(start_id, start_id + num_beds):
#     ax[0,1].plot(x, mean_grain_size[k, xrange_grid[0]:xrange_grid[1]], label='bed{}'.format(k))
# ax[0,1].legend()
# ax[0,1].set_ylim([1.5, 4.5])

for j in range(gclass_num):
    ax[1].plot(x / 1000, volume_unit_area[j, start_id, xrange_grid[0]:xrange_grid[1]],
               lw=linewidth, color=linecolor[lc_id % 7], label=gclass_name[j])
    lc_id += 1
ax[1].set_xlabel('Distance (km)', fontsize=7)
ax[1].set_ylabel('Volume per Unit Area (m)', fontsize=7)
# ax[1].set_xlim(0,)
# ax[1].set_ylim(0,)
ax[1].legend()
ylim = ax[1].get_ylim()
xlim = ax[1].get_xlim()
ax[1].text(xlim[0] - 0.1 * xlim[1], ylim[0] + (ylim[1] - ylim[0])*1.05,
           'b.', fontweight='bold', fontsize=9)

# for j in range(gclass_num):
#     ax[1,1].plot(x, volume_unit_area[j, start_id + 1, xrange_grid[0]:xrange_grid[1]], label=gclass_name[j])
# ax[1,1].legend()

plt.tight_layout()
plt.savefig('tex/fig04.eps')
plt.show()
```

# Show training results depending on number of training data sets and length of sampling window

```
import os
from os.path import join
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

datadir = '/home/naruse/public/naruse/DeepLearningTurbidite/distance'
resdir_train_num = '/home/naruse/public/naruse/DeepLearningTurbidite/result_training_num_10'
resdir_distance = '/home/naruse/public/naruse/DeepLearningTurbidite/result_distance_3500'
base_distance = 10
base_train_num = 3500
case_train_num = [500, 1000, 1500, 2000, 2500, 3000, 3500]
case_distance = [1, 2, 3, 4, 5, 10, 15, 20, 25, 30]

# settings for plotting
linewidth = 0.5
linestyle = ['-', '--', ':', '-.']
linecolor = ["r", "g", "b", "c", "m", "y", "k"]
lc_id = 0
params = {'legend.fontsize': 5,
          'legend.handlelength': 1.,
          'legend.frameon': False,
          'font.size': 7,
          'font.family': ['sans-serif'],
          'font.sans-serif': ['Arial'],
          'legend.labelspacing': 0.5,
          'legend.handletextpad': 0.5,
          'legend.markerscale': 1.,
          }
plt.rcParams.update(params)

# Plot results
fig, ax = plt.subplots(2, 1, figsize=(8/2.54, 8/2.54))
plt.subplots_adjust(bottom=0.3, wspace=0.5)

# Plot results depending on number of training data sets
loss_train_num = []
val_loss_train_num = []
for train_num in case_train_num:
    loss_train_num.append(
        np.loadtxt(join(resdir_train_num, '{}'.format(train_num), 'loss.txt'),
                   delimiter=',')[-1])
    val_loss_train_num.append(
        np.loadtxt(join(resdir_train_num, '{}'.format(train_num), 'val_loss.txt'),
                   delimiter=',')[-1])
ax[0].plot(case_train_num, loss_train_num, 'bo', markerfacecolor='w',
           label='Training', markersize=3)
ax[0].plot(case_train_num, val_loss_train_num, 'ro', markerfacecolor='r',
           label='Validation', markersize=3)
ax[0].set_xlabel('Number of Data Sets', fontsize=7)
ax[0].set_ylabel('Loss function (MSE)', fontsize=7)
ax[0].legend()
ylim = ax[0].get_ylim()
xlim = ax[0].get_xlim()
ax[0].text(xlim[0] - 0.1 * xlim[1], ylim[0] + (ylim[1] - ylim[0])*1.05,
           'a.', fontweight='bold', fontsize=9)

# Plot results depending on lengths of sampling window
loss_distance = []
val_loss_distance = []
for distance in case_distance:
    loss_distance.append(
        np.loadtxt(join(resdir_distance, '{}'.format(distance), 'loss.txt'),
                   delimiter=',')[-1])
    val_loss_distance.append(
        np.loadtxt(join(resdir_distance, '{}'.format(distance), 'val_loss.txt'),
                   delimiter=',')[-1])
ax[1].plot(case_distance, loss_distance, 'go', markerfacecolor='w',
           label='Training', markersize=3)
ax[1].plot(case_distance, val_loss_distance, 'mo', markerfacecolor='m',
           label='Validation', markersize=3)
ax[1].set_xlabel('Length of Sampling Window (km)', fontsize=7)
ax[1].set_ylabel('Loss function (MSE)', fontsize=7)
ax[1].legend()
ylim = ax[1].get_ylim()
xlim = ax[1].get_xlim()
ax[1].text(xlim[0] - 0.1 * xlim[1], ylim[0] + (ylim[1] - ylim[0])*1.05,
           'b.', fontweight='bold', fontsize=9)

# Save figures
plt.tight_layout()
plt.savefig('tex/fig05.eps')
```

# Show test results

```
import os
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import r2_score
from scipy import stats
from sklearn.utils import resample
import pandas as pd
%matplotlib inline

# datadir = '/home/naruse/antidune/Documents/PythonScripts/DeepLearningTurbidite/20180419/data/'
# resdir = '/home/naruse/antidune/Documents/PythonScripts/DeepLearningTurbidite/20180419/result_testGPU_4layers/2670/'
datadir = '/home/naruse/public/naruse/DeepLearningTurbidite/distance/10/data/'
resdir = '/home/naruse/public/naruse/DeepLearningTurbidite/result_training_num_10/3500/'

test_result = np.loadtxt(os.path.join(resdir, 'test_result.txt'), delimiter=',')
icond = np.load(os.path.join(datadir, 'icond_test.npy'))
loss = np.loadtxt(os.path.join(resdir, 'loss.txt'), delimiter=',')
vloss = np.loadtxt(os.path.join(resdir, 'val_loss.txt'), delimiter=',')
epoch = range(0, loss.shape[0])

# Calculate statistics
resi_ratio = (test_result - icond) / icond
resi = test_result - icond
r2value = []
for i in range(icond.shape[1]):
    r2value.append(r2_score(icond[:, i], test_result[:, i]))
mean_bias = np.average(resi, axis=0)
std_bias = np.std(resi, axis=0, ddof=1)
rmse = np.sqrt(np.sum(resi ** 2, axis=0) / resi.shape[0])
mae = np.sum(np.abs(resi), axis=0) / resi.shape[0]
mean_bias_ratio = np.average(resi_ratio, axis=0)
std_bias_ratio = np.std(resi_ratio, axis=0, ddof=1)
rmse_ratio = np.sqrt(np.sum(resi_ratio ** 2, axis=0) / resi_ratio.shape[0])
mae_ratio = np.sum(np.abs(resi_ratio), axis=0) / resi.shape[0]

# make a table for exhibiting statistics
df_stats = pd.DataFrame(
    {
        "R^2": r2value,
        "RMSE": rmse,
        "RMSE (normalized)": rmse_ratio * 100,
        "MAE": mae,
        "MAE (normalized)": mae_ratio * 100,
        "Mean bias": mean_bias,
        "Mean bias (normalized)": mean_bias_ratio * 100,
    },
    index=['Initial height', 'Initial length',
           'C_1', 'C_2', 'C_3', 'C_4', 'S_l']
)
df_stats.loc['C_1':'S_l', ['RMSE', 'MAE', 'Mean bias']] *= 100
print(df_stats.to_latex(float_format='%.2f'))

# Bootstrap resampling
# n = 10000
# resampled_resi = np.empty(resi.shape)
# resampled_mean = np.zeros([n, resi.shape[1]])
# for i in range(resi.shape[1]):
#     for j in range(n):
#         resampled_resi[:,i] = resample(resi_ratio[:,i])
#         resampled_mean[j, i] = np.average(resampled_resi[:,i])

# Bootstrap mean and error range
# mean_bias_bootstrap = np.average(resampled_mean, axis=0)
# lowerbounds_bias_bootstrap = np.percentile(resampled_mean, 2.5, axis=0)
# upperbounds_bias_bootstrap = np.percentile(resampled_mean, 97.5, axis=0)

# settings for plotting
linewidth = 0.5
linestyle = ['-', '--', ':', '-.']
linecolor = ["r", "g", "b", "c", "m", "y", "k"]
lc_id = 0
params = {'legend.fontsize': 5,
          'legend.handlelength': 1.,
          'legend.frameon': False,
          'font.size': 7,
          'font.family': ['sans-serif'],
          'font.sans-serif': ['Arial'],
          'legend.labelspacing': 0.5,
          'legend.handletextpad': 0.5,
          'legend.markerscale': 1.,
          }
plt.rcParams.update(params)

# plot training history
fig, ax = plt.subplots(1, 1, figsize=(8/2.54, 4/2.54))
ax.plot(epoch, loss, 'b-', label='Loss', lw=0.5)
ax.plot(epoch, vloss, 'y-', label='Validation', lw=0.5)
ax.set_xlabel('Epoch')
ax.set_ylabel('Loss function (MSE)')
ax.legend(loc="upper right")
plt.savefig('tex/fig06.eps')
print('Training loss: {}'.format(loss[-1]))
print('Validation loss: {}'.format(vloss[-1]))

hfont = {'fontname': 'Century Gothic'}
textcol = 'k'
titlelabel = ['Initial Length\n(m)', 'Initial Height\n(m)',
              '$C_1$', '$C_2$', '$C_3$', '$C_4$', '$S_L$']

# Scatter plots to compare the predicted values with the true values
fig2, ax2 = plt.subplots(int(len(titlelabel)/2) + 1, 2, figsize=(12/2.54, 19/2.54))
plt.subplots_adjust(wspace=0.1, hspace=0.6)
for i in range(len(titlelabel)):
    x_fig = int(i/2)
    y_fig = i % 2
    ax2[x_fig, y_fig].plot(icond[:, i], test_result[:, i], "o", markersize=1)
    ax2[x_fig, y_fig].plot([0, np.max(test_result[:, i])],
                           [0, np.max(test_result[:, i])], "-", lw=linewidth*2)
    ax2[x_fig, y_fig].set_xlabel('True Value', color=textcol, fontsize=7)
    ax2[x_fig, y_fig].set_ylabel('Estimated Value', color=textcol, fontsize=7)
    ax2[x_fig, y_fig].set_title(titlelabel[i], color=textcol, fontsize=9)
    ax2[x_fig, y_fig].tick_params(colors=textcol, length=2, labelsize=5)
    ax2[x_fig, y_fig].set_aspect('equal')
    xlim = ax2[x_fig, y_fig].get_xlim()
    ylim = ax2[x_fig, y_fig].get_ylim()
    xloc = xlim[0] + (xlim[1] - xlim[0]) * 0.1
    yloc = ylim[0] + (ylim[1] - ylim[0]) * 0.85
    ax2[x_fig, y_fig].text(xloc, yloc, '$R^2 = ${:.3f}'.format(r2value[i]))
# fig.tight_layout()
plt.savefig('tex/fig07.eps')
# plt.show()

# Histograms for prediction errors
fig3, ax3 = plt.subplots(int(len(titlelabel)/2) + 1, 2, figsize=(12/2.54, 16/2.54))
plt.subplots_adjust(wspace=0.5, hspace=0.7)
for i in range(len(titlelabel)):
    x_fig = int(i/2)
    y_fig = i % 2
    ax3[x_fig, y_fig].hist(resi[:, i], bins=20)
    ax3[x_fig, y_fig].set_title(titlelabel[i], color=textcol)
    ax3[x_fig, y_fig].set_xlabel('Deviation from true value', color=textcol, fontsize=7)
    ax3[x_fig, y_fig].set_ylabel('Frequency', color=textcol, fontsize=7)
    ax3[x_fig, y_fig].tick_params(colors=textcol, length=2, labelsize=5)
    # xlim = ax3[x_fig, y_fig].get_xlim()
    # ylim = ax3[x_fig, y_fig].get_ylim()
    # xloc = xlim[0] + (xlim[1] - xlim[0]) * 0.1
    # yloc = ylim[0] + (ylim[1] - ylim[0]) * 0.7
    ax3[x_fig, y_fig].text(0.99, 0.95,
                           'RMSE = {0:.1f} %\n Mean Bias = {1:.1f}'.format(
                               rmse[i] * 100, mean_bias_ratio[i] * 100),
                           # lowerbounds_bias_bootstrap[i] * 100,
                           # upperbounds_bias_bootstrap[i] * 100),
                           horizontalalignment='right',
                           verticalalignment='top',
                           transform=ax3[x_fig, y_fig].transAxes,
                           fontsize=5)
fig.tight_layout()
plt.savefig('tex/fig08.eps')
# plt.show()
```

# Check bias and errors of predicted values

```
from scipy import stats
import numpy as np
from sklearn.utils import resample
import ipdb

resi_ratio = (test_result - icond) / icond
resi = test_result - icond
print("mean bias")
print(np.average(resi, axis=0))
print("2σ of bias")
print(np.std(resi, axis=0, ddof=1) * 2)
print("RMSE")
print(np.sqrt(np.sum(resi**2) / resi.shape[0] / resi.shape[1]))
print("mean bias (ratio)")
print(np.average(resi_ratio, axis=0))
print("2σ of bias (ratio)")
print(np.std(resi_ratio, axis=0, ddof=1) * 2)
print("RMSE (ratio)")
print(np.sqrt(np.sum(resi_ratio**2) / resi_ratio.shape[0] / resi_ratio.shape[1]))
print("p-values of the Shapiro-Wilk test for normality")
for i in range(resi.shape[1]):
    print(stats.shapiro(resi[:, i])[1])

# Bootstrap resampling (this loop is required here: the version in the previous
# cell is commented out, so resampled_mean would otherwise be undefined)
n = 10000
resampled_resi = np.empty(resi.shape)
resampled_mean = np.zeros([n, resi.shape[1]])
for i in range(resi.shape[1]):
    for j in range(n):
        resampled_resi[:, i] = resample(resi_ratio[:, i])
        resampled_mean[j, i] = np.average(resampled_resi[:, i])

# Bootstrap mean and error range
print("mean bias (bootstrap samples)")
print(np.average(resampled_mean, axis=0))
print("2.5 percentile of biases (bootstrap samples)")
print(np.percentile(resampled_mean, 2.5, axis=0))
print("97.5 percentile of biases (bootstrap samples)")
print(np.percentile(resampled_mean, 97.5, axis=0))

# Histograms of bootstrap samples
hfont = {'fontname': 'Century Gothic'}
textcol = 'k'
titlelabel = ['Initial Length', 'Initial Height',
              '$C_1$', '$C_2$', '$C_3$', '$C_4$', '$S_L$']
fig4, ax4 = plt.subplots(int(len(titlelabel)/2) + 1, 2,
                         figsize=(8, 4 * np.ceil(len(titlelabel) / 2)))
plt.subplots_adjust(wspace=0.6, hspace=0.4)
for i in range(len(titlelabel)):
    ax4[int(i/2), i%2].hist(resampled_mean[:, i], bins=20)
    ax4[int(i/2), i%2].set_title(titlelabel[i], color=textcol, size=14, **hfont)
    ax4[int(i/2), i%2].set_xlabel('Bias in Bootstrap sample', color=textcol, size=14, **hfont)
    ax4[int(i/2), i%2].set_ylabel('Frequency', color=textcol, size=14, **hfont)
    ax4[int(i/2), i%2].tick_params(labelsize=14, colors=textcol)
fig.tight_layout()
plt.savefig('hist_bootstrap.pdf')
```

# Compare time evolution of reconstructed parameters with original ones

```
import numpy as np
import matplotlib.pyplot as plt
from os.path import join
from os import mkdir
from scipy.interpolate import interp1d
import pandas as pd
%matplotlib

original_dir = '/home/naruse/antidune/Documents/MATLAB/TCtrainData_forML/TCModel_for_MLTEST/test_output_original5'
estimated_dir = '/home/naruse/antidune/Documents/MATLAB/TCtrainData_forML/TCModel_for_MLTEST/test_output_reconst5'
dist_offset = 5000.
dist_max = 30000.
topodx = 5
grid_origin = int(dist_offset / topodx)
grid_end = int((dist_max + dist_offset) / topodx)
snapshot_time = np.array([2000, 3500, 5000])
time_interval = 200.
time_frame = (snapshot_time / time_interval).astype(np.int64)

icond_estimated = np.loadtxt(join(estimated_dir, 'icond.txt'), delimiter=',')
Ht_estimated = np.loadtxt(join(estimated_dir, 'Ht.txt'), delimiter=',')
Ct_estimated = np.loadtxt(join(estimated_dir, 'Ct.txt'), delimiter=',')
U_estimated = np.loadtxt(join(estimated_dir, 'U.txt'), delimiter=',')
x_estimated = np.loadtxt(join(estimated_dir, 'x.txt'), delimiter=',')
x_bed = np.loadtxt(join(estimated_dir, 'x_init.txt'), delimiter=',')
time_estimated = np.loadtxt(join(estimated_dir, 'time.txt'), delimiter=',')

icond_original = np.loadtxt(join(original_dir, 'icond.txt'), delimiter=',')
Ht_original = np.loadtxt(join(original_dir, 'Ht.txt'), delimiter=',')
Ct_original = np.loadtxt(join(original_dir, 'Ct.txt'), delimiter=',')
U_original = np.loadtxt(join(original_dir, 'U.txt'), delimiter=',')
x_original = np.loadtxt(join(original_dir, 'x.txt'), delimiter=',')
time_original = np.loadtxt(join(original_dir, 'time.txt'), delimiter=',')

print('Reconstructed values: {}'.format(icond_estimated))
print('True values: {}'.format(icond_original))
print('RMSE: {}'.format(np.sqrt(np.sum(((icond_estimated - icond_original)/icond_original)**2)/icond_estimated.shape[0])))

# Make a
table to exhibit true and predicted values of model input parameters df = pd.DataFrame(np.array([[icond_original[:]], [icond_estimated[:]]]).reshape(2, 7), columns=[ 'Initial height (m)', 'Initial length (m)', 'C_1 (%)', 'C_2 (%)', 'C_3 (%)', 'C_4 (%)', 'S_l (%)' ], index=[ 'True input parameters', 'Estimated parameters' ]) df.loc[:, 'C_1 (%)':'S_l (%)'] *= 100 print(df.to_latex(float_format='%.2f')) # settings for plotting linewidth = 0.5 linestyle = ['-', '--', ':', '-.'] linecolor = ["r", "g", "b", "c", "m", "y", "k"] lc_id = 0 params = {'legend.fontsize': 5, 'legend.handlelength': 1., 'legend.frameon': False, 'font.size' : 7, 'font.family': ['sans-serif'], 'font.sans-serif': ['Arial'], 'legend.labelspacing' : 0.5, 'legend.handletextpad' : 0.5, 'legend.markerscale' : 1., } plt.rcParams.update(params) # Plot results fig1, ax1 = plt.subplots(3, 1, figsize=(8/2.54, 12/2.54)) plt.subplots_adjust(bottom=0.3, wspace=0.5) # plot flow velocity for tframe, col in zip(time_frame, linecolor): ax1[0].plot(x_estimated[tframe,:]/1000, U_estimated[tframe,:], '-', color=col, lw=linewidth, label='{} sec.'.format(tframe*time_interval)) ax1[0].plot(x_original[tframe,:]/1000, U_original[tframe,:],'--', color=col, lw=linewidth, label=None) # ax1[0].set_title('Flow Velocity', fontsize=9) ax1[0].set_xlabel('Distance (km)', fontsize = 7) ax1[0].set_ylabel('Velocity (m/s)', fontsize = 7) ax1[0].legend() xlim = ax1[0].get_xlim() ylim = ax1[0].get_ylim() ax1[0].text(xlim[0] - 0.1 * xlim[1], ylim[0] + (ylim[1] - ylim[0])*1.05, 'a.', fontweight='bold', fontsize=9) # plot sediment concentration for tframe, col in zip(time_frame, linecolor): ax1[1].plot(x_estimated[tframe,:]/1000, Ct_estimated[tframe,:] * 100, '-', color=col, lw=linewidth, label='{} sec.'.format(tframe*time_interval)) ax1[1].plot(x_original[tframe,:]/1000, Ct_original[tframe,:] * 100, '--', color=col, lw=linewidth, label=None) # ax1[1].set_title('Total Concentration', fontsize = 9) ax1[1].set_xlabel('Distance (km)', fontsize 
= 7) ax1[1].set_ylabel('Concentration (%)', fontsize = 7) ax1[1].legend() xlim = ax1[1].get_xlim() ylim = ax1[1].get_ylim() ax1[1].text(xlim[0] - 0.1 * xlim[1], ylim[0] + (ylim[1] - ylim[0])*1.05, 'b.', fontweight='bold', fontsize=9) # plot thickness ax1[2].plot(x_bed[grid_origin:grid_end]/1000, Ht_estimated[-1,grid_origin:grid_end],'k--', lw=linewidth, label='Estimated') ax1[2].plot(x_bed[grid_origin:grid_end]/1000, Ht_original[-1,grid_origin:grid_end],'k-', lw=linewidth, label='Original') # ax1[2].set_title('Bed thickness', size = 9, **hfont) ax1[2].set_xlabel('Distance (km)', fontsize = 7) ax1[2].set_ylabel('Thickness (m)', fontsize = 7) xlim = ax1[2].get_xlim() ylim = ax1[2].get_ylim() ax1[2].legend() ax1[2].text(xlim[0] - 0.1 * xlim[1], ylim[0] + (ylim[1] - ylim[0])*1.05, 'c.', fontweight='bold', fontsize=9) # save figure plt.tight_layout() plt.savefig('tex/fig09.eps') # Time evolution at fixed location start = 0.0 endtime = 5000.0 start_d = int(start / time_interval) endtime_d = int(endtime / time_interval) outcrop = np.array([5*1000, 8 * 1000, 10 * 1000]) linecolor = ['r', 'g', 'b'] U_original_loc = np.zeros([len(time_original),len(outcrop)]) U_estimated_loc = np.zeros([len(time_original),len(outcrop)]) if len(time_original) > len(time_estimated): time_length = len(time_estimated) else: time_length = len(time_original) for j in range(time_length): f_original = interp1d(x_original[j,:], U_original[j,:], kind="linear", bounds_error=False, fill_value=0) U_original_loc[j,:] = f_original(outcrop) f_estimated = interp1d(x_estimated[j,:], U_estimated[j,:], kind="linear", bounds_error=False, fill_value=0) U_estimated_loc[j,:] = f_estimated(outcrop) #図にプロットする fig2, ax2 = plt.subplots(1, 1, figsize=(8, 4)) plt.subplots_adjust(wspace=0.6, hspace=0.4) for k in range(len(outcrop)): ax2.plot(time_original[start_d:endtime_d], U_original_loc[start_d:endtime_d,k], '--', color= linecolor[k], label=None) ax2.plot(time_estimated[start_d:endtime_d], 
U_estimated_loc[start_d:endtime_d,k], '-', color= linecolor[k], label='{} km'.format(outcrop[k] / 1000)) ax2.legend() ax2.set_xlabel('Time (s.)') ax2.set_ylabel('Velocity (m/s)') # ax2.set_title('Velocity') plt.savefig('compare_result_fixedloc.svg') ``` # tests with normal random numbers ``` import numpy as np import matplotlib.pyplot as plt from tensorflow.keras.models import load_model from scipy import stats from scipy.stats import sem import os %matplotlib inline def check_noise(model=None, X_test=None, y_test=None, y_min=None, y_max=None, min_x=None, max_x=None, err_rate=0.10, datadir = None, resdir = None, gclass = 4, topodx = 5, plot_fig = True, ): # Obtain the original data sets if X_test is None: X_test = np.load(os.path.join(datadir, 'H_test.npy')) if y_test is None: y_test = np.load(os.path.join(datadir, 'icond_test.npy')) if y_min is None: y_min = np.load(os.path.join(datadir, 'icond_min.npy')) if y_max is None: y_max = np.load(os.path.join(datadir, 'icond_max.npy')) # normalization if min_x is None or max_x is None: min_x, max_x = np.load(os.path.join(datadir, 'x_minmax.npy')) X_test_norm = (X_test - min_x) / (max_x - min_x) # add noise # 2 sigma = true parameter times err_rate err = np.random.normal(size=X_test_norm.shape) x_test_norm_w_error = X_test_norm + err * 0.5 * err_rate * X_test_norm num_node_per_gclass = int(X_test_norm.shape[1] / gclass) dist = np.arange(0,num_node_per_gclass)* topodx #print(X_test_norm[1,1000:1010]) #print(x_test_norm_w_error[1,1000:1010]) #print(err[1,1000:1010]) # load the model if the model is None # model = load_model(resdir+'model.hdf5') test_result = model.predict(X_test_norm) test_result = test_result * (y_max - y_min) + y_min test_result_w_error = model.predict(x_test_norm_w_error) test_result_w_error = test_result_w_error * (y_max - y_min) + y_min # Load true parameters icond = np.load(os.path.join(datadir, 'icond_test.npy')) loss = np.loadtxt(os.path.join(resdir, 'loss.txt'), delimiter=',') epoch = 
range(0,len(loss)) vloss = np.loadtxt(resdir+'val_loss.txt',delimiter=',') # Calculate residuals resi = (test_result - icond) resi_w_error = (test_result_w_error - icond) resi_w_error_ratio = (test_result_w_error - icond) / icond # Plot figure of each test if plot_fig: plt.figure() plt.plot(x_test_norm_w_error[1,0:num_node_per_gclass], label='With Error') plt.plot(X_test_norm[1,0:num_node_per_gclass], label='Original') plt.xlabel('Distance') plt.ylabel('Normalized thickness') plt.legend() titlelabel = ['Initial Length', 'Initial Height', '$C_1$', '$C_2$', '$C_3$', '$C_4$', '$S_1$'] hfont = {'fontname':'Century Gothic'} textcol = 'k' for i in range(len(titlelabel)): plt.figure() plt.plot(icond[:,i],test_result[:,i],"bo",label='without error') plt.plot(icond[:,i],test_result_w_error[:,i],"ro",label='with error ({:.0f}%)'.format(err_rate*100)) plt.title(titlelabel[i],color=textcol,size=14,**hfont) plt.xlabel('True values',color=textcol,size=14,**hfont) plt.ylabel('Estimated values',color=textcol,size=14,**hfont) plt.legend() plt.tick_params(labelsize=14,colors=textcol) plt.savefig(titlelabel[i] + 'err{:.0f}'.format(err_rate*100) + '.pdf') #plt.show() for i in range(len(titlelabel)): plt.figure() plt.hist(resi_w_error[:,i],bins=20) plt.title(titlelabel[i]) plt.xlabel('Deviation from true value') plt.ylabel('Frequency') #plt.show() # print("Mean Square error") # print(np.average(resi**2,axis=0)) # print("MSE with noise") # print(np.average(resi_w_error**2,axis=0)) # print("Mean error") # print(np.average(resi,axis=0)) # print("Mean error with noise) # print(np.average(resi_w_error,axis=0)) # print("2 sigma of residuals") # print(np.std(resi,axis=0)*2) # print("2 sigma of residuals with noise") # print(np.std(resi_w_error,axis=0)*2) # print("ratio of residuals to true value") # print(np.average(np.abs(resi)/icond,axis=0)) # print("ratio of residuals to true value with noise") # print(np.average(np.abs(resi_w_error)/icond,axis=0)) # print("p-values of the Shapiro-Wilk 
test for normality") # for i in range(resi.shape[1]): # print(stats.shapiro(resi[:,i])[1]) # print("p-values of the Shapiro-Wilk test for normality (with error)") # for i in range(resi_w_error.shape[1]): # print(stats.shapiro(resi_w_error[:,i])[1]) # Return normalized RMSE RMS = np.sqrt(np.sum(resi_w_error_ratio ** 2) / resi_w_error_ratio.shape[0] / resi_w_error_ratio.shape[1]) return RMS if __name__ == "__main__": datadir = '/home/naruse/public/naruse/DeepLearningTurbidite/distance/10/data/' resdir = '/home/naruse/public/naruse/DeepLearningTurbidite/result_training_num_10/3500/' model = load_model(os.path.join(resdir, 'model.hdf5')) noisetest_err_rate = np.linspace(0,2.0,40) result_noise = np.zeros(len(noisetest_err_rate)) result_noise_stderr = np.zeros(len(noisetest_err_rate)) num_tests = 20 for i in range(len(noisetest_err_rate)): testres = np.zeros([num_tests]) for j in range(num_tests): testres[j] = check_noise(model, datadir=datadir, resdir=resdir, err_rate=noisetest_err_rate[i], plot_fig=False) result_noise[i] = np.average(testres) result_noise_stderr[i] = sem(testres) np.savetxt("result_w_error.csv",result_noise,delimiter=',') %matplotlib inline # plot result of noise tests fig1, ax1 = plt.subplots(1, 1, figsize=(8/2.54, 5/2.54)) plt.subplots_adjust(bottom=0.3, wspace=0.5, hspace=0.3) ax1.errorbar(noisetest_err_rate*100, result_noise, color='g', yerr=result_noise_stderr, ecolor='k', capsize=1.) 
# ax1.title("$S_L$") ax1.set_xlabel('Ratio of standard deviation of\n random noise to original value (%)') ax1.set_ylabel('RMS error') ax1.set_ylim([0,0.5]) # ax1.legend() # plt.tick_params(labelsize=14,colors=textcol) plt.tight_layout() plt.savefig("tex/fig10.eps") ``` # Subsampling tests ``` import numpy as np import matplotlib.pyplot as plt from tensorflow.keras.models import load_model from scipy import stats from scipy.interpolate import interp1d from scipy.stats import sem from os.path import join %matplotlib inline def check_interp(model=None, X_test=None, y_test=None, y_min=None, y_max=None, frac = 0.005, datadir = None, resdir = None, plot_fig = True, ): # Obtain the original data sets if X_test is None: X_test = np.load(join(datadir, 'H_test.npy')) if y_test is None: y_test = np.load(join(datadir, 'icond_test.npy')) if y_min is None: y_min = np.load(join(datadir, 'icond_min.npy')) if y_max is None: y_max = np.load(join(datadir, 'icond_max.npy')) # normalization min_x, max_x = np.load(join(datadir, 'x_minmax.npy')) X_test_norm = (X_test - min_x) / (max_x - min_x) # Subsampling #frac = 0.005 # ratio of subsampling gclass = 4 # number of grain size classes coord_num = X_test_norm.shape[1] / gclass # number of grids sam_coord_num = np.round(frac * coord_num) # number of subsampled grids x_coord = np.arange(X_test_norm.shape[1]/ gclass) # Index number of grids sampleid = np.sort(np.random.choice(x_coord,int(sam_coord_num),replace=False)) # subsampled id of grids thick_interp = np.zeros(X_test.shape) # interpolated thickness data for j in range(gclass): sid = sampleid + coord_num * j #print(sid) sindex = sid.astype(np.int32) f = interp1d(sid,X_test_norm[:,sindex], kind="linear", fill_value='extrapolate') # interpolation funciton for the jth grain size class coord_range = np.arange(coord_num*j, coord_num*(j+1)) # range to interpolate thick_interp[:,coord_range.astype(np.int32)] = f(coord_range) # interpolated data # Load the model and predict from subsampled 
data if model is None: model = load_model(join(resdir, 'model.hdf5')) test_result = model.predict(X_test_norm) test_result = test_result * (y_max - y_min) + y_min test_result_sample = model.predict(thick_interp) test_result_sample = test_result_sample * (y_max - y_min) + y_min # calculate residuals icond = np.load(join(datadir, 'icond_test.npy')) resi = test_result - icond resi_sample = test_result_sample - icond resi_sample_ratio = (test_result_sample - icond) / icond # comparison with original reconstruction titlelabel = ['Initial Length', 'Initial Height', '$C_1$', '$C_2$', '$C_3$', '$C_4$','$S_1$'] hfont = {'fontname':'Century Gothic'} textcol = 'w' if plot_fig: for i in range(len(titlelabel)): plt.figure() plt.plot(icond[:,i],test_result[:,i],"bo",label='Original') plt.plot(icond[:,i],test_result_sample[:,i],"ro",label='Resampled data ({:.1f}%)'.format(frac*100)) plt.title(titlelabel[i],color=textcol,size=14,**hfont) plt.xlabel('True values',color=textcol,size=14,**hfont) plt.ylabel('Estimated values',color=textcol,size=14,**hfont) plt.legend() plt.tick_params(labelsize=14,colors=textcol) plt.savefig(titlelabel[i] + 'resample{:.1f})'.format(frac*100) + '.pdf') plt.show() for i in range(len(titlelabel)): plt.figure() plt.hist(resi_sample[:,i],bins=20) plt.title(titlelabel[i]) plt.xlabel('Deviation from true value') plt.ylabel('Frequency') plt.show() print("mean residuals") print(np.average(resi,axis=0)) print("mean residuals (subsampled)") print(np.average(resi_sample,axis=0)) print("2 sigma of residuals") print(np.std(resi,axis=0)*2) print("2 sigma of residuals (subsampled)") print(np.std(resi_sample,axis=0)*2) print() print("p-values of the Shapiro-Wilk test for normality") for i in range(resi.shape[1]): print(stats.shapiro(resi[:,i])[1]) print("p-values of the Shapiro-Wilk test for normality (with error)") for i in range(resi_sample.shape[1]): print(stats.shapiro(resi_sample[:,i])[1]) # Return normalized RMSE RMS = np.sqrt(np.sum(resi_sample_ratio ** 2) / 
resi_sample_ratio.shape[0] / resi_sample_ratio.shape[1]) return RMS if __name__ == "__main__": datadir = '/home/naruse/public/naruse/DeepLearningTurbidite/distance/10/data/' resdir = '/home/naruse/public/naruse/DeepLearningTurbidite/result_training_num_10/3500/' subsampling_result_file = join(resdir, 'subsampling_result.npy') subsampling_result_error_file = join(resdir, 'subsampling_result_error.npy') model = load_model(join(resdir, 'model.hdf5')) subsampling_test_err_rate = np.linspace(0.05,0.001,50) result_subsampling = np.zeros([len(subsampling_test_err_rate)]) result_subsampling_error = np.zeros([len(subsampling_test_err_rate)]) num_tests = 20 for i in range(len(subsampling_test_err_rate)): testres = np.zeros([num_tests]) for j in range(num_tests): testres[j] = check_interp(model, datadir=datadir, resdir=resdir, frac=subsampling_test_err_rate[i], plot_fig=False) result_subsampling[i] = np.average(testres) result_subsampling_error[i] = sem(testres) np.save(subsampling_result_file, result_subsampling) np.save(subsampling_result_error_file, result_subsampling_error) %matplotlib inline fig1, ax1 = plt.subplots(1, 1, figsize=(8/2.54, 5/2.54)) plt.subplots_adjust(bottom=0.3, wspace=0.5, hspace=0.3) plt.errorbar(subsampling_test_err_rate*100, result_subsampling, yerr=result_subsampling_error, ecolor='k', capsize=1.) ax1.set_xlabel('Ratio of Subsampled Grids (%)') ax1.set_ylabel('RMS error') ax1.set_xticks(np.arange(0, 5, 0.5)) # ax1.legend() # plt.tick_params(labelsize=14,colors=textcol) plt.tight_layout() plt.savefig("tex/fig11.eps") ```
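The subsampling test above rests on one operation: keep a random fraction of grid points and linearly interpolate the thickness profile back onto the full grid before feeding it to the model. Stripped of the model and file I/O, that step can be sketched on a synthetic profile (all names and values here are illustrative, not taken from the dataset):

```python
import numpy as np
from scipy.interpolate import interp1d

rng = np.random.default_rng(0)

# Synthetic "deposit thickness" profile on a regular grid (illustrative values)
coord_num = 1000
x_coord = np.arange(coord_num)
profile = np.sin(2 * np.pi * x_coord / coord_num)

# Keep ~3% of the grid points; pin both endpoints so no extrapolation is needed
frac = 0.03
picked = rng.choice(x_coord, int(round(frac * coord_num)), replace=False)
sample_id = np.unique(np.concatenate(([0, coord_num - 1], picked)))

# Linearly interpolate the subsampled values back onto the full grid,
# mirroring the interp1d call in check_interp
f = interp1d(sample_id, profile[sample_id], kind="linear")
reconstructed = f(x_coord)

# Root-mean-square deviation of the reconstruction from the original profile
rms = np.sqrt(np.mean((reconstructed - profile) ** 2))
print(f"kept {len(sample_id)} of {coord_num} points, RMS = {rms:.4f}")
```

Repeating this for a range of `frac` values and averaging over several random draws, as `check_interp` does, gives the error-versus-sampling-density curve plotted above.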
Plotting with matplotlib - 1 ======================== ``` # plotting imports import matplotlib.pyplot as plt import seaborn as sns # other imports import numpy as np import pandas as pd from scipy import stats ``` Hello world --- Using the `pyplot` notation, very similar to how MATLAB works ``` plt.plot([0, 1, 2, 3, 4], [0, 1, 2, 5, 10], 'bo-') plt.text(1.5, 5, 'Hello world', size=14) plt.xlabel('X axis\n($\mu g/mL$)') plt.ylabel('y axis\n($X^2$)'); ``` Hello world, reprise --- Using the recommended "object-oriented" (OO) style ``` fig, ax = plt.subplots() ax.plot([0, 1, 2, 3, 4], [0, 1, 2, 5, 10], 'bo-') ax.text(1.5, 5, 'Hello world', size=14) ax.set_xlabel('X axis\n($\mu g/mL$)') ax.set_ylabel('y axis\n($X^2$)'); # create some data x = np.linspace(0, 2, 100) fig, ax = plt.subplots() ax.plot(x, x, label='linear') ax.plot(x, x**2, label='quadratic') ax.plot(x, x**3, label='cubic') ax.set_xlabel('x label') ax.set_ylabel('y label') ax.set_title('Simple Plot') ax.legend() ``` Controlling a figure aspect --- ``` # figure size # width / height fig, ax = plt.subplots(figsize=(9, 4)) ax.plot(x, x, label='linear') ax.plot(x, x**2, label='quadratic') ax.plot(x, x**3, label='cubic') ax.set_xlabel('x label') ax.set_ylabel('y label') ax.set_title('Simple Plot') ax.legend(); fig, ax = plt.subplots(figsize=(9, 4)) # change markers ax.plot(x, x, '--', color='grey', label='linear') ax.plot(x, x**2, '.-', color='red', label='quadratic') ax.plot(x, x**3, '*', color='#3bb44a', label='cubic') ax.set_xlabel('x label') ax.set_ylabel('y label') ax.set_title('Simple Plot') # move the legend ax.legend(loc='upper right'); # alternative ways to move it # ax.legend(loc='center left', # bbox_to_anchor=(1, 0.5), # ncol=3); ``` Multiple panels --- ``` x1 = np.linspace(0.0, 5.0) x2 = np.linspace(0.0, 2.0) y1 = np.cos(2 * np.pi * x1) * np.exp(-x1) y2 = np.cos(2 * np.pi * x2) # rows, columns fig, axes = plt.subplots(2, 1, figsize=(6, 4)) # axes is a list of "panels" print(axes) ax = axes[0]
ax.plot(x1, y1, 'o-') ax.set_title('A tale of 2 subplots') ax.set_ylabel('Damped oscillation') ax = axes[1] ax.plot(x2, y2, '.-') ax.set_xlabel('time (s)') ax.set_ylabel('Undamped'); ``` Automagically adjust panels so that they fit in the figure --- ``` def example_plot(ax, fontsize=12): ax.plot([1, 2]) ax.set_xlabel('x-label', fontsize=fontsize) ax.set_ylabel('y-label', fontsize=fontsize) ax.set_title('Title', fontsize=fontsize) fig, axs = plt.subplots(2, 2, figsize=(4, 4), constrained_layout=False) print(axs) for ax in axs.flat: example_plot(ax) # warning: "constrained_layout" is an experimental feature fig, axs = plt.subplots(2, 2, figsize=(4, 4), constrained_layout=True) for ax in axs.flat: example_plot(ax) # alternative way fig, axs = plt.subplots(2, 2, figsize=(4, 4), constrained_layout=False) for ax in axs.flat: example_plot(ax) # alternative to constrained_layout plt.tight_layout(); ``` Example of manipulating axes limits --- Extra: a look at ways to choose colors and manipulating transparency ``` fig, axes = plt.subplots(1, 2, figsize=(9, 4)) # same plot for both panels # we are just gonna change the axes' limits for ax in axes: # more color choices # (see here for a full list: https://matplotlib.org/tutorials/colors/colors.html) # xkcd rgb color survey: https://xkcd.com/color/rgb/ ax.plot(x, x, '--', color='xkcd:olive green', label='linear') # RGBA (red, green, blue, alpha) ax.plot(x, x**2, '.-', color=(0.1, 0.2, 0.5, 0.3), label='quadratic') # one of {'b', 'g', 'r', 'c', 'm', 'y', 'k', 'w'} # they are the single character short-hand notations for: # blue, green, red, cyan, magenta, yellow, black, and white ax.plot(x, x**3, '*', color='m', label='cubic') # transparency can be manipulated with the "alpha" kwarg (= keyword argument) ax.plot(x, x**4, '-', color='b', linewidth=4, alpha=0.3, label='white house') ax.set_xlabel('x label') ax.set_ylabel('y label') ax.set_title('Simple Plot') # only manipulate last axes ax.set_ylim(1, 16.4) ax.set_xlim(1.65, 2.03) 
ax.legend(loc='center left', bbox_to_anchor=(1, 0.5), title='Fit'); ``` Other sample plots using "vanilla" matplotlib --- ``` # scatter plot fig, ax = plt.subplots(figsize=(6, 4)) N = 10 x = np.linspace(0, 1, N) y = x ** 2 # colors is a list of colors # in the same format as shown before colors = np.linspace(0, 1, N) # alternative # colors = ['b', 'b', 'b', # 'k', 'k', 'k', # 'r', 'r', 'r', # 'xkcd:jade'] area = 5 + (20 * x) ** 3 print(f'x: {x}') print(f'y: {y}') print(f'colors: {colors}') print(f'area: {area}') ax.scatter(x, y, s=area, c=colors, alpha=0.9, edgecolors='w', linewidths=3, label='Data') ax.legend(loc='upper left'); # generate 2d random data data = np.random.randn(2, 100) data # histogram fig, axs = plt.subplots(1, 2, figsize=(6, 3)) bins = 25 axs[0].hist(data[0], bins=bins) axs[1].hist2d(data[0], data[1], bins=bins); ``` Other useful tips --- ``` # scatter plot with log axes fig, ax = plt.subplots(figsize=(6, 4)) N = 10 x = np.linspace(0, 10, N) y = 2 ** x colors = np.linspace(0, 1, N) area = 500 ax.scatter(x, y, s=area, c=colors, alpha=0.9, edgecolors='w', linewidths=3, label='Data') ax.set_yscale('log', base=10); # scatter plot with log axes fig, ax = plt.subplots(figsize=(6, 4)) N = 10 x = 10 ** np.linspace(1, 4, N) y = x ** 2 colors = np.linspace(0, 1, N) area = 500 ax.scatter(x, y, s=area, c=colors, alpha=0.9, edgecolors='w', linewidths=3, label='Data') ax.set_yscale('log', base=2) ax.set_xscale('log', base=10); # changing colormap # find an exhaustive list here: # https://matplotlib.org/3.1.0/tutorials/colors/colormaps.html fig, ax = plt.subplots(figsize=(6, 4)) N = 10 x = 10 ** np.linspace(1, 4, N) y = x ** 2 colors = np.linspace(0, 1, N) area = 500 ax.scatter(x, y, s=area, c=colors, alpha=0.9, edgecolors='w', linewidths=3, label='Data', # cmap='plasma', # cmap='jet', # cmap='Blues', # cmap='Blues_r', cmap='tab20', ) ax.set_yscale('log', base=2) ax.set_xscale('log', base=10); ``` Saving your plot --- ``` fig, ax = plt.subplots(figsize=(3, 2)) N 
= 10 x = 10 ** np.linspace(1, 4, N) y = x ** 2 colors = np.linspace(0, 1, N) area = 500 ax.scatter(x, y, s=area, c=colors, alpha=0.9, edgecolors='w', linewidths=3, cmap='tab20', label='My awesome data is the best thing ever', # rasterized=True ) ax.legend(bbox_to_anchor=(1, 0.5), loc='center left') ax.set_yscale('log', base=2) ax.set_xscale('log', base=10) plt.savefig('the_awesomest_plot_ever.png', dpi=300, bbox_inches='tight', transparent=True ) plt.savefig('the_awesomest_plot_ever.svg', dpi=300, bbox_inches='tight', transparent=True); ``` --- Exercises --------- Using the data from this URL: https://evocellnet.github.io/ecoref/data/phenotypic_data.tsv Can you make a scatterplot for the relationship between s-scores and the corrected p-value? Can you make a scatterplot for the relationship between s-scores and the corrected p-value, but only considering two strains plotted with different colors? Select four conditions and create a multipanel figure with the same scatterplot for each condition. Experiment with different layouts. Using the [Iris dataset](https://en.wikipedia.org/wiki/Iris_flower_data_set) (which you can find at `../data/iris.csv`), prepare the following plot: for each pair of variables, prepare a scatterplot with each species having its own color. Make the same series of plots as before but in a single figure. Make a single panel now, changing the dots' sizes according to the third variable
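For the multipanel exercises, the pattern is always the same: one `plt.subplots` grid, then a loop that draws each panel. Here is a minimal starting sketch with synthetic data standing in for the Iris measurements (species names kept, values random; the output filename is illustrative):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so the sketch also runs outside a notebook
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)

# Synthetic stand-in for the Iris data: three species, two measured variables
species = ["setosa", "versicolor", "virginica"]
data = {s: rng.normal(loc=i + 1.0, scale=0.3, size=(50, 2)) for i, s in enumerate(species)}

# One panel per species: highlight that species, keep the others as grey context
fig, axes = plt.subplots(1, 3, figsize=(9, 3), sharex=True, sharey=True)
for ax, s in zip(axes, species):
    for other, values in data.items():
        col = "C0" if other == s else "0.85"
        ax.scatter(values[:, 0], values[:, 1], s=15, color=col)
    ax.set_title(s)
    ax.set_xlabel("variable 1")
axes[0].set_ylabel("variable 2")
plt.tight_layout()
fig.savefig("multipanel_sketch.png", dpi=150)
```

Swapping the random arrays for columns read with `pd.read_csv` gives the real exercise; the plotting loop does not change.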
## Interacting with CerebralCortex Data Cerebral Cortex is MD2K's big data cloud tool designed to support population-scale data analysis, visualization, model development, and intervention design for mobile-sensor data. It provides the ability to do machine learning model development on population-scale datasets and provides interoperable interfaces for aggregating diverse data sources. This page provides an overview of the core Cerebral Cortex operations to familiarize you with how to discover and interact with the different sources of data that may be contained within the system. _Note:_ While some of these examples show generated data, they are designed to work on real-world mCerebrum data; the signal generators were built to facilitate testing and evaluation of the Cerebral Cortex platform by individuals who cannot access the original datasets or do not wish to collect data before evaluating the system. ## Setting Up Environment This notebook does not ship with the runtime environments necessary to run Cerebral Cortex. The following commands will download and install the required tools, frameworks, and datasets.
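The long setup cell that follows branches on a handful of environment checks before installing anything. In isolation, that detection step amounts to the sketch below (the function name is illustrative, not part of the Cerebral Cortex API):

```python
import importlib.util
import os
import sys

def detect_runtime():
    """Report the environment facts the setup cell branches on."""
    return {
        # Colab injects its own module into sys.modules
        "in_colab": "google.colab" in sys.modules,
        # Java and Spark locations are communicated via environment variables
        "java_home_set": "JAVA_HOME" in os.environ,
        "spark_home_set": "SPARK_HOME" in os.environ,
        # find_spec returns None when a package is not importable
        "cc_kernel_installed": importlib.util.find_spec("cerebralcortex") is not None,
    }

print(detect_runtime())
```

Each `False` entry corresponds to one install/configure branch in the setup cell; each `True` entry lets it skip that work.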
``` import importlib, sys, os from os.path import expanduser sys.path.insert(0, os.path.abspath('..')) DOWNLOAD_USER_DATA=False ALL_USERS=False #this will only work if DOWNLOAD_USER_DATA=True IN_COLAB = 'google.colab' in sys.modules MD2K_JUPYTER_NOTEBOOK = "MD2K_JUPYTER_NOTEBOOK" in os.environ if (get_ipython().__class__.__name__=="ZMQInteractiveShell"): IN_JUPYTER_NOTEBOOK = True JAVA_HOME_DEFINED = "JAVA_HOME" in os.environ SPARK_HOME_DEFINED = "SPARK_HOME" in os.environ PYSPARK_PYTHON_DEFINED = "PYSPARK_PYTHON" in os.environ PYSPARK_DRIVER_PYTHON_DEFINED = "PYSPARK_DRIVER_PYTHON" in os.environ HAVE_CEREBRALCORTEX_KERNEL = importlib.util.find_spec("cerebralcortex") is not None SPARK_VERSION = "3.1.2" SPARK_URL = "https://archive.apache.org/dist/spark/spark-"+SPARK_VERSION+"/spark-"+SPARK_VERSION+"-bin-hadoop2.7.tgz" SPARK_FILE_NAME = "spark-"+SPARK_VERSION+"-bin-hadoop2.7.tgz" CEREBRALCORTEX_KERNEL_VERSION = "3.3.14" DATA_PATH = expanduser("~") if DATA_PATH[:-1]!="/": DATA_PATH+="/" USER_DATA_PATH = DATA_PATH+"cc_data/" if MD2K_JUPYTER_NOTEBOOK: print("Java, Spark, and CerebralCortex-Kernel are installed and paths are already setup.") else: SPARK_PATH = DATA_PATH+"spark-"+SPARK_VERSION+"-bin-hadoop2.7/" if(not HAVE_CEREBRALCORTEX_KERNEL): print("Installing CerebralCortex-Kernel") !pip -q install cerebralcortex-kernel==$CEREBRALCORTEX_KERNEL_VERSION else: print("CerebralCortex-Kernel is already installed.") if not JAVA_HOME_DEFINED: if not os.path.exists("/usr/lib/jvm/java-8-openjdk-amd64/") and not os.path.exists("/usr/lib/jvm/java-11-openjdk-amd64/"): print("\nInstalling/Configuring Java") !sudo apt update !sudo apt-get install -y openjdk-8-jdk-headless os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64/" elif os.path.exists("/usr/lib/jvm/java-8-openjdk-amd64/"): print("\nSetting up Java path") os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64/" elif os.path.exists("/usr/lib/jvm/java-11-openjdk-amd64/"): print("\nSetting up Java path") 
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-11-openjdk-amd64/" else: print("JAVA is already installed.") if (IN_COLAB or IN_JUPYTER_NOTEBOOK) and not MD2K_JUPYTER_NOTEBOOK: if SPARK_HOME_DEFINED: print("SPARK is already installed.") elif not os.path.exists(SPARK_PATH): print("\nSetting up Apache Spark ", SPARK_VERSION) !pip -q install findspark import pyspark spark_installation_path = os.path.dirname(pyspark.__file__) import findspark findspark.init(spark_installation_path) if not os.getenv("PYSPARK_PYTHON"): os.environ["PYSPARK_PYTHON"] = os.popen('which python3').read().replace("\n","") if not os.getenv("PYSPARK_DRIVER_PYTHON"): os.environ["PYSPARK_DRIVER_PYTHON"] = os.popen('which python3').read().replace("\n","") else: print("SPARK is already installed.") else: raise SystemExit("Please check your environment configuration at: https://github.com/MD2Korg/CerebralCortex-Kernel/") if DOWNLOAD_USER_DATA: if not os.path.exists(USER_DATA_PATH): if ALL_USERS: print("\nDownloading all users' data.") !rm -rf $USER_DATA_PATH !wget -q http://mhealth.md2k.org/images/datasets/cc_data.tar.bz2 && tar -xf cc_data.tar.bz2 -C $DATA_PATH && rm cc_data.tar.bz2 else: print("\nDownloading a user's data.") !rm -rf $USER_DATA_PATH !wget -q http://mhealth.md2k.org/images/datasets/s2_data.tar.bz2 && tar -xf s2_data.tar.bz2 -C $DATA_PATH && rm s2_data.tar.bz2 else: print("Data already exists. Please remove folder", USER_DATA_PATH, "if you want to download the data again") ``` # Import Your Own Data mCerebrum is not the only way to collect and load data into *Cerebral Cortex*. It is possible to import your own structured datasets into the platform. This example will demonstrate how to load existing data and subsequently how to read it back from Cerebral Cortex through the same mechanisms you have been utilizing. Additionally, it demonstrates how to write a custom data transformation function to manipulate data and produce a smoothed result which can then be visualized.
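The import-then-smooth workflow this section walks through is framework-agnostic. Before doing it with Cerebral Cortex, here is the same idea in plain pandas (inline CSV text and column values are made up for illustration):

```python
import io
import pandas as pd

# A tiny CSV payload standing in for a file on disk
csv_text = """timestamp,some_vals
2019-01-01 00:00:00,0.2
2019-01-01 00:00:04,0.8
2019-01-01 00:00:09,0.4
2019-01-01 00:00:14,0.9
2019-01-01 00:00:21,0.1
"""

# "Import": parse the CSV into a typed DataFrame
df = pd.read_csv(io.StringIO(csv_text), parse_dates=["timestamp"])

# "Smooth": average the values over fixed 10-second windows
smoothed = (df.set_index("timestamp")["some_vals"]
              .resample("10s").mean()
              .rename("vals_avg"))
print(smoothed)
```

Cerebral Cortex performs the same two steps below, but with its own reader and a distributed windowed computation instead of a single in-memory `resample`.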
## Initialize the system ``` from cerebralcortex.kernel import Kernel CC = Kernel(cc_configs="default", study_name="default", new_study=True) ``` # Import Data Cerebral Cortex provides a set of predefined data import routines that fit typical use cases. The most common is the CSV data parser, `csv_data_parser`. These parsers are easy to write and can be extended to support most types of data. Additionally, the data importer, `import_data`, needs to be brought into this notebook so that we can start the data import process. The `import_data` method requires several parameters that are discussed below. - `cc_config`: The path to the configuration files for Cerebral Cortex; this is the same folder that you would utilize for the `Kernel` initialization - `input_data_dir`: The path to where the data to be imported is located; in this example, `sample_data` is available in the file/folder browser on the left and you should explore the files located inside of it - `user_id`: The universally unique identifier (UUID) that owns the data to be imported into the system - `data_file_extension`: The type of files to be considered for import - `data_parser`: The parser function that defines how to interpret the data samples on a per-line basis - `gen_report`: A simple True/False value that controls if a report is printed to the screen when complete ### Download sample data ``` sample_file = DATA_PATH+"data.csv" !wget -q https://raw.githubusercontent.com/MD2Korg/CerebralCortex/master/jupyter_demo/sample_data/data.csv -O $sample_file iot_stream = CC.read_csv(file_path=sample_file, stream_name="some-sample-iot-stream", column_names=["timestamp", "some_vals", "version", "user"]) ``` ## View Imported Data ``` iot_stream.show(4) ``` ## Document Data ``` from cerebralcortex.core.metadata_manager.stream.metadata import Metadata, DataDescriptor, ModuleMetadata stream_metadata = Metadata() stream_metadata.set_name("iot-data-stream").set_description("This is randomly generated data for
demo purposes.") \ .add_dataDescriptor( DataDescriptor().set_name("timestamp").set_type("datetime").set_attribute("description", "UTC timestamp of data point collection.")) \ .add_dataDescriptor( DataDescriptor().set_name("some_vals").set_type("float").set_attribute("description", \ "Random values").set_attribute("range", \ "Data is between 0 and 1.")) \ .add_dataDescriptor( DataDescriptor().set_name("version").set_type("int").set_attribute("description", "version of the data")) \ .add_dataDescriptor( DataDescriptor().set_name("user").set_type("string").set_attribute("description", "user id")) \ .add_module(ModuleMetadata().set_name("cerebralcortex.data_importer").set_attribute("url", "https://md2k.org").set_author( "Nasir Ali", "dev@md2k.org")) iot_stream.metadata = stream_metadata ``` ## View Metadata ``` iot_stream.metadata ``` ## How to write an algorithm This section provides an example of how to write a simple smoothing algorithm and apply it to the data that was just imported. ### Import the necessary modules ``` from pyspark.sql.functions import pandas_udf, PandasUDFType from pyspark.sql.types import StructField, StructType, StringType, FloatType, TimestampType, IntegerType from pyspark.sql.functions import minute, second, mean, window from pyspark.sql import functions as F import numpy as np ``` ### Define the Schema This schema defines what the computation module will return to the execution context for each row or window in the datastream. ``` # column name and return data type # acceptable data types for schema are - "null", "string", "binary", "boolean", # "date", "timestamp", "decimal", "double", "float", "byte", "integer", # "long", "short", "array", "map", "structfield", "struct" schema="timestamp timestamp, some_vals double, version int, user string, vals_avg double" ``` ### Write a user-defined function The user-defined function (UDF) is one of two mechanisms available for distributed data processing within the Apache Spark framework.
In this case, we are computing a simple windowed average.

```
def smooth_algo(key, df):
    # key contains all the grouped column values
    # In this example, grouped columns are (userID, version, window{start, end})
    # For example, if you want to get the start and end time of a window, you can
    # get both values by calling key[2]["start"] and key[2]["end"]
    some_vals_mean = df["some_vals"].mean()
    df["vals_avg"] = some_vals_mean
    return df
```

## Run the smoothing algorithm on imported data

The smoothing algorithm is applied to the datastream by calling the `compute` method and passing the algorithm as a parameter, along with the `schema` describing the columns that are returned. Finally, the `windowDuration` parameter specifies the size of the time windows into which the data is segmented before applying the algorithm.

Notice that when the next cell is run, the operation completes nearly instantaneously. This is due to the lazy evaluation aspects of the Spark framework. When you run the subsequent cell to show the data, the algorithm will be applied to the whole dataset before displaying the results on the screen.

```
smooth_stream = iot_stream.compute(smooth_algo, schema=schema, windowDuration=10)

smooth_stream.show(truncate=False)
```

## Visualize data

The two plots below show the original and smoothed data, to visually check how the algorithm transformed the data.

```
from cerebralcortex.plotting.basic.plots import plot_timeseries

plot_timeseries(iot_stream)
plot_timeseries(smooth_stream)
```
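Because the Spark computation above is lazy, it can help to see the same windowed-mean logic run eagerly. The sketch below is an illustration only and is not part of the Cerebral Cortex API: a hypothetical `smooth_pandas` helper reproduces what `smooth_algo` does per 10-second window, using plain pandas on made-up data with the same `timestamp` and `some_vals` column names.

```python
import pandas as pd

# Hypothetical stand-in for the Spark-based smoothing above: the same
# 10-second windowed mean, computed eagerly with pandas.
def smooth_pandas(df, window_seconds=10):
    # Assign each row to a fixed-width time window, then broadcast the
    # per-window mean of `some_vals` back onto every row, mirroring the
    # `vals_avg` column produced by smooth_algo.
    windows = df["timestamp"].dt.floor(f"{window_seconds}s")
    out = df.copy()
    out["vals_avg"] = df.groupby(windows)["some_vals"].transform("mean")
    return out

# Six samples, 5 seconds apart -> three 10-second windows of two rows each
ts = pd.date_range("2021-01-01", periods=6, freq="5s")
demo = pd.DataFrame({"timestamp": ts, "some_vals": [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]})
smoothed = smooth_pandas(demo)
print(smoothed["vals_avg"].tolist())
```

Each pair of rows that shares a 10-second window gets the same `vals_avg`, which is exactly the per-window constant that `smooth_algo` writes into its group's DataFrame.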
``` import numpy as np import pandas as pd from matplotlib import pyplot as plt from tqdm import tqdm %matplotlib inline from torch.utils.data import Dataset, DataLoader import torch import torchvision import torch.nn as nn import torch.optim as optim from torch.nn import functional as F device = torch.device("cuda" if torch.cuda.is_available() else "cpu") print(device) ``` # Generate dataset ``` y = np.random.randint(0,10,5000) idx= [] for i in range(10): print(i,sum(y==i)) idx.append(y==i) x = np.zeros((5000,2)) x[idx[0],:] = np.random.multivariate_normal(mean = [4,6.5],cov=[[0.01,0],[0,0.01]],size=sum(idx[0])) x[idx[1],:] = np.random.multivariate_normal(mean = [5.5,6],cov=[[0.01,0],[0,0.01]],size=sum(idx[1])) x[idx[2],:] = np.random.multivariate_normal(mean = [4.5,4.5],cov=[[0.01,0],[0,0.01]],size=sum(idx[2])) # x[idx[0],:] = np.random.multivariate_normal(mean = [5,5],cov=[[0.1,0],[0,0.1]],size=sum(idx[0])) # x[idx[1],:] = np.random.multivariate_normal(mean = [6,6],cov=[[0.1,0],[0,0.1]],size=sum(idx[1])) # x[idx[2],:] = np.random.multivariate_normal(mean = [5.5,6.5],cov=[[0.1,0],[0,0.1]],size=sum(idx[2])) x[idx[3],:] = np.random.multivariate_normal(mean = [3,3.5],cov=[[0.01,0],[0,0.01]],size=sum(idx[3])) x[idx[4],:] = np.random.multivariate_normal(mean = [2.5,5.5],cov=[[0.01,0],[0,0.01]],size=sum(idx[4])) x[idx[5],:] = np.random.multivariate_normal(mean = [3.5,8],cov=[[0.01,0],[0,0.01]],size=sum(idx[5])) x[idx[6],:] = np.random.multivariate_normal(mean = [5.5,8],cov=[[0.01,0],[0,0.01]],size=sum(idx[6])) x[idx[7],:] = np.random.multivariate_normal(mean = [7,6.5],cov=[[0.01,0],[0,0.01]],size=sum(idx[7])) x[idx[8],:] = np.random.multivariate_normal(mean = [6.5,4.5],cov=[[0.01,0],[0,0.01]],size=sum(idx[8])) x[idx[9],:] = np.random.multivariate_normal(mean = [5,3],cov=[[0.01,0],[0,0.01]],size=sum(idx[9])) for i in range(10): plt.scatter(x[idx[i],0],x[idx[i],1],label="class_"+str(i)) plt.legend(loc='center left', bbox_to_anchor=(1, 0.5)) foreground_classes = 
{'class_0','class_1', 'class_2'} background_classes = {'class_3','class_4', 'class_5', 'class_6','class_7', 'class_8', 'class_9'} fg_class = np.random.randint(0,3) fg_idx = np.random.randint(0,9) a = [] for i in range(9): if i == fg_idx: b = np.random.choice(np.where(idx[fg_class]==True)[0],size=1) a.append(x[b]) print("foreground "+str(fg_class)+" present at " + str(fg_idx)) else: bg_class = np.random.randint(3,10) b = np.random.choice(np.where(idx[bg_class]==True)[0],size=1) a.append(x[b]) print("background "+str(bg_class)+" present at " + str(i)) a = np.concatenate(a,axis=0) print(a.shape) print(fg_class , fg_idx) a.shape np.reshape(a,(18,1)) a=np.reshape(a,(3,6)) plt.imshow(a) desired_num = 3000 mosaic_list =[] mosaic_label = [] fore_idx=[] for j in range(desired_num): fg_class = np.random.randint(0,3) fg_idx = np.random.randint(0,9) a = [] for i in range(9): if i == fg_idx: b = np.random.choice(np.where(idx[fg_class]==True)[0],size=1) a.append(x[b]) # print("foreground "+str(fg_class)+" present at " + str(fg_idx)) else: bg_class = np.random.randint(3,10) b = np.random.choice(np.where(idx[bg_class]==True)[0],size=1) a.append(x[b]) # print("background "+str(bg_class)+" present at " + str(i)) a = np.concatenate(a,axis=0) mosaic_list.append(np.reshape(a,(18,1))) mosaic_label.append(fg_class) fore_idx.append(fg_idx) mosaic_list = np.concatenate(mosaic_list,axis=1).T # print(mosaic_list) print(np.shape(mosaic_label)) print(np.shape(fore_idx)) class MosaicDataset(Dataset): """MosaicDataset dataset.""" def __init__(self, mosaic_list, mosaic_label, fore_idx): """ Args: mosaic_list (ndarray): Flattened 9-patch mosaics, one per row. mosaic_label (list): Foreground class label for each mosaic. fore_idx (list): Position of the foreground patch in each mosaic.
""" self.mosaic = mosaic_list self.label = mosaic_label self.fore_idx = fore_idx def __len__(self): return len(self.label) def __getitem__(self, idx): return self.mosaic[idx] , self.label[idx], self.fore_idx[idx] batch = 250 msd = MosaicDataset(mosaic_list, mosaic_label , fore_idx) train_loader = DataLoader( msd,batch_size= batch ,shuffle=True) class Wherenet(nn.Module): def __init__(self): super(Wherenet,self).__init__() self.linear1 = nn.Linear(2,50) self.linear2 = nn.Linear(50,50) self.linear3 = nn.Linear(50,1) def forward(self,z): x = torch.zeros([batch,9],dtype=torch.float64) y = torch.zeros([batch,2], dtype=torch.float64) #x,y = x.to("cuda"),y.to("cuda") for i in range(9): x[:,i] = self.helper(z[:,2*i:2*i+2])[:,0] #print(k[:,0].shape,x[:,i].shape) x = F.softmax(x,dim=1) # alphas x1 = x[:,0] for i in range(9): x1 = x[:,i] #print() y = y+torch.mul(x1[:,None],z[:,2*i:2*i+2]) return y , x def helper(self,x): x = F.relu(self.linear1(x)) x = F.relu(self.linear2(x)) x = self.linear3(x) return x trainiter = iter(train_loader) input1,labels1,index1 = trainiter.next() where = Wherenet().double() where = where out_where,alphas = where(input1) out_where.shape,alphas.shape class Whatnet(nn.Module): def __init__(self): super(Whatnet,self).__init__() self.linear1 = nn.Linear(2,50) self.linear2 = nn.Linear(50,3) # self.linear3 = nn.Linear(8,3) def forward(self,x): x = F.relu(self.linear1(x)) #x = F.relu(self.linear2(x)) x = self.linear2(x) return x what = Whatnet().double() # what(out_where) test_data_required = 1000 mosaic_list_test =[] mosaic_label_test = [] fore_idx_test=[] for j in range(test_data_required): fg_class = np.random.randint(0,3) fg_idx = np.random.randint(0,9) a = [] for i in range(9): if i == fg_idx: b = np.random.choice(np.where(idx[fg_class]==True)[0],size=1) a.append(x[b]) # print("foreground "+str(fg_class)+" present at " + str(fg_idx)) else: bg_class = np.random.randint(3,10) b = np.random.choice(np.where(idx[bg_class]==True)[0],size=1) a.append(x[b]) 
# print("background "+str(bg_class)+" present at " + str(i)) a = np.concatenate(a,axis=0) mosaic_list_test.append(np.reshape(a,(18,1))) mosaic_label_test.append(fg_class) fore_idx_test.append(fg_idx) mosaic_list_test = np.concatenate(mosaic_list_test,axis=1).T print(mosaic_list_test.shape) test_data = MosaicDataset(mosaic_list_test,mosaic_label_test,fore_idx_test) test_loader = DataLoader( test_data,batch_size= batch ,shuffle=False) focus_true_pred_true =0 focus_false_pred_true =0 focus_true_pred_false =0 focus_false_pred_false =0 argmax_more_than_half = 0 argmax_less_than_half =0 col1=[] col2=[] col3=[] col4=[] col5=[] col6=[] col7=[] col8=[] col9=[] col10=[] col11=[] col12=[] col13=[] criterion = nn.CrossEntropyLoss() optimizer_where = optim.SGD(where.parameters(), lr=0.01, momentum=0.9) optimizer_what = optim.SGD(what.parameters(), lr=0.01, momentum=0.9) nos_epochs = 250 train_loss=[] test_loss =[] train_acc = [] test_acc = [] loss_curi = [] for epoch in range(nos_epochs): # loop over the dataset multiple times focus_true_pred_true =0 focus_false_pred_true =0 focus_true_pred_false =0 focus_false_pred_false =0 argmax_more_than_half = 0 argmax_less_than_half =0 running_loss = 0.0 cnt=0 ep_lossi = [] iteration = desired_num // batch #training data set for i, data in enumerate(train_loader): inputs , labels , fore_idx = data #inputs,labels,fore_idx = inputs.to(device),labels.to(device),fore_idx.to(device) # zero the parameter gradients optimizer_what.zero_grad() optimizer_where.zero_grad() avg_inp,alphas = where(inputs) outputs = what(avg_inp) _, predicted = torch.max(outputs.data, 1) loss = criterion(outputs, labels) loss.backward() optimizer_what.step() optimizer_where.step() running_loss += loss.item() if cnt % 6 == 5: # print every 6 mini-batches print('[%d, %5d] loss: %.3f' %(epoch + 1, cnt + 1, running_loss / 6)) ep_lossi.append(running_loss/6) running_loss = 0.0 cnt=cnt+1 if epoch % 1 == 0: for j in range (batch): focus = torch.argmax(alphas[j]) 
if(alphas[j][focus] >= 0.5): argmax_more_than_half +=1 else: argmax_less_than_half +=1 if(focus == fore_idx[j] and predicted[j] == labels[j]): focus_true_pred_true += 1 elif(focus != fore_idx[j] and predicted[j] == labels[j]): focus_false_pred_true +=1 elif(focus == fore_idx[j] and predicted[j] != labels[j]): focus_true_pred_false +=1 elif(focus != fore_idx[j] and predicted[j] != labels[j]): focus_false_pred_false +=1 loss_curi.append(np.mean(ep_lossi)) #loss per epoch if (np.mean(ep_lossi) <= 0.01): break if epoch % 1 == 0: col1.append(epoch) col2.append(argmax_more_than_half) col3.append(argmax_less_than_half) col4.append(focus_true_pred_true) col5.append(focus_false_pred_true) col6.append(focus_true_pred_false) col7.append(focus_false_pred_false) #************************************************************************ #testing data set with torch.no_grad(): focus_true_pred_true =0 focus_false_pred_true =0 focus_true_pred_false =0 focus_false_pred_false =0 argmax_more_than_half = 0 argmax_less_than_half =0 for data in test_loader: inputs, labels , fore_idx = data #inputs,labels,fore_idx = inputs.to(device),labels.to(device),fore_idx.to(device) # print(inputs.shape,labels.shape) avg_inp,alphas = where(inputs) outputs = what(avg_inp) _, predicted = torch.max(outputs.data, 1) for j in range (batch): focus = torch.argmax(alphas[j])
col12.append(focus_true_pred_false) col13.append(focus_false_pred_false) torch.save(where.state_dict(),"where_model_epoch"+str(epoch)+".pt") torch.save(what.state_dict(),"what_model_epoch"+str(epoch)+".pt") print('Finished Training') # torch.save(where.state_dict(),"where_model_epoch"+str(nos_epochs)+".pt") # torch.save(what.state_dict(),"what_model_epoch"+str(epoch)+".pt") columns = ["epochs", "argmax > 0.5" ,"argmax < 0.5", "focus_true_pred_true", "focus_false_pred_true", "focus_true_pred_false", "focus_false_pred_false" ] df_train = pd.DataFrame() df_test = pd.DataFrame() df_train[columns[0]] = col1 df_train[columns[1]] = col2 df_train[columns[2]] = col3 df_train[columns[3]] = col4 df_train[columns[4]] = col5 df_train[columns[5]] = col6 df_train[columns[6]] = col7 df_test[columns[0]] = col1 df_test[columns[1]] = col8 df_test[columns[2]] = col9 df_test[columns[3]] = col10 df_test[columns[4]] = col11 df_test[columns[5]] = col12 df_test[columns[6]] = col13 df_train plt.plot(col1,col2, label='argmax > 0.5') plt.plot(col1,col3, label='argmax < 0.5') plt.legend(loc='center left', bbox_to_anchor=(1, 0.5)) plt.xlabel("epochs") plt.ylabel("training data") plt.title("On Training set") plt.show() plt.plot(col1,col4, label ="focus_true_pred_true ") plt.plot(col1,col5, label ="focus_false_pred_true ") plt.plot(col1,col6, label ="focus_true_pred_false ") plt.plot(col1,col7, label ="focus_false_pred_false ") plt.title("On Training set") plt.legend(loc='center left', bbox_to_anchor=(1, 0.5)) plt.xlabel("epochs") plt.ylabel("training data") plt.show() df_test plt.plot(col1,col8, label='argmax > 0.5') plt.plot(col1,col9, label='argmax < 0.5') plt.legend(loc='center left', bbox_to_anchor=(1, 0.5)) plt.xlabel("epochs") plt.ylabel("Testing data") plt.title("On Testing set") plt.show() plt.plot(col1,col10, label ="focus_true_pred_true ") plt.plot(col1,col11, label ="focus_false_pred_true ") plt.plot(col1,col12, label ="focus_true_pred_false ") plt.plot(col1,col13, label 
="focus_false_pred_false ") plt.title("On Testing set") plt.legend(loc='center left', bbox_to_anchor=(1, 0.5)) plt.xlabel("epochs") plt.ylabel("Testing data") plt.show() print(x[0]) for i in range(9): print(x[0,2*i:2*i+2]) correct = 0 total = 0 count = 0 flag = 1 focus_true_pred_true =0 focus_false_pred_true =0 focus_true_pred_false =0 focus_false_pred_false =0 argmax_more_than_half = 0 argmax_less_than_half =0 with torch.no_grad(): for data in train_loader: inputs , labels , fore_idx = data #inputs,labels,fore_idx = inputs.to(device),labels.to(device),fore_idx.to(device) # zero the parameter gradients optimizer_what.zero_grad() optimizer_where.zero_grad() avg_inp,alphas = where(inputs) outputs = what(avg_inp) _, predicted = torch.max(outputs.data, 1) for j in range(labels.size(0)): count += 1 focus = torch.argmax(alphas[j]) if alphas[j][focus] >= 0.5 : argmax_more_than_half += 1 else: argmax_less_than_half += 1 if(focus == fore_idx[j] and predicted[j] == labels[j]): focus_true_pred_true += 1 elif(focus != fore_idx[j] and predicted[j] == labels[j]): focus_false_pred_true += 1 elif(focus == fore_idx[j] and predicted[j] != labels[j]): focus_true_pred_false += 1 elif(focus != fore_idx[j] and predicted[j] != labels[j]): focus_false_pred_false += 1 total += labels.size(0) correct += (predicted == labels).sum().item() print('Accuracy of the network on the 3000 train images: %d %%' % ( 100 * correct / total)) print("total correct", correct) print("total train set images", total) print("focus_true_pred_true %d =============> FTPT : %d %%" % (focus_true_pred_true , (100 * focus_true_pred_true / total) ) ) print("focus_false_pred_true %d =============> FFPT : %d %%" % (focus_false_pred_true, (100 * focus_false_pred_true / total) ) ) print("focus_true_pred_false %d =============> FTPF : %d %%" %( focus_true_pred_false , ( 100 * focus_true_pred_false / total) ) ) print("focus_false_pred_false %d =============> FFPF : %d %%" % (focus_false_pred_false, ( 100 * 
focus_false_pred_false / total) ) ) print("argmax_more_than_half ==================> ",argmax_more_than_half) print("argmax_less_than_half ==================> ",argmax_less_than_half) print(count) print("="*100) correct = 0 total = 0 count = 0 flag = 1 focus_true_pred_true =0 focus_false_pred_true =0 focus_true_pred_false =0 focus_false_pred_false =0 argmax_more_than_half = 0 argmax_less_than_half =0 with torch.no_grad(): for data in test_loader: inputs , labels , fore_idx = data #inputs,labels,fore_idx = inputs.to(device),labels.to(device),fore_idx.to(device) # zero the parameter gradients optimizer_what.zero_grad() optimizer_where.zero_grad() avg_inp,alphas = where(inputs) outputs = what(avg_inp) _, predicted = torch.max(outputs.data, 1) for j in range(labels.size(0)): focus = torch.argmax(alphas[j]) if alphas[j][focus] >= 0.5 : argmax_more_than_half += 1 else: argmax_less_than_half += 1 if(focus == fore_idx[j] and predicted[j] == labels[j]): focus_true_pred_true += 1 elif(focus != fore_idx[j] and predicted[j] == labels[j]): focus_false_pred_true += 1 elif(focus == fore_idx[j] and predicted[j] != labels[j]): focus_true_pred_false += 1 elif(focus != fore_idx[j] and predicted[j] != labels[j]): focus_false_pred_false += 1 total += labels.size(0) correct += (predicted == labels).sum().item() print('Accuracy of the network on the 1000 test images: %d %%' % ( 100 * correct / total)) print("total correct", correct) print("total test set images", total) print("focus_true_pred_true %d =============> FTPT : %d %%" % (focus_true_pred_true , (100 * focus_true_pred_true / total) ) ) print("focus_false_pred_true %d =============> FFPT : %d %%" % (focus_false_pred_true, (100 * focus_false_pred_true / total) ) ) print("focus_true_pred_false %d =============> FTPF : %d %%" %( focus_true_pred_false , ( 100 * focus_true_pred_false / total) ) ) print("focus_false_pred_false %d =============> FFPF : %d %%" % (focus_false_pred_false, ( 100 *
print("argmax_more_than_half ==================> ",argmax_more_than_half) print("argmax_less_than_half ==================> ",argmax_less_than_half) ```
# Project: Wrangle and Analyze Data

```
import pandas as pd
import numpy as np
from twython import Twython
import requests
import json
import time
import matplotlib.pyplot as plt
import seaborn as sns
from wordcloud import WordCloud, STOPWORDS
from PIL import Image
import urllib
```

## Data Gathering

In the cells below, **all** three pieces of data for this project were gathered and loaded into the notebook.

1. Directly download the WeRateDogs Twitter archive data (twitter_archive_enhanced.csv)

```
# Supplied file
archive = pd.read_csv('twitter-archive-enhanced.csv')
```

2. Use the Requests library to download the tweet image prediction (image_predictions.tsv)

```
# Requesting tweet image predictions
with open('image_predictions.tsv', 'wb') as file:
    image_predictions = requests.get('https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv', auth=('user', 'pass'))
    file.write(image_predictions.content)

# Reading image predictions
predictions = pd.read_csv('image_predictions.tsv', sep='\t')
```

3. Use the Tweepy library to query additional data via the Twitter API (tweet_json.txt)

```
#Use Python's Tweepy library and store each tweet's entire set of JSON data in a file called tweet_json.txt.
import tweepy

consumer_key = '--'
consumer_secret = '--'
access_token = '--'
access_secret = '--'

auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_secret)
api = tweepy.API(auth, wait_on_rate_limit=True, wait_on_rate_limit_notify=True)

collected = []
not_collected = []
with open('tweet_json.txt', 'w') as file:
    for tweet_id in list(archive['tweet_id']):
        try:
            tweet_status = api.get_status(tweet_id, tweet_mode='extended')
            json.dump(tweet_status._json, file)
            file.write('\n')
            collected.append(tweet_id)
        except Exception as e:
            not_collected.append(tweet_id)

#Reading JSON content as pandas dataframe
tweets = pd.read_json('tweet_json.txt', lines = True, encoding='utf-8')
```

## Assessing Data

In this section, **nine (9) quality issues and five (5) tidiness issues** were detected and documented. **Both** visual and programmatic assessment were used to assess the data.

```
# Inspect the gathered archive data
archive.head()
archive.tail()
archive.shape
archive.info()
archive.describe()

# Inspect the gathered predictions data
predictions.head()
predictions.tail()
predictions.shape
predictions.info()
predictions.describe()

# Inspect the gathered tweets data
tweets.head()
tweets.tail()
tweets.shape
tweets.info()
tweets.describe()
```

### Quality issues

#### Archive

1. [The timestamp field is in string format (object) and tweet_id is in int64](#1)
2. [There are only 181 retweets (retweeted_status_id, retweeted_status_user_id, retweeted_status_timestamp)](#2)
3. [There are only 78 replies (in_reply_to_status_id, in_reply_to_user_id)](#3)
4. [There are missing values in the column expanded_urls](#4)
5. [Column name floofer should be spelled 'floof'](#5)
6. [Dogs with no name in the description have given names of "a", "an" and "None" instead of "NaN"](#6)
7. [In the column rating_denominator there are votes greater than 10](#7)
8. [Drop unnecessary columns](#8)

#### Predictions

9.
[The types of dogs in columns p1, p2, and p3 have some lowercase and uppercase letters](#9)
10. [The tweet_id field is in int64, should be in string format](#10)

#### Tweets

11. [Rename the column 'id' to 'tweet_id' to facilitate merging](#11)
12. [Clean up text column to show only the text](#12)

### Tidiness issues

#### Archive

1. [Several columns ("doggo", "floofer", "pupper", "puppo") represent the same category, but we need only one column to represent this classification](#a)
2. [Merge all tables to enable the analysis](#b)

## Cleaning Data

In this section, **all** of the issues documented while assessing are cleaned.

```
# Make copies of original pieces of data
archive_clean = archive.copy()
predictions_clean = predictions.copy()
tweets_clean = tweets.copy()
```

### Quality issues

### Issue #1: Erroneous data types <a id="1"></a>

#### Define: The timestamp field is in string format (object) and tweet_id is in int64

#### Code

```
#change the dtype of column timestamp from object to datetime
archive_clean.timestamp = archive_clean.timestamp.astype('datetime64')
archive_clean.tweet_id = archive_clean.tweet_id.astype(str)
```

#### Test

```
#Check for changes
archive_clean.info()
```

### Issue #2: Missing records <a id="2"></a>

#### Define: There are only 181 retweets (retweeted_status_id, retweeted_status_user_id, retweeted_status_timestamp)

#### Code

```
#Use drop function to drop the unnecessary columns
archive_clean = archive_clean.drop(['retweeted_status_id', 'retweeted_status_user_id', 'retweeted_status_timestamp'], axis=1)
```

#### Test

```
#Check for changes
archive_clean.head()
```

### Issue #3: Missing records <a id="3"></a>

#### Define: There are only 78 replies (in_reply_to_status_id, in_reply_to_user_id)

#### Code

```
#Use drop function to drop the unnecessary columns
archive_clean = archive_clean.drop(['in_reply_to_status_id', 'in_reply_to_user_id'], axis=1)
```

#### Test

```
#Check for changes
archive_clean.head()
```

### Issue #4: Missing records <a id="4"></a>

#### Define: There are missing values in the column expanded_urls

#### Code

```
#Use drop function to drop expanded_urls. We won't use this column for the analysis
archive_clean = archive_clean.drop(['expanded_urls'], axis=1)
```

#### Test

```
#Check for changes
archive_clean.head()
```

### Issue #5: Correct the column name <a id="5"></a>

#### Define: Column name floofer should be spelled 'floof'

#### Code

```
# Rename the column 'floofer'
archive_clean = archive_clean.rename(columns={'floofer':'floof'})
```

#### Test

```
#Check for changes
archive_clean.head()

archive_clean.floof = archive_clean['floof'].map({'floofer':'floof'}, na_action=None)
archive_clean
```

### Issue #6: Different inputs for the same categories <a id="6"></a>

#### Define: Dogs with no name in the description have given names of "a", "an" and "None" instead of "NaN"

#### Code

```
# Replace the values 'None', 'a' and 'an' with NaN
archive_clean = archive_clean.replace('None', np.nan)
archive_clean = archive_clean.replace('a', np.nan)
archive_clean = archive_clean.replace('an', np.nan)
```

#### Test

```
#Check for changes
archive_clean.name.value_counts()
```

### Issue #7: There are no delimitations for the rating denominator <a id="7"></a>

#### Define: In the column rating_denominator there are votes greater than 10

#### Code

```
#Keep only the rows where rating_denominator is 10
archive_clean = archive_clean[archive_clean.rating_denominator == 10]
```

#### Test

```
#Check for changes
archive_clean
```

### Issue #8: Unnecessary columns <a id="8"></a>

#### Define: Drop unnecessary columns

#### Code

```
#Use drop function to drop source column
archive_clean.drop(columns='source', inplace=True)
```

#### Test

```
#Check for change
archive_clean.head()
```

### Issue #9: Different letter cases <a id="9"></a>

#### Define: The types of dogs in columns p1, p2, and p3 have some lowercase and
uppercase letters

#### Code

```
#Convert all the dog names to lowercase letters
predictions_clean['p1'] = predictions_clean['p1'].str.lower()
predictions_clean['p2'] = predictions_clean['p2'].str.lower()
predictions_clean['p3'] = predictions_clean['p3'].str.lower()
```

#### Test

```
#Check for changes
predictions_clean.p1.head()
```

### Issue #10: Different data type formats <a id="10"></a>

#### Define: The tweet_id field is in int64, should be in string format

#### Code

```
#change the dtype of column tweet_id from int64 to string format
predictions_clean.tweet_id = predictions_clean.tweet_id.astype(str)
tweets_clean.id = tweets_clean.id.astype(str)
```

#### Test

```
#Check for changes
predictions_clean.info()

#Check for changes
tweets_clean.info()
```

### Issue #11: Different column names for the same content <a id="11"></a>

#### Define: Rename the column 'id' to 'tweet_id' to facilitate merging

#### Code

```
#Use rename() function to rename the column
tweets_clean = tweets_clean.rename(columns={'id':'tweet_id'})
```

#### Test

```
#Check for changes
tweets_clean.head()
```

### Issue #12: Column Text has multiple variables <a id="12"></a>

#### Define: Clean up text column to show only the text

#### Code

```
#Remove url link
archive_clean['text'] = archive_clean.text.str.replace(r"http\S+", "")
archive_clean['text'] = archive_clean.text.str.strip()
```

#### Test

```
archive_clean['text'][0]
```

### Tidiness issues

### Issue #1: Unify the dog classes <a id="a"></a>

#### Define: Several columns ("doggo", "floofer", "pupper", "puppo") represent the same category, but we need only one column to represent this classification

#### Code

```
#Use loc function to add a new column to represent the dog stage
archive_clean.loc[archive_clean['doggo'] == 'doggo', 'stage'] = 'doggo'
archive_clean.loc[archive_clean['floof'] == 'floof', 'stage'] = 'floof'
archive_clean.loc[archive_clean['pupper'] == 'pupper', 'stage'] = 'pupper'
archive_clean.loc[archive_clean['puppo'] == 'puppo', 'stage'] = 'puppo'
```

#### Test

```
#Check for changes
archive_clean.head()
```

#### Code

```
#Dropping the columns: doggo, floof, pupper and puppo
archive_clean = archive_clean.drop(['doggo', 'floof', 'pupper', 'puppo'], axis = 1)
```

#### Test

```
#Check the final change in the dog stages
archive_clean.info()
```

### Issue #2: Separated tables <a id="b"></a>

#### Define: Merge all tables to enable the analysis

#### Code

```
#Merge the archive_clean and tweets_clean tables
merge_df = archive_clean.join(tweets_clean.set_index('tweet_id'), on='tweet_id')
merge_df.head()
```

#### Test

```
#Check the new df
merge_df.info()

#Join the merge_df to the predictions_clean table
twitter_master = merge_df.join(predictions_clean.set_index('tweet_id'), on='tweet_id')
twitter_master.head()
twitter_master.info()
```

#### Code

```
#Filter the columns for further analysis
twitter_master_clean = twitter_master.filter(['tweet_id','timestamp','text', 'rating_numerator', 'rating_denominator','name','stage','retweet_count', 'favorite_count', 'jpg_url','img_num', 'p1', 'p1_conf','p1_dog', 'p2', 'p2_conf', 'p2_dog', 'p3', 'p3_conf', 'p3_dog'])
twitter_master_clean.head()
```

## Storing Data

Save the gathered, assessed, and cleaned master dataset to a CSV file named "twitter_archive_master.csv".

```
#store data with to_csv function
twitter_master_clean.to_csv('twitter_archive_master.csv', index = False)
```

## Analyzing and Visualizing Data

In this section, the wrangled data is analyzed and visualized.
At least **three (3) insights and one (1) visualization** are produced.

```
#Make a copy
rate_dogs = twitter_master_clean.copy()
rate_dogs.info()

#Select missing values from the merged table to drop later
drop = rate_dogs[pd.isnull(rate_dogs['retweet_count'])].index
drop

#Drop missing data from the merged table
rate_dogs.drop(index=drop, inplace=True)

#Check the changes
rate_dogs.info()

#Investigating the time of the tweets
rate_dogs.timestamp.min(), rate_dogs.timestamp.max()

#Set the index to the datetime
rate_dogs = rate_dogs.set_index('timestamp')

#Look for information
rate_dogs.describe()

rate_dogs.favorite_count.max()/rate_dogs.retweet_count.max()

#See if there are any correlations
rate_dogs.corr()

#Plot the correlations
sns.pairplot(rate_dogs, vars = ['rating_numerator', 'retweet_count', 'favorite_count', 'p1_conf'], diag_kind = 'kde', plot_kws = {'alpha': 0.9});

#Check the most favorited tweet
rate_dogs.sort_values(by = 'favorite_count', ascending = False).head(3)

#Check the most retweeted tweet
rate_dogs.sort_values(by = 'retweet_count', ascending = False).head(3)

#Check for the most common dog stages
rate_dogs.stage.value_counts()

#Check for the most common dog breeds
rate_dogs.p1.value_counts().head(10)

#Plot the most common dog breeds
plt.barh(rate_dogs.p1.value_counts().head(10).index, rate_dogs.p1.value_counts().head(10), color = 'g', alpha=0.9)
plt.xlabel('Number of tweets', fontsize = 10)
plt.title('Top 10 dog breeds by tweet count', fontsize = 14)
plt.gca().invert_yaxis()
plt.show();

#Group favorite count by dog breed and see which are the most favorited
top10 = rate_dogs.favorite_count.groupby(rate_dogs['p1']).sum().sort_values(ascending = False)
top10.head(10)

#Plot the most favorited dog breeds
plt.barh(top10.head(10).index, top10.head(10), color = 'g', alpha=0.9)
plt.xlabel('Favorite count', fontsize = 10)
plt.title('Top 10 favorite dog breeds', fontsize = 14)
plt.gca().invert_yaxis()
plt.show();

#Plot the most favorited dog stages
favorite_count_stages = rate_dogs.groupby('stage').favorite_count.mean().sort_values()
favorite_count_stages.plot(x="stage", y='favorite_count', kind='barh', color='g', alpha=0.9)
plt.xlabel('Favorite count', fontsize = 10)
plt.ylabel('Dog stages', fontsize = 10)
plt.title('Average favorite counts by dog stages', fontsize = 14)
```

### Insights:

1. The number of people who favorite the posts is 2.039 times higher than the number who retweet them. This shows a preference for favoriting posts over retweeting them.
2. There is a strong correlation between favorite counts and retweets; more precisely, the correlation is 0.801345. To illustrate, the most retweeted and favorited dog is a doggo labrador retriever, who received 72474 retweets and 147742 favorite votes. His ID is 744234799360020481.
3. The most common dog breeds are golden retriever, labrador retriever and pembroke, respectively. They receive the most favorite counts too.

### Visualizations

```
#Plot a scatter plot to verify a possible trend in the amount of favorite counts over time
plt.scatter(rate_dogs.index, rate_dogs['favorite_count'])
plt.title('Daily tweets by favorite count', fontsize = 14)
plt.xlabel('Days', fontsize = 14)
plt.ylabel('Favorite count', fontsize = 14)
plt.show();

#Plot a Word Cloud with the texts written
tweets = np.array(rate_dogs.text)
list1 = []
for tweet in tweets:
    list1.append(tweet.replace("\n",""))

mask = np.array(Image.open(requests.get('https://img.favpng.com/23/21/16/dog-vector-graphics-bengal-cat-illustration-clip-art-png-favpng-RWmY6zWcLaCxWurMaPEpZpARA.jpg', stream=True).raw))
text = list1

def gen_wc(text, mask):
    word_cloud = WordCloud(width = 700, height = 400, background_color='white', mask=mask).generate(str(text))
    plt.figure(figsize=(16,10), facecolor = 'white', edgecolor='red')
    plt.imshow(word_cloud)
    plt.axis('off')
    plt.tight_layout(pad=0)
    plt.show()

gen_wc(text, mask)

# The code used above was modeled from this blog on how to generate a word cloud
in python. #https://blog.goodaudience.com/how-to-generate-a-word-cloud-of-any-shape-in-python-7bce27a55f6e ``` ### Insights: 1. We can have a visual look in the Daily tweets by favorite count chart and verify a positive trend in the amount of favorite tweets over time. 2. In the cloud chart we can see that the word pooper, dog, pup and meet are the most frequently written.
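The correlation figure quoted in the insights can be reproduced with pandas' built-in Pearson correlation. A minimal sketch on made-up numbers (the toy frame below stands in for `rate_dogs`; the real values come from the archive):

```python
import pandas as pd

# Toy stand-in for the rate_dogs frame (illustrative values only)
toy = pd.DataFrame({'retweet_count': [10, 20, 30, 40],
                    'favorite_count': [25, 45, 70, 85]})

# Series.corr defaults to Pearson correlation
corr = toy['retweet_count'].corr(toy['favorite_count'])
print(round(corr, 6))
```

On the real data, the same one-liner yields the 0.801345 reported above.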
github_jupyter
```
# NOTE: pandas, numpy, re, and reload were used without imports in the
# original notebook; they are added here so the cell is self-contained.
import re
from importlib import reload

import numpy as np
import pandas as pd

import sites_positionWithinProteins as pos
reload(pos)

fn_fasta = r"/Volumes/Speedy/FASTA/HUMAN20150706.fasta"
fn_evidence = r"/Users/dblyon/CloudStation/CPR/BTW_sites/sites_positionsWithinProteins_input_v2.txt"
fn_fasta

fa = pos.Fasta()
fa.set_file(fn_fasta)
fa.parse_fasta()

COLUMN_MODSEQ = "Modified sequence"
COLUMN_PROTEINS = "Proteins"
COLUMN_MODPROB = "Acetyl (K) Probabilities"
COLUMN_ID = "id"
MODTYPE = "(ac)"
##### new columns
COLUMN_SITES = "Sites"
COLUMN_PROB = "Probability"
remove_n_terminal_acetylation = True

df = pd.read_csv(fn_evidence, sep='\t', low_memory=False)
df.dropna(axis=0, how="all", inplace=True)
df["pepseq"] = df[COLUMN_MODSEQ].apply(lambda aaseq: aaseq.replace("_", "").replace(MODTYPE, ""))
df["pepseq"] = df["pepseq"].apply(pos.remove_modification_in_parentheses)
df["start_pos"] = df.apply(pos.get_start_position_of_sequence_proteinGroups, args=(fa, ), axis=1)
df.head()
df = df.head()
df
df.loc[1, "start_pos"] = "413;nan;45"

pepseq_mod = "_IVEM(ox)STSK(ac)TGK(ac)_"
sites_list = pos.parse_sites_within_pepseq(pepseq_mod, [])
sites_list

COLUMN_MODSEQ_index = df.columns.tolist().index(COLUMN_MODSEQ) + 1  # +1 because itertuples yields the index first
start_pos_index = df.columns.tolist().index("start_pos") + 1  # +1 because itertuples yields the index first
Sites_pos_within_pep_list = []
for row in df.itertuples():
    mod_seq = row[COLUMN_MODSEQ_index]
    # one start position per protein in the proteinGroup, ";"-separated
    start_pos_list = row[start_pos_index].split(";")
    sites_list = pos.parse_sites_within_pepseq(mod_seq, [])
    sites_per_row = ""
    for protein_start_pos in start_pos_list:
        try:
            protein_start_pos = int(float(protein_start_pos))
        except ValueError:
            sites_per_row += "(nan)" + ";"
            continue
        sites_per_protein = "(" + "+".join([str(site + protein_start_pos) for site in sites_list]) + ")"
        sites_per_row += sites_per_protein + ";"
    Sites_pos_within_pep_list.append(sites_per_row[:-1])
df["Sites_pos_within_pep"] = Sites_pos_within_pep_list
df.head()

def add_length_of_peptide_2_start_pos(row):
    # This function was left unfinished in the original notebook: it only
    # fetched the modified sequence and returned nothing. Returning that
    # value is a placeholder so the apply() below still runs.
    pepseq = row["Modified sequence"]
    return pepseq

df["Positions within Proteins"] = df.apply(add_length_of_peptide_2_start_pos, axis=1)

aaseq = fa.an2aaseq_dict["I3L397"]
aaseq
aaseq[39:39+11]
len("IVEMSTSKTGK")

# import re
# my_regex = re.compile(r"(\(\w+\))")
# df["pepseq_mod"] = df[COLUMN_MODSEQ].apply(pos.remove_modifications_not_MODTYPE, args=(my_regex, remove_n_terminal_acetylation, ))
# df = pos.add_COLUMN_SITES_and_PROB_2_df(df)

l = [1, 3, np.nan]
";".join([str(ele) for ele in l])

# NOTE: the settings below were referenced but never defined in the original
# cell; the values here are placeholders (0 disables each filter, and the
# output path / column name are assumptions).
probability_threshold = 0
conventional_counting = 0
COLUMN_LEADRAZPROT = "Leading razor protein"
fn_output = "sites_output.txt"

# add sites and probabilities (the helper calls below were missing their
# "pos." prefix in the original cell)
my_regex = re.compile(r"(\(\w+\))")
df["pepseq_mod"] = df[COLUMN_MODSEQ].apply(pos.remove_modifications_not_MODTYPE, args=(my_regex, remove_n_terminal_acetylation, ))
df = pos.add_COLUMN_SITES_and_PROB_2_df(df)
df = df[df[COLUMN_SITES].notnull()]
if probability_threshold > 0:
    df = df[df[COLUMN_PROB].apply(pos.is_any_above_threshold, args=(probability_threshold, ))]
if conventional_counting > 0:
    df[COLUMN_SITES] = df[COLUMN_SITES].apply(pos.start_counting_from_num, args=(conventional_counting, ))
    # lambda num_string: ";".join([str(int(float(num))) + conventional_counting for num in num_string.split(";")])

# keep only relevant columns and write to file
df2write = df[[COLUMN_ID, COLUMN_MODSEQ, COLUMN_MODPROB, COLUMN_LEADRAZPROT, COLUMN_SITES, COLUMN_PROB]]
df2write[COLUMN_SITES] = df2write[COLUMN_SITES].apply(lambda ele: ele.replace(".0", ""))
df2write.to_csv(fn_output, sep='\t', header=True, index=False)
```
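The site-position logic above hinges on `pos.parse_sites_within_pepseq`, whose source is not shown. A self-contained sketch of the idea: strip the flanking underscores, drop every parenthesized modification except the one of interest, and record the residue position each remaining annotation follows (positions are 1-based here; the exact signature and indexing convention of the original helper are assumptions).

```python
import re

def sites_within_peptide(pepseq_mod, modtype="(ac)"):
    """Return 1-based positions of `modtype` within the bare peptide sequence."""
    seq = pepseq_mod.strip("_")
    # drop every parenthesized modification except the one of interest
    seq = "".join(p for p in re.split(r"(\(\w+\))", seq)
                  if not (p.startswith("(") and p != modtype))
    sites, residue_pos, i = [], 0, 0
    while i < len(seq):
        if seq[i] == "(":              # a modtype annotation
            sites.append(residue_pos)  # it modifies the preceding residue
            i = seq.index(")", i) + 1
        else:                          # a plain amino-acid residue
            residue_pos += 1
            i += 1
    return sites

print(sites_within_peptide("_IVEM(ox)STSK(ac)TGK(ac)_"))  # → [8, 11]
```

For the example peptide, the two acetylated lysines are the 8th and 11th residues of `IVEMSTSKTGK`, which matches the positions the notebook goes on to offset by each protein's start position.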
github_jupyter
# Predicting Student Admissions with Neural Networks
In this notebook, we predict student admissions to graduate school at UCLA based on three pieces of data:
- GRE Scores (Test)
- GPA Scores (Grades)
- Class rank (1-4)

The dataset originally came from here: http://www.ats.ucla.edu/

## Loading the data
To load the data and format it nicely, we will use two very useful packages called Pandas and Numpy. You can read the documentation here:
- https://pandas.pydata.org/pandas-docs/stable/
- https://docs.scipy.org/

```
# Importing pandas and numpy
import pandas as pd
import numpy as np

# Reading the csv file into a pandas DataFrame
data = pd.read_csv('student_data.csv')

# Printing out the first rows of our data
data.head()
```

## Plotting the data
First let's make a plot of our data to see how it looks. In order to have a 2D plot, let's ignore the rank.

```
# %matplotlib inline
import matplotlib.pyplot as plt

# Function to help us plot
def plot_points(data):
    X = np.array(data[["gre","gpa"]])
    y = np.array(data["admit"])
    admitted = X[np.argwhere(y==1)]
    rejected = X[np.argwhere(y==0)]
    plt.scatter([s[0][0] for s in rejected], [s[0][1] for s in rejected], s = 25, color = 'red', edgecolor = 'k')
    plt.scatter([s[0][0] for s in admitted], [s[0][1] for s in admitted], s = 25, color = 'cyan', edgecolor = 'k')
    plt.xlabel('Test (GRE)')
    plt.ylabel('Grades (GPA)')

# Plotting the points
plot_points(data)
plt.show()
```

Roughly, it looks like the students with high test and grade scores were admitted, while the ones with low scores were not, but the data is not as nicely separable as we had hoped. Maybe it would help to take the rank into account? Let's make 4 plots, one for each rank.
```
# Separating the ranks
data_rank1 = data[data["rank"]==1]
data_rank2 = data[data["rank"]==2]
data_rank3 = data[data["rank"]==3]
data_rank4 = data[data["rank"]==4]

# Plotting the graphs
plot_points(data_rank1)
plt.title("Rank 1")
plt.show()
plot_points(data_rank2)
plt.title("Rank 2")
plt.show()
plot_points(data_rank3)
plt.title("Rank 3")
plt.show()
plot_points(data_rank4)
plt.title("Rank 4")
plt.show()
```

This looks more promising: it seems that the lower the rank, the higher the acceptance rate. Let's use the rank as one of our inputs. In order to do this, we should one-hot encode it.

## TODO: One-hot encoding the rank
Use the `get_dummies` function in pandas in order to one-hot encode the data.

Hint: To drop a column, it's suggested that you use `one_hot_data`[.drop( )](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.drop.html).

```
# TODO: Make dummy variables for rank and concat existing columns
one_hot_data = pd.get_dummies(data, columns=['rank'])

# Passing columns=['rank'] already replaces the original rank column
# with the dummy columns, so no explicit drop() is needed.

# Print the first 10 rows of our data
one_hot_data[:10]
```

## TODO: Scaling the data
The next step is to scale the data. We notice that the range for grades is 1.0-4.0, whereas the range for test scores is roughly 200-800, which is much larger. This mismatch in scales makes the data hard for a neural network to handle. Let's fit our two features into a range of 0-1, by dividing the grades by 4.0 and the test scores by 800.

```
# Making a copy of our data
processed_data = one_hot_data[:]

# TODO: Scale the columns
processed_data['gre'] = processed_data['gre']/800
processed_data['gpa'] = processed_data['gpa']/4.0

# Printing the first 10 rows of our processed data
processed_data[:10]
```

## Splitting the data into Training and Testing
In order to test our algorithm, we'll split the data into a Training and a Testing set. The size of the testing set will be 10% of the total data.
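Before moving on, it is worth confirming how `pd.get_dummies` behaved in the one-hot step: when the `columns=` argument is passed, the named column is replaced by its dummy columns, so no separate drop is required. A toy check (the small frame below only mimics the admissions columns):

```python
import pandas as pd

# Toy stand-in for the admissions data (column names assumed to match)
toy = pd.DataFrame({'admit': [0, 1], 'gre': [520, 760],
                    'gpa': [2.9, 3.8], 'rank': [3, 1]})
one_hot = pd.get_dummies(toy, columns=['rank'])
print(one_hot.columns.tolist())
```

The printed column list contains `rank_1` and `rank_3` (one dummy per value present) and no longer contains `rank`.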
```
sample = np.random.choice(processed_data.index, size=int(len(processed_data)*0.9), replace=False)
train_data, test_data = processed_data.iloc[sample], processed_data.drop(sample)

print("Number of training samples is", len(train_data))
print("Number of testing samples is", len(test_data))
print(train_data[:10])
print(test_data[:10])
```

## Splitting the data into features and targets (labels)
Now, as a final step before the training, we'll split the data into features (X) and targets (y).

```
features = train_data.drop('admit', axis=1)
targets = train_data['admit']
features_test = test_data.drop('admit', axis=1)
targets_test = test_data['admit']

print(features[:10])
print(targets[:10])
```

## Training the 1-layer Neural Network
The following function trains the 1-layer neural network. First, we'll write some helper functions.

```
# Activation (sigmoid) function
def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def sigmoid_prime(x):
    return sigmoid(x) * (1-sigmoid(x))

def error_formula(y, output):
    return - y*np.log(output) - (1 - y) * np.log(1-output)
```

# TODO: Backpropagate the error
Now it's your turn to shine. Write the error term. Remember that this is given by the equation

$$ (y-\hat{y})x $$

for the binary cross-entropy loss function, and

$$ (y-\hat{y})\,\sigma'(h)\,x $$

with $h = x \cdot w$, for the mean squared error loss.
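As a quick sanity check on the first formula (this derivation is not in the original notebook), note that with $\hat{y} = \sigma(h)$, $h = x \cdot w$, and $\sigma'(h) = \hat{y}(1-\hat{y})$, the cross-entropy error $E = -y\ln\hat{y} - (1-y)\ln(1-\hat{y})$ gives

$$
-\frac{\partial E}{\partial w}
= \left(\frac{y}{\hat{y}} - \frac{1-y}{1-\hat{y}}\right)\hat{y}(1-\hat{y})\,x
= \bigl(y(1-\hat{y}) - (1-y)\hat{y}\bigr)\,x
= (y-\hat{y})\,x,
$$

so the $\sigma'(h)$ factor cancels and the cross-entropy error term is simply $(y-\hat{y})x$.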
```
# TODO: Write the error term formula
def error_term_formula(x, y, output):
    # For the mean squared error, the sigmoid derivative is evaluated at
    # h = x.w, which equals output * (1 - output).
    return (y - output) * output * (1 - output) * x

# Neural Network hyperparameters
epochs = 1000
learnrate = 0.0001

# Training function
def train_nn(features, targets, epochs, learnrate):

    # Use the same seed to make debugging easier
    np.random.seed(42)

    n_records, n_features = features.shape
    last_loss = None

    # Initialize weights
    weights = np.random.normal(scale=1 / n_features**.5, size=n_features)

    for e in range(epochs):
        del_w = np.zeros(weights.shape)
        for x, y in zip(features.values, targets):
            # Loop through all records; x is the input, y is the target

            # Activation of the output unit
            #   Notice we multiply the inputs and the weights here
            #   rather than storing h as a separate variable
            output = sigmoid(np.dot(x, weights))

            # The error term
            error_term = error_term_formula(x, y, output)

            # Accumulate the gradient descent step over all records
            del_w += error_term

        # Update the weights: the learning rate times the change in weights
        # (we don't have to divide by n_records since that is compensated
        # for by the learning rate)
        weights += learnrate * del_w #/ n_records

        # Printing out the training loss
        if e % (epochs / 10) == 0:
            out = sigmoid(np.dot(features, weights))
            loss = np.mean(error_formula(targets, out))
            print("Epoch:", e)
            if last_loss and last_loss < loss:
                print("Train loss: ", loss, "  WARNING - Loss Increasing")
            else:
                print("Train loss: ", loss)
            last_loss = loss
            print("=========")
    print("Finished training!")
    return weights

weights = train_nn(features, targets, epochs, learnrate)
```

## Calculating the Accuracy on the Test Data

```
# Calculate accuracy on test data
test_out = sigmoid(np.dot(features_test, weights))
predictions = test_out > 0.5
accuracy = np.mean(predictions == targets_test)
print("Prediction accuracy: {:.3f}".format(accuracy))
```
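Gradient code is easy to get wrong, so it helps to sanity-check `sigmoid_prime` numerically. This standalone snippet (not part of the original notebook) compares the analytic derivative to a central finite difference at a sample point:

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def sigmoid_prime(x):
    return sigmoid(x) * (1 - sigmoid(x))

# Central-difference approximation of the derivative at x0
x0, h = 0.7, 1e-5
numeric = (sigmoid(x0 + h) - sigmoid(x0 - h)) / (2 * h)
print(abs(numeric - sigmoid_prime(x0)) < 1e-8)  # → True
```

The same check can be pointed at `error_term_formula` by differentiating `error_formula` with respect to a single weight.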
github_jupyter
``` import pandas as pd auto_df = pd.read_csv("automatability.csv") #to transpose relative_emp_df = pd.read_csv("relativeEmployment.csv") #to transpose similar_df = pd.read_csv("newsimilarity.csv") wagechange_df = pd.read_csv("wageChange.csv") # all_csvs = [auto_df, relative_emp_df, similar_df, wagechange_df] # for csv in all_csvs: # auto_df.columns.values[0]='Job_Compared' # relative_emp_df.columns.values[0]='Job_Compared' # similar_df.columns.values[0]='Job_Compared' # wagechange_df.columns.values[0]='Job_Compared' auto_df = auto_df.transpose() auto_df.columns = auto_df.iloc[0] auto_df.drop(auto_df.index[0], inplace=True) relative_emp_df = relative_emp_df.transpose() relative_emp_df.columns = relative_emp_df.iloc[0] relative_emp_df.drop(relative_emp_df.index[0], inplace=True) # # auto_df=auto_df.set_index('Job Compared') # # relative_emp_df=relative_emp_df.set_index('Job Compared') # # similar_df=similar_df.set_index('Job Compared') # # wagechange_df = wagechange_df.set_index('Job Compared') # Creating an id/job name hash table # temp_sim = pd.melt(similar_df, id_vars=['Job_Compared'], var_name='job_selected', value_name='similarity') # temp_sim.columns =['job_compared','job_selected','similarity'] # temp_rec_therapist = temp_sim[temp_sim['job_selected']=='Recreational Therapists'] # temp_pre_export = temp_rec_therapist.reset_index() # temp_pre_export = temp_pre_export[['index', 'Job_Compared']] # temp_pre_export.columns=['id','job_name'] # temp_pre_export.to_csv("crosswalk.csv", index=False) # Similarity - setting up data struture temp_sim = pd.melt(similar_df, id_vars=['Job_Compared'], var_name='job_selected', value_name='similarity') temp_sim.columns =['job_compared','job_selected','similarity'] # Intermediate step of setting up hash table for job name/id # First we create the job hash table temp_rec_therapist = temp_sim[temp_sim['job_selected']=='Recreational Therapists'] temp_pre_export = temp_rec_therapist.reset_index() temp_pre_export = 
temp_pre_export[['index', 'job_compared']] temp_pre_export.columns=['id','job_name'] # Next we load in the job descriptive stats and load join these to the hash table jobs_descriptive = pd.read_csv("jobs.csv") jobs_descriptive_hashed = pd.merge(jobs_descriptive, temp_pre_export, left_on = "Occupation", right_on = "job_name") jobs_descriptive_hashed.columns=['to_delete', 'auto','wage','number','id','job_name'] jobs_descriptive_hashed = jobs_descriptive_hashed[['auto','wage','number','id','job_name']] # Exporting crosswalk table jobs_descriptive_hashed.to_csv("crosswalk.csv", index=False) # Creating melted csv of selected job vs. compared job similarities, with IDs hashed temp_sim = pd.merge(temp_sim,temp_pre_export, left_on='job_selected',right_on='job_name') temp_sim.columns=['job_compared','to_delete_0','similarity', 'id_selected','to_delete_1'] temp_sim = temp_sim[['job_compared', 'similarity', 'id_selected']] temp_sim = pd.merge(temp_sim,temp_pre_export, left_on='job_compared',right_on='job_name') temp_sim.columns = ['to_delete_0','similarity','id_compared','id_selected','to_delete_1'] temp_sim = temp_sim[['similarity','id_compared','id_selected']] temp_sim.to_csv("similarity.csv", index=False) # skills_raw.columns[451:500] # Additional Jobs # Old additional job to include in jobs stacked chart # 'Choreographers', # 'Dentists, General', # 'Registered Nurses', # 'Chiropractors', # 'Farmers, Ranchers, and Other Agricultural Managers', # 'Construction Managers', # 'Firefighters', # 'Geographers', # 'Heavy and Tractor-Trailer Truck Drivers', # 'Embalmers', # 'Pipelayers' # Old column names: ['choreographers','dentists','nurses','chiropractors','farmers', # 'construction_managers','firefighters','geographers','truckers','embalmers','pipelayers','skills'] jobs_to_keep=[] jobs_to_keep_renamed=[] jobs_to_keep = [ 'Choreographers', "Dentists, General", "Registered Nurses",\ "Chiropractors",\ 'Farmers, Ranchers, and Other Agricultural Managers', 'Construction Managers', 
"Firefighters",\ 'Geographers', 'Heavy and Tractor-Trailer Truck Drivers', 'Embalmers', 'Pipelayers', "Podiatrists", "Fabric and Apparel Patternmakers",\ "Clergy",\ "Makeup Artists, Theatrical and Performance",\ "Marriage and Family Therapists",\ "Chief Executives",\ "Art Directors",\ "Interior Designers",\ "Craft Artists",\ "Meeting, Convention, and Event Planners",\ "Veterinarians",\ "Writers and Authors",\ "Political Scientists",\ "Ship Engineers",\ "Emergency Medical Technicians and Paramedics",\ "Mathematicians",\ "Floral Designers",\ "Travel Guides",\ "Broadcast News Analysts",\ "Musicians and Singers",\ "Fitness Trainers and Aerobics Instructors",\ "Graphic Designers",\ "Childcare Workers",\ "Police and Sheriff's Patrol Officers",\ "Hairdressers, Hairstylists, and Cosmetologists",\ "Reporters and Correspondents",\ "Air Traffic Controllers",\ "Dancers",\ "Optometrists",\ "Physician Assistants",\ "Electricians",\ "Ambulance Drivers and Attendants, Except Emergency Medical Technicians",\ "Athletes and Sports Competitors",\ "Skincare Specialists",\ "Cooks, Private Household",\ "Funeral Attendants",\ "Actors",\ "Judges, Magistrate Judges, and Magistrates",\ "Economists",\ "Historians",\ "Dental Assistants",\ "Shoe and Leather Workers and Repairers",\ "Massage Therapists",\ "Millwrights",\ "Librarians",\ "Maids and Housekeeping Cleaners", "Bartenders", "Dishwashers", "Cooks, Fast Food", "Barbers", "Real Estate Sales Agents", "Proofreaders and Copy Markers"] # New column names for jobs to keep jobs_to_keep_renamed=["Choreographers", "Dentists", "Nurses", "Chiropractors", "Farmers", "Construction_Managers", "Firefighters", "Geographers", "Truck_drivers", "Embalmers", "Piplayers", "Podiatrists", "Fabric_Patternmakers", "Clergy", "Makeup_Artists", "Family_Therapists", "CEOs", "Art_Directors", "Interrior_Designers", "Craft_Artists", "Event_Planners", "Veterinarians", "Writers", "Political_Scientists", "Ship_Engineers", "Paramedics", "Mathematicians", "Florists", 
"Travel_Guides", "News_Analysts", "Musicians", "Fitness_Trainers", "Graphic_Designers", "Childcare_Workers", "Police_Officers", "Hairdressers", "Journalists", "Air_Traffic_Controllers", "Dancers", "Optometrists", "Physician_Assistants", "Electricians", "Ambulance_Drivers", "Athletes", "Skincare_Specialists", "Private_Cooks", "Funeral_Attendants", "Actors", "Judges", "Economists", "historians", "Dental_Assistants", "Cobblers", "Massage_Therapists", "Millwrights", "Librarians", "Maids", "Bartenders", "Dishwashers", "Fast_Food_Cooks", "Barbers", "Real_Estate_Agents", "Proofreaders", "skills"] #Note we're adding this because there will be a skill columns added to the relevant data frame # Adding in skills crosswalk skills_crosswalk = pd.read_csv("skills_raw.csv") skills_crosswalk.columns.values[0]='skill_type' skills_crosswalk.columns.values[1]='skill_name' skills_crosswalk = pd.melt(skills_crosswalk, id_vars=['skill_type','skill_name'], var_name='job', value_name='imp') skills_crosswalk = skills_crosswalk[skills_crosswalk['skill_type']!='ability'] all_skills = list(set(skills_crosswalk['skill_name'].tolist())) skills_crosswalk = pd.DataFrame(all_skills) skills_crosswalk['skill_id'] = range(0,68) skills_crosswalk.columns=['skill','skill_id'] skills_crosswalk.to_csv("crosswalk_skills.csv", index=False) # Adding in skills for each job skills_raw = pd.read_csv("skills_raw.csv") skills_raw.columns.values[0]='skill_type' skills_raw.columns.values[1]='skill_name' # Creating a skill name and type data frame skills_name_and_type=skills_raw.iloc[:, [0,1]] # Melting the data frame into a d3-friendly format skills_edited = pd.melt(skills_raw, id_vars=['skill_type','skill_name'], var_name='job', value_name='imp') skills_edited = skills_edited[skills_edited['skill_type']!='ability'] skills_edited = pd.merge(skills_edited, jobs_descriptive_hashed, left_on='job', right_on='job_name') skills_edited = skills_edited[['id','skill_name','imp']] skills_edited["rank"] = 
skills_edited.groupby("id")["imp"].rank("dense", ascending=False) skills_edited=skills_edited[skills_edited['rank']<6] # Sorting the skills df to get rid of the tied ranks skills_edited = skills_edited.sort_values(by=['id','rank','skill_name']) skills_edited['rank_final']=skills_edited.groupby("id")["imp"].rank("first", ascending=False) skills_edited=skills_edited[skills_edited['rank_final']<6] skills_edited.columns=['id_selected','skill','imp','to_delete','rank'] skills_edited=skills_edited[['id_selected','skill','imp','rank']] # Joining skills for each profession to skills crosswalk skills = pd.merge(skills_edited, skills_crosswalk, how='inner', left_on='skill', right_on='skill') skills=skills[['id_selected','imp','skill_id','rank']] skills = skills.sort_values(by=['id_selected','imp']) skills.to_csv("skills.csv", index=False) # Getting full skill list and importance for devs and truck drivers # (Getting a column after it's been made into an index/can't be accessed in another way) skills_list = pd.DataFrame(skills_raw.iloc[:, 1]) # Creating dev + trucker dF dev_and_trucker_skills=skills_raw[['Software Developers, Applications','Heavy and Tractor-Trailer Truck Drivers']] dev_and_trucker_skills['skill']=skills_list # Renaming the columns in the data frame to reduce size dev_and_trucker_skills.columns=['devs','truckers','skills'] # Filtering out all abilities dev_and_trucker_skills = pd.merge(dev_and_trucker_skills, skills_name_and_type, left_on ='skills', right_on='skill_name') dev_and_trucker_skills = dev_and_trucker_skills[dev_and_trucker_skills['skill_type']!='ability'] # Devs and truckers skills to CSV dev_and_trucker_skills.to_csv("devs_and_truckers_skills.csv", index=False) # ten_profession_skills=skills_raw[jobs_to_keep] # ten_profession_skills['skill']=skills_list # ten_profession_skills.columns=jobs_to_keep_renamed # # Getting difference scores # 
ten_profession_skills['choreographers_difference']=abs(ten_profession_skills['truckers']-ten_profession_skills['choreographers']) # ten_profession_skills['dentists_difference']=abs(ten_profession_skills['truckers']-ten_profession_skills['dentists']) # ten_profession_skills['nurses_difference']=abs(ten_profession_skills['truckers']-ten_profession_skills['nurses']) # ten_profession_skills['chiropractors_difference']=abs(ten_profession_skills['truckers']-ten_profession_skills['chiropractors']) # ten_profession_skills['farmers_difference']=abs(ten_profession_skills['truckers']-ten_profession_skills['farmers']) # ten_profession_skills['construction_managers_difference']=abs(ten_profession_skills['truckers']-ten_profession_skills['construction_managers']) # ten_profession_skills['firefighters_difference']=abs(ten_profession_skills['truckers']-ten_profession_skills['firefighters']) # ten_profession_skills['geographers_difference']=abs(ten_profession_skills['truckers']-ten_profession_skills['geographers']) # ten_profession_skills['embalmers_difference']=abs(ten_profession_skills['truckers']-ten_profession_skills['embalmers']) # ten_profession_skills['pipelayers_difference']=abs(ten_profession_skills['truckers']-ten_profession_skills['pipelayers']) # ten_profession_skills.to_csv("ten_profession_skills.csv", index=False) # choreographer = ten_profession_skills[['choreographers','truckers','skills','choreographers_difference']] # dentists = ten_profession_skills[['dentists','truckers','skills','dentists_difference']] # nurses = ten_profession_skills[['nurses','truckers','skills','nurses_difference']] # chiropractors = ten_profession_skills[['chiropractors','truckers','skills','chiropractors_difference']] # farmers = ten_profession_skills[['farmers','truckers','skills','farmers_difference']] # construction_managers = ten_profession_skills[['construction_managers','truckers','skills','construction_managers_difference']] # firefighters = 
ten_profession_skills[['firefighters','truckers','skills','firefighters_difference']] # geographers = ten_profession_skills[['geographers','truckers','skills','geographers_difference']] # embalmers = ten_profession_skills[['embalmers','truckers','skills','embalmers_difference']] # pipelayers = ten_profession_skills[['pipelayers','truckers','skills','pipelayers_difference']] # choreographer.columns=['job_compared','job_selected','skills','difference'] # dentists.columns=['job_compared','job_selected','skills','difference'] # nurses.columns=['job_compared','job_selected','skills','difference'] # chiropractors.columns=['job_compared','job_selected','skills','difference'] # farmers.columns=['job_compared','job_selected','skills','difference'] # construction_managers.columns=['job_compared','job_selected','skills','difference'] # firefighters.columns=['job_compared','job_selected','skills','difference'] # geographers.columns=['job_compared','job_selected','skills','difference'] # embalmers.columns=['job_compared','job_selected','skills','difference'] # pipelayers.columns=['job_compared','job_selected','skills','difference'] # list_of_dfs=[choreographer,dentists,nurses,chiropractors,farmers,construction_managers, # firefighters,geographers,embalmers,pipelayers] # for df in list_of_dfs: # difference_list=[] # difference_list = df['difference'].tolist() # i=0 # for skill in difference_list: # print "i :"+str(i) # print "skill :"+str(skill) # skill+=i # print "skill :"+str(i) +" updated: "+str(skill) # difference_list[i]=skill # i+=1 # df['difference']=pd.Series(difference_list).values # choreographer['job_compared_name']='choreographer' # dentists['job_compared_name']='dentists' # nurses['job_compared_name']='nurses' # chiropractors['job_compared_name']='chiropractors' # farmers['job_compared_name']='farmers' # construction_managers['job_compared_name']='construction_managers' # firefighters['job_compared_name']='firefighters' # 
geographers['job_compared_name']='geographers' # embalmers['job_compared_name']='embalmers' # pipelayers['job_compared_name']='pipelayers' # choreographer.to_csv('choreographer.csv', index=False) # dentists.to_csv('dentists.csv', index=False) # nurses.to_csv('nurses.csv', index=False) # chiropractors.to_csv('chiropractors.csv', index=False) # farmers.to_csv('farmers.csv', index=False) # construction_managers.to_csv('construction_managers.csv', index=False) # firefighters.to_csv('firefighters.csv', index=False) # geographers.to_csv('geographers.csv', index=False) # embalmers.to_csv('embalmers.csv', index=False) # pipelayers.to_csv('pipelayers.csv', index=False)

ten_profession_skills = skills_raw[jobs_to_keep]
ten_profession_skills['skill'] = skills_list
ten_profession_skills.columns = jobs_to_keep_renamed

# Getting difference scores: |truck drivers - job| for every job column.
# The last entry of jobs_to_keep_renamed is 'skills', which is not a job,
# so it is skipped.
for job in jobs_to_keep_renamed[:-1]:
    ten_profession_skills[job + '_difference'] = abs(ten_profession_skills['Truck_drivers'] - ten_profession_skills[job])

Choreographers = ten_profession_skills[['Choreographers','Truck_drivers','skills','Choreographers_difference']]
Dentists = ten_profession_skills[['Dentists','Truck_drivers','skills','Dentists_difference']]
Nurses = ten_profession_skills[['Nurses','Truck_drivers','skills','Nurses_difference']]
Chiropractors = ten_profession_skills[['Chiropractors','Truck_drivers','skills','Chiropractors_difference']]
Farmers = ten_profession_skills[['Farmers','Truck_drivers','skills','Farmers_difference']]
Construction_Managers = ten_profession_skills[['Construction_Managers','Truck_drivers','skills','Construction_Managers_difference']]
Firefighters = 
ten_profession_skills[['Firefighters','Truck_drivers','skills','Firefighters_difference']] Geographers = ten_profession_skills[['Geographers','Truck_drivers','skills','Geographers_difference']] Truck_drivers = ten_profession_skills[['Truck_drivers','Truck_drivers','skills','Truck_drivers_difference']] Embalmers = ten_profession_skills[['Embalmers','Truck_drivers','skills','Embalmers_difference']] Piplayers = ten_profession_skills[['Piplayers','Truck_drivers','skills','Piplayers_difference']] Podiatrists = ten_profession_skills[['Podiatrists','Truck_drivers','skills','Podiatrists_difference']] Fabric_Patternmakers = ten_profession_skills[['Fabric_Patternmakers','Truck_drivers','skills','Fabric_Patternmakers_difference']] Clergy = ten_profession_skills[['Clergy','Truck_drivers','skills','Clergy_difference']] Makeup_Artists = ten_profession_skills[['Makeup_Artists','Truck_drivers','skills','Makeup_Artists_difference']] Family_Therapists = ten_profession_skills[['Family_Therapists','Truck_drivers','skills','Family_Therapists_difference']] CEOs = ten_profession_skills[['CEOs','Truck_drivers','skills','CEOs_difference']] Art_Directors = ten_profession_skills[['Art_Directors','Truck_drivers','skills','Art_Directors_difference']] Interrior_Designers = ten_profession_skills[['Interrior_Designers','Truck_drivers','skills','Interrior_Designers_difference']] Craft_Artists = ten_profession_skills[['Craft_Artists','Truck_drivers','skills','Craft_Artists_difference']] Event_Planners = ten_profession_skills[['Event_Planners','Truck_drivers','skills','Event_Planners_difference']] Veterinarians = ten_profession_skills[['Veterinarians','Truck_drivers','skills','Veterinarians_difference']] Writers = ten_profession_skills[['Writers','Truck_drivers','skills','Writers_difference']] Political_Scientists = ten_profession_skills[['Political_Scientists','Truck_drivers','skills','Political_Scientists_difference']] Ship_Engineers = 
ten_profession_skills[['Ship_Engineers','Truck_drivers','skills','Ship_Engineers_difference']] Paramedics = ten_profession_skills[['Paramedics','Truck_drivers','skills','Paramedics_difference']] Mathematicians = ten_profession_skills[['Mathematicians','Truck_drivers','skills','Mathematicians_difference']] Florists = ten_profession_skills[['Florists','Truck_drivers','skills','Florists_difference']] Travel_Guides = ten_profession_skills[['Travel_Guides','Truck_drivers','skills','Travel_Guides_difference']] News_Analysts = ten_profession_skills[['News_Analysts','Truck_drivers','skills','News_Analysts_difference']] Musicians = ten_profession_skills[['Musicians','Truck_drivers','skills','Musicians_difference']] Fitness_Trainers = ten_profession_skills[['Fitness_Trainers','Truck_drivers','skills','Fitness_Trainers_difference']] Graphic_Designers = ten_profession_skills[['Graphic_Designers','Truck_drivers','skills','Graphic_Designers_difference']] Childcare_Workers = ten_profession_skills[['Childcare_Workers','Truck_drivers','skills','Childcare_Workers_difference']] Police_Officers = ten_profession_skills[['Police_Officers','Truck_drivers','skills','Police_Officers_difference']] Hairdressers = ten_profession_skills[['Hairdressers','Truck_drivers','skills','Hairdressers_difference']] Journalists = ten_profession_skills[['Journalists','Truck_drivers','skills','Journalists_difference']] Air_Traffic_Controllers = ten_profession_skills[['Air_Traffic_Controllers','Truck_drivers','skills','Air_Traffic_Controllers_difference']] Dancers = ten_profession_skills[['Dancers','Truck_drivers','skills','Dancers_difference']] Optometrists = ten_profession_skills[['Optometrists','Truck_drivers','skills','Optometrists_difference']] Physician_Assistants = ten_profession_skills[['Physician_Assistants','Truck_drivers','skills','Physician_Assistants_difference']] Electricians = ten_profession_skills[['Electricians','Truck_drivers','skills','Electricians_difference']] Ambulance_Drivers = 
ten_profession_skills[['Ambulance_Drivers','Truck_drivers','skills','Ambulance_Drivers_difference']] Athletes = ten_profession_skills[['Athletes','Truck_drivers','skills','Athletes_difference']] Skincare_Specialists = ten_profession_skills[['Skincare_Specialists','Truck_drivers','skills','Skincare_Specialists_difference']] Private_Cooks = ten_profession_skills[['Private_Cooks','Truck_drivers','skills','Private_Cooks_difference']] Funeral_Attendants = ten_profession_skills[['Funeral_Attendants','Truck_drivers','skills','Funeral_Attendants_difference']] Actors = ten_profession_skills[['Actors','Truck_drivers','skills','Actors_difference']] Judges = ten_profession_skills[['Judges','Truck_drivers','skills','Judges_difference']] Economists = ten_profession_skills[['Economists','Truck_drivers','skills','Economists_difference']] historians = ten_profession_skills[['historians','Truck_drivers','skills','historians_difference']] Dental_Assistants = ten_profession_skills[['Dental_Assistants','Truck_drivers','skills','Dental_Assistants_difference']] Cobblers = ten_profession_skills[['Cobblers','Truck_drivers','skills','Cobblers_difference']] Massage_Therapists = ten_profession_skills[['Massage_Therapists','Truck_drivers','skills','Massage_Therapists_difference']] Millwrights = ten_profession_skills[['Millwrights','Truck_drivers','skills','Millwrights_difference']] Librarians = ten_profession_skills[['Librarians','Truck_drivers','skills','Librarians_difference']] Maids = ten_profession_skills[['Maids','Truck_drivers','skills','Maids_difference']] Bartenders = ten_profession_skills[['Bartenders','Truck_drivers','skills','Bartenders_difference']] Dishwashers = ten_profession_skills[['Dishwashers','Truck_drivers','skills','Dishwashers_difference']] Fast_Food_Cooks = ten_profession_skills[['Fast_Food_Cooks','Truck_drivers','skills','Fast_Food_Cooks_difference']] Barbers = ten_profession_skills[['Barbers','Truck_drivers','skills','Barbers_difference']] Real_Estate_Agents = 
ten_profession_skills[['Real_Estate_Agents','Truck_drivers','skills','Real_Estate_Agents_difference']] Proofreaders = ten_profession_skills[['Proofreaders','Truck_drivers','skills','Proofreaders_difference']] Choreographers.columns=['job_compared','job_selected','skills','difference'] Dentists.columns=['job_compared','job_selected','skills','difference'] Nurses.columns=['job_compared','job_selected','skills','difference'] Chiropractors.columns=['job_compared','job_selected','skills','difference'] Farmers.columns=['job_compared','job_selected','skills','difference'] Construction_Managers.columns=['job_compared','job_selected','skills','difference'] Firefighters.columns=['job_compared','job_selected','skills','difference'] Geographers.columns=['job_compared','job_selected','skills','difference'] Truck_drivers.columns=['job_compared','job_selected','skills','difference'] Embalmers.columns=['job_compared','job_selected','skills','difference'] Piplayers.columns=['job_compared','job_selected','skills','difference'] Podiatrists.columns=['job_compared','job_selected','skills','difference'] Fabric_Patternmakers.columns=['job_compared','job_selected','skills','difference'] Clergy.columns=['job_compared','job_selected','skills','difference'] Makeup_Artists.columns=['job_compared','job_selected','skills','difference'] Family_Therapists.columns=['job_compared','job_selected','skills','difference'] CEOs.columns=['job_compared','job_selected','skills','difference'] Art_Directors.columns=['job_compared','job_selected','skills','difference'] Interrior_Designers.columns=['job_compared','job_selected','skills','difference'] Craft_Artists.columns=['job_compared','job_selected','skills','difference'] Event_Planners.columns=['job_compared','job_selected','skills','difference'] Veterinarians.columns=['job_compared','job_selected','skills','difference'] Writers.columns=['job_compared','job_selected','skills','difference'] 
Political_Scientists.columns=['job_compared','job_selected','skills','difference'] Ship_Engineers.columns=['job_compared','job_selected','skills','difference'] Paramedics.columns=['job_compared','job_selected','skills','difference'] Mathematicians.columns=['job_compared','job_selected','skills','difference'] Florists.columns=['job_compared','job_selected','skills','difference'] Travel_Guides.columns=['job_compared','job_selected','skills','difference'] News_Analysts.columns=['job_compared','job_selected','skills','difference'] Musicians.columns=['job_compared','job_selected','skills','difference'] Fitness_Trainers.columns=['job_compared','job_selected','skills','difference'] Graphic_Designers.columns=['job_compared','job_selected','skills','difference'] Childcare_Workers.columns=['job_compared','job_selected','skills','difference'] Police_Officers.columns=['job_compared','job_selected','skills','difference'] Hairdressers.columns=['job_compared','job_selected','skills','difference'] Journalists.columns=['job_compared','job_selected','skills','difference'] Air_Traffic_Controllers.columns=['job_compared','job_selected','skills','difference'] Dancers.columns=['job_compared','job_selected','skills','difference'] Optometrists.columns=['job_compared','job_selected','skills','difference'] Physician_Assistants.columns=['job_compared','job_selected','skills','difference'] Electricians.columns=['job_compared','job_selected','skills','difference'] Ambulance_Drivers.columns=['job_compared','job_selected','skills','difference'] Athletes.columns=['job_compared','job_selected','skills','difference'] Skincare_Specialists.columns=['job_compared','job_selected','skills','difference'] Private_Cooks.columns=['job_compared','job_selected','skills','difference'] Funeral_Attendants.columns=['job_compared','job_selected','skills','difference'] Actors.columns=['job_compared','job_selected','skills','difference'] Judges.columns=['job_compared','job_selected','skills','difference'] 
Economists.columns=['job_compared','job_selected','skills','difference'] historians.columns=['job_compared','job_selected','skills','difference'] Dental_Assistants.columns=['job_compared','job_selected','skills','difference'] Cobblers.columns=['job_compared','job_selected','skills','difference'] Massage_Therapists.columns=['job_compared','job_selected','skills','difference'] Millwrights.columns=['job_compared','job_selected','skills','difference'] Librarians.columns=['job_compared','job_selected','skills','difference'] Maids.columns=['job_compared','job_selected','skills','difference'] Bartenders.columns=['job_compared','job_selected','skills','difference'] Dishwashers.columns=['job_compared','job_selected','skills','difference'] Fast_Food_Cooks.columns=['job_compared','job_selected','skills','difference'] Barbers.columns=['job_compared','job_selected','skills','difference'] Real_Estate_Agents.columns=['job_compared','job_selected','skills','difference'] Proofreaders.columns=['job_compared','job_selected','skills','difference'] list_of_dfs=[ Choreographers, Dentists, Nurses, Chiropractors, Farmers, Construction_Managers, Firefighters, Geographers, Truck_drivers, Embalmers, Piplayers, Podiatrists, Fabric_Patternmakers, Clergy, Makeup_Artists, Family_Therapists, CEOs, Art_Directors, Interrior_Designers, Craft_Artists, Event_Planners, Veterinarians, Writers, Political_Scientists, Ship_Engineers, Paramedics, Mathematicians, Florists, Travel_Guides, News_Analysts, Musicians, Fitness_Trainers, Graphic_Designers, Childcare_Workers, Police_Officers, Hairdressers, Journalists, Air_Traffic_Controllers, Dancers, Optometrists, Physician_Assistants, Electricians, Ambulance_Drivers, Athletes, Skincare_Specialists, Private_Cooks, Funeral_Attendants, Actors, Judges, Economists, historians, Dental_Assistants, Cobblers, Massage_Therapists, Millwrights, Librarians, Maids, Bartenders, Dishwashers, Fast_Food_Cooks, Barbers, Real_Estate_Agents, Proofreaders] for df in list_of_dfs: 
difference_list=[] difference_list = df['difference'].tolist() i=0 for skill in difference_list: print "i :"+str(i) print "skill :"+str(skill) skill+=i print "skill :"+str(i) +" updated: "+str(skill) difference_list[i]=skill i+=1 df['difference']=pd.Series(difference_list).values Choreographers['job_compared_name']='Choreographers' Dentists['job_compared_name']='Dentists' Nurses['job_compared_name']='Nurses' Chiropractors['job_compared_name']='Chiropractors' Farmers['job_compared_name']='Farmers' Construction_Managers['job_compared_name']='Construction_Managers' Firefighters['job_compared_name']='Firefighters' Geographers['job_compared_name']='Geographers' Truck_drivers['job_compared_name']='Truck_drivers' Embalmers['job_compared_name']='Embalmers' Piplayers['job_compared_name']='Piplayers' Podiatrists['job_compared_name']='Podiatrists' Fabric_Patternmakers['job_compared_name']='Fabric_Patternmakers' Clergy['job_compared_name']='Clergy' Makeup_Artists['job_compared_name']='Makeup_Artists' Family_Therapists['job_compared_name']='Family_Therapists' CEOs['job_compared_name']='CEOs' Art_Directors['job_compared_name']='Art_Directors' Interrior_Designers['job_compared_name']='Interrior_Designers' Craft_Artists['job_compared_name']='Craft_Artists' Event_Planners['job_compared_name']='Event_Planners' Veterinarians['job_compared_name']='Veterinarians' Writers['job_compared_name']='Writers' Political_Scientists['job_compared_name']='Political_Scientists' Ship_Engineers['job_compared_name']='Ship_Engineers' Paramedics['job_compared_name']='Paramedics' Mathematicians['job_compared_name']='Mathematicians' Florists['job_compared_name']='Florists' Travel_Guides['job_compared_name']='Travel_Guides' News_Analysts['job_compared_name']='News_Analysts' Musicians['job_compared_name']='Musicians' Fitness_Trainers['job_compared_name']='Fitness_Trainers' Graphic_Designers['job_compared_name']='Graphic_Designers' Childcare_Workers['job_compared_name']='Childcare_Workers' 
Police_Officers['job_compared_name']='Police_Officers' Hairdressers['job_compared_name']='Hairdressers' Journalists['job_compared_name']='Journalists' Air_Traffic_Controllers['job_compared_name']='Air_Traffic_Controllers' Dancers['job_compared_name']='Dancers' Optometrists['job_compared_name']='Optometrists' Physician_Assistants['job_compared_name']='Physician_Assistants' Electricians['job_compared_name']='Electricians' Ambulance_Drivers['job_compared_name']='Ambulance_Drivers' Athletes['job_compared_name']='Athletes' Skincare_Specialists['job_compared_name']='Skincare_Specialists' Private_Cooks['job_compared_name']='Private_Cooks' Funeral_Attendants['job_compared_name']='Funeral_Attendants' Actors['job_compared_name']='Actors' Judges['job_compared_name']='Judges' Economists['job_compared_name']='Economists' historians['job_compared_name']='historians' Dental_Assistants['job_compared_name']='Dental_Assistants' Cobblers['job_compared_name']='Cobblers' Massage_Therapists['job_compared_name']='Massage_Therapists' Millwrights['job_compared_name']='Millwrights' Librarians['job_compared_name']='Librarians' Maids['job_compared_name']='Maids' Bartenders['job_compared_name']='Bartenders' Dishwashers['job_compared_name']='Dishwashers' Fast_Food_Cooks['job_compared_name']='Fast_Food_Cooks' Barbers['job_compared_name']='Barbers' Real_Estate_Agents['job_compared_name']='Real_Estate_Agents' Proofreaders['job_compared_name']='Proofreaders' Choreographers.to_csv('Choreographers.csv', index=False) Dentists.to_csv('Dentists.csv', index=False) Nurses.to_csv('Nurses.csv', index=False) Chiropractors.to_csv('Chiropractors.csv', index=False) Farmers.to_csv('Farmers.csv', index=False) Construction_Managers.to_csv('Construction_Managers.csv', index=False) Firefighters.to_csv('Firefighters.csv', index=False) Geographers.to_csv('Geographers.csv', index=False) Truck_drivers.to_csv('Truck_drivers.csv', index=False) Embalmers.to_csv('Embalmers.csv', index=False) 
Piplayers.to_csv('Piplayers.csv', index=False) Podiatrists.to_csv('Podiatrists.csv', index=False) Fabric_Patternmakers.to_csv('Fabric_Patternmakers.csv', index=False) Clergy.to_csv('Clergy.csv', index=False) Makeup_Artists.to_csv('Makeup_Artists.csv', index=False) Family_Therapists.to_csv('Family_Therapists.csv', index=False) CEOs.to_csv('CEOs.csv', index=False) Art_Directors.to_csv('Art_Directors.csv', index=False) Interrior_Designers.to_csv('Interrior_Designers.csv', index=False) Craft_Artists.to_csv('Craft_Artists.csv', index=False) Event_Planners.to_csv('Event_Planners.csv', index=False) Veterinarians.to_csv('Veterinarians.csv', index=False) Writers.to_csv('Writers.csv', index=False) Political_Scientists.to_csv('Political_Scientists.csv', index=False) Ship_Engineers.to_csv('Ship_Engineers.csv', index=False) Paramedics.to_csv('Paramedics.csv', index=False) Mathematicians.to_csv('Mathematicians.csv', index=False) Florists.to_csv('Florists.csv', index=False) Travel_Guides.to_csv('Travel_Guides.csv', index=False) News_Analysts.to_csv('News_Analysts.csv', index=False) Musicians.to_csv('Musicians.csv', index=False) Fitness_Trainers.to_csv('Fitness_Trainers.csv', index=False) Graphic_Designers.to_csv('Graphic_Designers.csv', index=False) Childcare_Workers.to_csv('Childcare_Workers.csv', index=False) Police_Officers.to_csv('Police_Officers.csv', index=False) Hairdressers.to_csv('Hairdressers.csv', index=False) Journalists.to_csv('Journalists.csv', index=False) Air_Traffic_Controllers.to_csv('Air_Traffic_Controllers.csv', index=False) Dancers.to_csv('Dancers.csv', index=False) Optometrists.to_csv('Optometrists.csv', index=False) Physician_Assistants.to_csv('Physician_Assistants.csv', index=False) Electricians.to_csv('Electricians.csv', index=False) Ambulance_Drivers.to_csv('Ambulance_Drivers.csv', index=False) Athletes.to_csv('Athletes.csv', index=False) Skincare_Specialists.to_csv('Skincare_Specialists.csv', index=False) Private_Cooks.to_csv('Private_Cooks.csv', 
index=False) Funeral_Attendants.to_csv('Funeral_Attendants.csv', index=False) Actors.to_csv('Actors.csv', index=False) Judges.to_csv('Judges.csv', index=False) Economists.to_csv('Economists.csv', index=False) historians.to_csv('historians.csv', index=False) Dental_Assistants.to_csv('Dental_Assistants.csv', index=False) Cobblers.to_csv('Cobblers.csv', index=False) Massage_Therapists.to_csv('Massage_Therapists.csv', index=False) Millwrights.to_csv('Millwrights.csv', index=False) Librarians.to_csv('Librarians.csv', index=False) Maids.to_csv('Maids.csv', index=False) Bartenders.to_csv('Bartenders.csv', index=False) Dishwashers.to_csv('Dishwashers.csv', index=False) Fast_Food_Cooks.to_csv('Fast_Food_Cooks.csv', index=False) Barbers.to_csv('Barbers.csv', index=False) Real_Estate_Agents.to_csv('Real_Estate_Agents.csv', index=False) Proofreaders.to_csv('Proofreaders.csv', index=False) ``` # OLD ``` auto_df = auto_df[['Job_Compared','Heavy and Tractor-Trailer Truck Drivers','Athletes and Sports Competitors','Compensation and Benefits Managers','Construction Laborers']] relative_emp_df = relative_emp_df[['Job_Compared','Heavy and Tractor-Trailer Truck Drivers','Athletes and Sports Competitors','Compensation and Benefits Managers','Construction Laborers']] similar_df = similar_df[['Job_Compared','Heavy and Tractor-Trailer Truck Drivers','Athletes and Sports Competitors','Compensation and Benefits Managers','Construction Laborers']] wagechange_df = wagechange_df[['Job_Compared','Heavy and Tractor-Trailer Truck Drivers','Athletes and Sports Competitors','Compensation and Benefits Managers','Construction Laborers']] auto_df=auto_df.set_index('Job_Compared') relative_emp_df=relative_emp_df.set_index('Job_Compared') similar_df=similar_df.set_index('Job_Compared') wagechange_df = wagechange_df.set_index('Job_Compared') auto_df = auto_df.add_prefix('auto_') relative_emp_df = relative_emp_df.add_prefix('empl_') similar_df = similar_df.add_prefix('similar_') wagechange_df = 
wagechange_df.add_prefix('wage_') auto_df= auto_df= auto_df.rename(columns=lambda x: x.replace(' ','_')) relative_emp_df= relative_emp_df.rename(columns=lambda x: x.replace(' ','_')) similar_df= similar_df.rename(columns=lambda x: x.replace(' ','_')) wagechange_df= wagechange_df.rename(columns=lambda x: x.replace(' ','_')) auto_df= auto_df= auto_df.rename(columns=lambda x: x.replace(' ','_')) relative_emp_df= relative_emp_df.rename(columns=lambda x: x.replace(' ','_')) similar_df= similar_df.rename(columns=lambda x: x.replace(' ','_')) wagechange_df= wagechange_df.rename(columns=lambda x: x.replace(' ','_')) truckers_wage = wagechange_df[wagechange_df.columns[0]].reset_index() athletes_wage = wagechange_df[wagechange_df.columns[1]].reset_index() comp_managers_wage = wagechange_df[wagechange_df.columns[2]].reset_index() laborers_wage = wagechange_df[wagechange_df.columns[3]].reset_index() truckers_similarity = similar_df[similar_df.columns[0]].reset_index() athletes_similarity = similar_df[similar_df.columns[1]].reset_index() comp_managers_similarity = similar_df[similar_df.columns[2]].reset_index() laborers_similarity = similar_df[similar_df.columns[3]].reset_index() truckers_relative_emp = relative_emp_df[relative_emp_df.columns[0]].reset_index() athletes_relative_emp = relative_emp_df[relative_emp_df.columns[1]].reset_index() comp_managers_relative_emp = relative_emp_df[relative_emp_df.columns[2]].reset_index() laborers_relative_emp = relative_emp_df[relative_emp_df.columns[3]].reset_index() truckers_automatability = auto_df[auto_df.columns[0]].reset_index() athletes_automatability = auto_df[auto_df.columns[1]].reset_index() comp_managers_automatability = auto_df[auto_df.columns[2]].reset_index() laborers_automatability = auto_df[auto_df.columns[3]].reset_index() dfs_trucker = [truckers_wage, truckers_similarity, truckers_relative_emp, truckers_automatability] dfs_athlete = [athletes_wage,athletes_similarity,athletes_relative_emp,athletes_automatability] 
dfs_comp_managers = [comp_managers_wage,comp_managers_similarity,comp_managers_relative_emp,comp_managers_automatability] dfs_laborers = [laborers_wage,laborers_similarity,laborers_relative_emp,laborers_automatability] df_trucker_final =reduce(lambda left,right: pd.merge(left,right,on='Job_Compared'), dfs_trucker) df_athlete_final =reduce(lambda left,right: pd.merge(left,right,on='Job_Compared'), dfs_athlete) df_comp_managers_final=reduce(lambda left,right: pd.merge(left,right,on='Job_Compared'), dfs_comp_managers) df_laborers_final =reduce(lambda left,right: pd.merge(left,right,on='Job_Compared'), dfs_laborers) df_trucker_final = df_trucker_final.reset_index() df_athlete_final = df_athlete_final.reset_index("Job_Compared") df_comp_managers_final = df_comp_managers_final.reset_index("Job_Compared") df_laborers_final = df_laborers_final.reset_index("Job_Compared") raw_wage_auto_jobs = pd.read_csv("jobs.csv") raw_wage_auto_jobs= raw_wage_auto_jobs # .set_index("Occupation") laborers = pd.merge(raw_wage_auto_jobs,df_laborers_final, right_on='Job_Compared',left_on='Occupation') truckers = pd.merge(raw_wage_auto_jobs,df_trucker_final, right_on='Job_Compared',left_on='Occupation') athletes = pd.merge(raw_wage_auto_jobs,df_athlete_final, right_on='Job_Compared',left_on='Occupation') comp_managers = pd.merge(raw_wage_auto_jobs,df_comp_managers_final, right_on='Job_Compared',left_on='Occupation') laborers.to_csv("laborers.csv") truckers.to_csv("truckers.csv") athletes.to_csv("athletes.csv") comp_managers.to_csv("comp_managers.csv") # auto_df.to_csv("auto.csv") # relative_emp.to_csv("relative_emp.csv") # similar_df.to_csv("simlar.csv") # wagechange.to_csv("wage.csv") ```
```
%reload_ext autoreload
%autoreload 2
from fastai.basics import *
from pathlib import Path
import pandas as pd
```

# Rossmann

## Data preparation / Feature engineering

Set `PATH` to the path `~/data/rossmann/`. Create a list of table names, with one entry for each CSV that you'll be loading:

- train
- store
- store_states
- state_names
- googletrend
- weather
- test

For each csv, read it in using pandas (with `low_memory=False`), and assign it to a variable corresponding with its name. Print out the lengths of the `train` and `test` tables.

```
PATH = Path("~/.fastai/data/rossmann/")
csvs = ['train', 'store', 'store_states', 'state_names', 'googletrend', 'weather', 'test']
tables = [pd.read_csv(f"{PATH}/{csv}.csv", low_memory=False) for csv in csvs]
train, store, store_states, state_names, googletrend, weather, test = tables
print(len(train), len(test))
```

Turn the `StateHoliday` column into a boolean indicating whether or not the day was a holiday. Do the same for `SchoolHoliday`.

```
train['StateHoliday'] = train['StateHoliday'] != '0'
test['StateHoliday'] = test['StateHoliday'] != '0'
train['SchoolHoliday'] = train['SchoolHoliday'] != 0
test['SchoolHoliday'] = test['SchoolHoliday'] != 0
train['StateHoliday'].value_counts()
train['SchoolHoliday'].value_counts()
```

Print out the head of the dataframe.

```
train.head()
```

Create a function `join_df` that joins two dataframes together.
It should take the following arguments:

- left (the df on the left)
- right (the df on the right)
- left_on (the left table join key)
- right_on (the right table join key, defaulting to None; if nothing passed, default to the same as the left join key)
- suffix (default to '_y'; a suffix to give to duplicate columns)

```
def join_df(left, right, left_on, right_on=None, suffix='_y'):
    return left.merge(right,
                      left_on=left_on,
                      right_on=right_on if right_on is not None else left_on,
                      how='left',
                      suffixes=('', suffix))

df1 = pd.DataFrame({'a': [1, 2], 'b': [100.0, 1000.0]})
df2 = pd.DataFrame({'a': [1, 1, 1, 3, 3, 3], 'b': [10.0, 10.0, 10.0, 20.0, 20.0, 20.0]})
join_df(df1, df2, 'a')
```

Join the weather and state names tables together, and reassign them to the variable `weather`.

```
weather.head()
state_names
weather = join_df(weather, state_names, left_on='file', right_on='StateName')
```

Show the first few rows of the weather df.

```
weather.head()
```

In the `googletrend` table, set the `Date` variable to the first date in the hyphen-separated date string in the `week` field. Set the `State` field to the third element in the underscore-separated string from the `file` field. In all rows where `State == NI`, make it instead equal `HB,NI`, which is how it's referred to throughout the rest of the data.

```
googletrend['Date'] = googletrend['week'].str.split(' - ', expand=True)[0]
googletrend['State'] = googletrend['file'].str.split('_', expand=True)[2]
googletrend.loc[googletrend['State'] == 'NI', 'State'] = 'HB,NI'
googletrend.head()
```

Write a function `add_datepart` that takes a date field and adds a bunch of numeric columns containing information about the date.
It should take the following arguments:

- df (the dataframe you'll be modifying)
- fldname (the date field you'll be splitting into new columns)
- drop (whether or not to drop the old date field; defaults to True)
- time (whether or not to add time fields -- Hour, Minute, Second; defaults to False)

It should append

```
['Year', 'Month', 'Week', 'Day', 'Dayofweek', 'Dayofyear', 'Is_month_end',
 'Is_month_start', 'Is_quarter_end', 'Is_quarter_start', 'Is_year_end',
 'Is_year_start']
```

Remember the edge cases around the dtype of the field. Specifically, if it's of type DatetimeTZDtype, cast it instead to np.datetime64. If it's not a subtype of datetime64 already, infer it (see `pd.to_datetime`).

```
type(np.datetime64)
pd.Series(np.array([1, 2, 3], dtype='int64')).dtype
ex_date = pd.to_datetime('2019-01-01')
ex_date

def add_datepart(df, fldname, drop=True, time=False):
    fld = df[fldname]
    if isinstance(fld.dtype, pd.DatetimeTZDtype):
        fld = fld.astype(np.datetime64)
    if not np.issubdtype(fld.dtype, np.datetime64):
        fld = pd.to_datetime(fld)
    fields = ['Year', 'Month', 'Week', 'Day', 'Dayofweek', 'Dayofyear',
              'Is_month_end', 'Is_month_start', 'Is_quarter_end',
              'Is_quarter_start', 'Is_year_end', 'Is_year_start']
    for i in fields:
        df[i] = getattr(fld.dt, i.lower())
    if drop:
        df.drop(fldname, axis=1, inplace=True)
    return df

googletrend = add_datepart(googletrend, 'Date', False)
```

Use `add_datepart` to add date fields to the weather, googletrend, train and test tables.

```
weather.head()
weather = add_datepart(weather, 'Date', False)
train.head()
train = add_datepart(train, 'Date', False)
test = add_datepart(test, 'Date', False)
```

Print out the head of the weather table.

```
weather.head()
```

In the `googletrend` table, the `file` column has an entry `Rossmann_DE` that represents the whole of Germany; we'll want to break that out into its own separate table, since we'll need to join it on `Date` alone rather than both `Date` and `Store`.
```
googletrend
rossmann_full = googletrend[googletrend['file'] == 'Rossmann_DE']
```

Now let's do a bunch of joins to build our entire dataset! Remember after each one to check if the right-side data is null. This is the benefit of left-joining; it's easy to debug by checking for null rows.

Let's start by joining `store` and `store_states` in a new table called `store`.

```
store.head()
store_states.head()
store = join_df(store, store_states, 'Store')
store
store['State'].isna().sum()
```

Next let's join `train` and `store` in a table called `joined`. Do the same for `test` and `store` in a table called `joined_test`.

```
joined = join_df(train, store, 'Store')
joined.head()
joined_test = join_df(test, store, 'Store')
joined_test.head()
```

Next join `joined` and `googletrend` on the columns `["State", "Year", "Week"]`. Again, do the same for the test data.

```
joined = join_df(joined, googletrend[["State", "Year", "Week", "trend"]], ["State", "Year", "Week"])
joined_test = join_df(joined_test, googletrend[["State", "Year", "Week", "trend"]], ["State", "Year", "Week"])
```

Join `joined` and the Germany-wide trend table (`rossmann_full` above) on `Date` with suffix `_DE`. Same for test.

```
rossmann_full.head()
joined = join_df(joined, rossmann_full[['Date', 'trend']], 'Date', suffix='_DE')
joined_test = join_df(joined_test, rossmann_full[['Date', 'trend']], 'Date', suffix='_DE')
```

Join `joined` and `weather` on `["State", "Date"]`. Same for test.

```
weather.head()
joined = join_df(joined, weather, ["State", "Date"])
joined_test = join_df(joined_test, weather, ["State", "Date"])
joined.columns
joined['Min_DewpointC'].head()
```

Now for every column in both `joined` and `joined_test`, check to see if it has the `_y` suffix, and if so, drop it. Warning: a data frame can have duplicate column names, but calling `df.drop` will drop _all_ instances with the passed-in column name! This could lead to calling drop a second time on a column that no longer exists!
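As a toy sketch (not from the notebook) of that warning: `drop` removes every column carrying the passed-in label in a single call, so a second `drop` with the same name would raise a `KeyError`.

```python
import pandas as pd

# Two columns deliberately share the name 'b_y'
df = pd.DataFrame([[1, 2, 3]], columns=['a', 'b_y', 'b_y'])
df = df.drop('b_y', axis=1)  # drops BOTH 'b_y' columns at once
print(list(df.columns))
```

This is why the loop below guards with `if j in i.columns` before calling `drop`.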
```
for i in (joined, joined_test):
    for j in i.columns:
        if j in i.columns and '_y' in j:
            i.drop(j, axis=1, inplace=True)
joined.columns
joined_test.columns
len(joined.columns)
len(joined_test.columns)
```

For the columns `CompetitionOpenSinceYear`, `CompetitionOpenSinceMonth`, `Promo2SinceYear`, and `Promo2SinceWeek`, replace `NA` values with the following values (respectively):
- 1900
- 1
- 1900
- 1

```
for i in (joined, joined_test):
    i.loc[i['CompetitionOpenSinceYear'].isna(), 'CompetitionOpenSinceYear'] = 1900
    i.loc[i['CompetitionOpenSinceMonth'].isna(), 'CompetitionOpenSinceMonth'] = 1
    i.loc[i['Promo2SinceYear'].isna(), 'Promo2SinceYear'] = 1900
    i.loc[i['Promo2SinceWeek'].isna(), 'Promo2SinceWeek'] = 1
joined.head()['Promo2SinceYear']
```

Create a new field `CompetitionOpenSince` that converts `CompetitionOpenSinceYear` and `CompetitionOpenSinceMonth` and maps them to a specific date. Then create a new field `CompetitionDaysOpen` that subtracts `CompetitionOpenSince` from `Date`.

```
pd.to_datetime({'year': [2019], 'month': [1], 'day': [1]})
joined['Date'].dtype
import datetime
for i in (joined, joined_test):
    i['CompetitionOpenSince'] = pd.to_datetime({'year': i['CompetitionOpenSinceYear'],
                                                'month': i['CompetitionOpenSinceMonth'],
                                                'day': 15})
    i['CompetitionDaysOpen'] = (pd.to_datetime(i['Date']) - i['CompetitionOpenSince']) / datetime.timedelta(days=1)
joined[['CompetitionOpenSince', 'CompetitionDaysOpen']].head()
```

For `CompetitionDaysOpen`, replace negative values with 0, and clamp `CompetitionOpenSinceYear` values earlier than 1990 to 1990.
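A quick standalone check of the `pd.to_datetime` dict form used for `CompetitionOpenSince` above: the scalar `day` broadcasts against the column-length year/month Series (toy values):

```python
import pandas as pd

parts = pd.DataFrame({'year': [2004, 2011], 'month': [9, 12]})
# The scalar `day` broadcasts against the Series, just like `day: 15` above
dates = pd.to_datetime({'year': parts['year'], 'month': parts['month'], 'day': 15})
print(dates.dt.day.tolist())  # [15, 15]
```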
```
joined['CompetitionOpenSinceYear'].dtype, joined['CompetitionDaysOpen'].dtype
for i in (joined, joined_test):
    # Use .loc to avoid chained-assignment pitfalls
    i.loc[i['CompetitionDaysOpen'] < 0, 'CompetitionDaysOpen'] = 0
    i.loc[i['CompetitionOpenSinceYear'] < 1990, 'CompetitionOpenSinceYear'] = 1990
joined[['CompetitionDaysOpen', 'CompetitionOpenSinceYear']]
```

We add a `CompetitionMonthsOpen` field, capping it at 2 years (24 months) to limit the number of unique categories.

```
for i in (joined, joined_test):
    i['CompetitionMonthsOpen'] = i['CompetitionDaysOpen'] // 30
    i['CompetitionMonthsOpen'] = i['CompetitionMonthsOpen'].clip(upper=24)
joined['CompetitionMonthsOpen'].value_counts()
```

Same process for Promo dates. You may need to install the `isoweek` package first.

```
# If needed, uncomment:
# ! pip install isoweek
```

Use the `isoweek` package to turn `Promo2SinceYear`/`Promo2SinceWeek` into a specific date -- the Monday of the week specified in those columns. Compute a field `Promo2Days` that subtracts the `Promo2Since` date from the current date.

```
joined['Date'].dtype
from isoweek import Week
for i in (joined, joined_test):
    i['Promo2Since'] = i.apply(lambda x: Week(int(x['Promo2SinceYear']), int(x['Promo2SinceWeek'])).monday(), axis=1)
    i['Promo2Since'] = pd.to_datetime(i['Promo2Since'])
    i['Promo2Days'] = (pd.to_datetime(i['Date']) - i['Promo2Since']) / datetime.timedelta(days=1)
    i['Promo2Weeks'] = i['Promo2Days'] // 7
joined['Promo2Since'].head(25)
joined.columns
```

Perform the following modifications on both the train and test set:
- For cases where `Promo2Days` is negative or `Promo2SinceYear` is before 1990, set `Promo2Days` to 0
- Create `Promo2Weeks` as `Promo2Days // 7`
- For cases where `Promo2Weeks` is negative, set `Promo2Weeks` to 0
- For cases where `Promo2Weeks` is above 25, set `Promo2Weeks` to 25

Print the number of unique values for `Promo2Weeks` in training and test df's.
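Side note: on Python 3.8+, the Monday-of-an-ISO-week lookup can also be done with the standard library instead of `isoweek`:

```python
import datetime

# Monday of ISO week 23 of 2015, equivalent to isoweek's Week(2015, 23).monday()
monday = datetime.date.fromisocalendar(2015, 23, 1)
print(monday)  # 2015-06-01
```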
```
for i in (joined, joined_test):
    i.loc[i['Promo2Days'] < 0, 'Promo2Days'] = 0
    i.loc[i['Promo2SinceYear'] < 1990, 'Promo2Days'] = 0
    i.loc[i['Promo2Weeks'] < 0, 'Promo2Weeks'] = 0
    i.loc[i['Promo2Weeks'] > 25, 'Promo2Weeks'] = 25
len(joined['Promo2Weeks'].unique())
```

Pickle `joined` to `PATH/'joined'` and `joined_test` to `PATH/'joined_test'`.

```
joined.to_pickle(PATH/'joined')
joined_test.to_pickle(PATH/'joined_test')
```

## Durations

Write a function `get_elapsed` that takes arguments `fld` (a boolean field) and `pre` (a prefix to be prepended to `fld` in a new column representing the days until/since the event in `fld`), and adds a column `pre+fld` representing the date-diff (in days) between the current date and the last date `fld` was true.

```
joined = pd.read_pickle(PATH/'joined')
joined_test = pd.read_pickle(PATH/'joined_test')
joined['Date'] = pd.to_datetime(joined['Date'])
joined_test['Date'] = pd.to_datetime(joined_test['Date'])

def get_elapsed(df, fld, pre):
    day1 = np.timedelta64(1, 'D')
    store = 0
    # Initialize to NaT (np.datetime64(), not np.timedelta64()) so rows
    # before the first event get NaN elapsed days
    last_date = np.datetime64()
    vals = []
    for s, v, d in zip(df['Store'].values, df[fld].values, df['Date'].values):
        if s != store:
            store = s
            last_date = np.datetime64()
        if v:
            last_date = d
        vals.append((d - last_date).astype('timedelta64[D]') / day1)
    df[pre+fld] = vals
    return df

joined.sort_values(['Store', 'Date'], ascending=[True, True], inplace=True)
get_elapsed(joined, 'SchoolHoliday', 'After')
```

We'll be applying this to a subset of columns: Create a variable `columns` containing the strings:
- Date
- Store
- Promo
- StateHoliday
- SchoolHoliday

These will be the fields on which we'll be computing elapsed days since/until.

```
columns = ['Date', 'Store', 'Promo', 'StateHoliday', 'SchoolHoliday']
```

Create one big dataframe with both the train and test sets called `df`.

```
df = pd.concat([joined[columns], joined_test[columns]], axis=0)
df.shape
```

Sort by `Store` and `Date` ascending, and use `get_elapsed` to get the days since the last `SchoolHoliday` on each day.
Reorder by `Store` ascending and `Date` descending to get the days _until_ the next `SchoolHoliday`.

```
df.sort_values(['Store', 'Date'], inplace=True)
get_elapsed(df, 'SchoolHoliday', 'After')
df.sort_values(['Store', 'Date'], ascending=[True, False], inplace=True)
get_elapsed(df, 'SchoolHoliday', 'Before').head()
```

Do the same for `StateHoliday`.

```
df.sort_values(['Store', 'Date'], inplace=True)
get_elapsed(df, 'StateHoliday', 'After')
df.sort_values(['Store', 'Date'], ascending=[True, False], inplace=True)
get_elapsed(df, 'StateHoliday', 'Before').head()
```

Do the same for `Promo`.

```
df.sort_values(['Store', 'Date'], inplace=True)
get_elapsed(df, 'Promo', 'After')
df.sort_values(['Store', 'Date'], ascending=[True, False], inplace=True)
get_elapsed(df, 'Promo', 'Before').head()
```

Set the index on `df` to `Date`.

```
df.set_index('Date', inplace=True)
```

Reassign `columns` to `['SchoolHoliday', 'StateHoliday', 'Promo']`.

```
columns = ['SchoolHoliday', 'StateHoliday', 'Promo']
```

For columns `Before/AfterSchoolHoliday`, `Before/AfterStateHoliday`, and `Before/AfterPromo`, fill null values with 0.

```
df.columns
for column in columns:
    for prefix in ['Before', 'After']:
        df[prefix+column].fillna(0, inplace=True)
```

Create a dataframe `bwd` that gets 7-day backward-rolling sums of the columns in `columns`, grouped by `Store`. Create a dataframe `fwd` that gets 7-day forward-rolling sums of the same columns. Show the head of each. Drop the `Store` column from `fwd` and `bwd`, and reset the index on each (the chained versions below already do both).

```
bwd = (df[columns+['Store']]
       .sort_index(ascending=True)
       .groupby('Store')
       .rolling(window=7, min_periods=1)
       .sum()).drop('Store', axis=1).reset_index()
bwd.head(25)
fwd = (df[columns+['Store']]
       .sort_index(ascending=False)
       .groupby('Store')
       .rolling(window=7, min_periods=1)
       .sum()).drop('Store', axis=1).reset_index()
fwd.head(25)
```

Reset the index on `df`.

```
df.reset_index(inplace=True)
```
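The per-row loop in `get_elapsed` can also be expressed vectorized with pandas, which is handy for checking its output on a toy frame (a sketch; the data here is illustrative):

```python
import pandas as pd

df_toy = pd.DataFrame({
    'Store': [1, 1, 1, 2, 2],
    'Date': pd.to_datetime(['2015-01-01', '2015-01-02', '2015-01-05',
                            '2015-01-01', '2015-01-03']),
    'SchoolHoliday': [True, False, False, False, True],
})

# Within each store, carry forward the date of the last True flag,
# then take the day difference to the current row's date
last_true = df_toy['Date'].where(df_toy['SchoolHoliday'])
last_true = last_true.groupby(df_toy['Store']).ffill()
df_toy['AfterSchoolHoliday'] = (df_toy['Date'] - last_true).dt.days
print(df_toy['AfterSchoolHoliday'].tolist())  # [0.0, 1.0, 4.0, nan, 0.0]
```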
Merge `df` with `bwd` and `fwd`.

```
df = join_df(df, bwd, ['Date', 'Store'], ['Date', 'Store'], '_bwd')
df = join_df(df, fwd, ['Date', 'Store'], ['Date', 'Store'], '_fwd')
```

Drop `columns` from df inplace -- we don't need them anymore, since we've captured their information in columns with types more suitable for machine learning.

```
df.columns
columns
df.drop(columns, axis=1, inplace=True)
```

Print out the head of `df`.

```
df.head()
```

Pickle `df` to `PATH/'attempt_df'`.

```
df.to_pickle(PATH/'attempt_df')
```

Cast the `Date` column to a datetime column.

```
df['Date'] = pd.to_datetime(df['Date'])
```

Join `joined` with `df` on `['Date', 'Store']`.

```
train = join_df(joined, df, left_on=['Date', 'Store'])
test = join_df(joined_test, df, left_on=['Date', 'Store'])
train.head(25)
```

This is not necessarily the best idea, but the authors removed all examples for which sales were equal to zero. If you're trying to stay true to what the authors did, do that now.

```
train_clean = train.loc[train['Sales'] != 0, :]
```

Pickle the cleaned train set to `train_clean`, the full train set to `train_full`, and the test set to `test`.

```
train_clean.head()
train_clean.to_pickle(PATH/'train_clean')
train.to_pickle(PATH/'train_full')
test.to_pickle(PATH/'test')
```
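For reference, the 7-day grouped rolling sums built for `bwd` and `fwd` above behave like this on a toy frame (illustrative data):

```python
import pandas as pd

toy = pd.DataFrame({
    'Store': [1, 1, 1, 1],
    'Promo': [1, 0, 1, 1],
}, index=pd.date_range('2015-01-01', periods=4, name='Date'))

# Backward-looking 7-day rolling sum per store, as in `bwd`
roll = (toy.groupby('Store')['Promo']
           .rolling(window=7, min_periods=1)
           .sum()
           .reset_index())
print(roll['Promo'].tolist())  # [1.0, 1.0, 2.0, 3.0]
```

Sorting the index descending before rolling, as done for `fwd`, turns the same computation into a forward-looking window.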
```
from fastai.vision.all import *
from moving_mnist.models.conv_rnn import *
from moving_mnist.data import *
if torch.cuda.is_available():
    torch.cuda.set_device(1)
    print(torch.cuda.get_device_name())
```

# Train Example:

We will predict:
- `n_in`: 5 images
- `n_out`: 5 images
- `n_obj`: up to 3 objects

```
DATA_PATH = Path.cwd()/'data'
ds = MovingMNIST(DATA_PATH, n_in=5, n_out=5, n_obj=[1,2,3])
train_tl = TfmdLists(range(7500), ImageTupleTransform(ds))
valid_tl = TfmdLists(range(100), ImageTupleTransform(ds))
dls = DataLoaders.from_dsets(train_tl, valid_tl, bs=32,
                             after_batch=[Normalize.from_stats(imagenet_stats[0][0], imagenet_stats[1][0])]).cuda()
loss_func = StackLoss(MSELossFlat())
```

Left: Input, Right: Target

```
dls.show_batch()
b = dls.one_batch()
explode_types(b)
```

`StackUnstack` takes care of stacking the list of images into one fat tensor and unstacking it at the end; we will need to modify our loss function to take a list of tensors as input and target.

## Simple model

```
model = StackUnstack(SimpleModel())
```

As the `ImageSeq` is a `tuple` of images, we will need to stack them to compute the loss.

```
learn = Learner(dls, model, loss_func=loss_func, cbs=[]).to_fp16()
```

I hit a strange bug: if I use `nn.LeakyReLU` after calling `learn.lr_find()`, the model does not train (the loss gets stuck).

```
x,y = dls.one_batch()
learn.lr_find()
learn.fit_one_cycle(10, 1e-4)
p,t = learn.get_preds()
```

As you can see, the result is a list of 5 tensors with 100 samples each.
```
len(p), p[0].shape

def show_res(t, idx):
    im_seq = ImageSeq.create([t[i][idx] for i in range(5)])
    im_seq.show(figsize=(8,4));

k = random.randint(0,100)
show_res(t,k)
show_res(p,k)
```

## A bigger Decoder

We will pass:
- `blur`: to use blur on the upsampling path (done with a pooling layer followed by replication)
- `attn`: to include a self-attention layer in the decoder

```
model2 = StackUnstack(SimpleModel(szs=[16,64,96], act=partial(nn.LeakyReLU, 0.2, inplace=True), blur=True, attn=True))
```

We have to reduce the batch size, as the self-attention layer is memory-hungry.

```
dls = DataLoaders.from_dsets(train_tl, valid_tl, bs=8,
                             after_batch=[Normalize.from_stats(imagenet_stats[0][0], imagenet_stats[1][0])]).cuda()
learn2 = Learner(dls, model2, loss_func=loss_func, cbs=[]).to_fp16()
learn2.lr_find()
learn2.fit_one_cycle(10, 1e-4)
p,t = learn2.get_preds()
```

As you can see, the result is a list of 5 tensors with 100 samples each.

```
len(p), p[0].shape

def show_res(t, idx):
    im_seq = ImageSeq.create([t[i][idx] for i in range(5)])
    im_seq.show(figsize=(8,4));

k = random.randint(0,100)
show_res(t,k)
show_res(p,k)
```
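Conceptually, `StackLoss(MSELossFlat())` averages a per-frame loss over the list of predicted frames. A plain-numpy sketch of that idea (`stack_mse` is a hypothetical name, not part of the library):

```python
import numpy as np

def stack_mse(preds, targets):
    # Average per-frame MSE over a sequence of frames -- conceptually what
    # StackLoss(MSELossFlat()) computes on the lists of image tensors
    return float(np.mean([np.mean((p - t) ** 2) for p, t in zip(preds, targets)]))

preds = [np.zeros((2, 4)), np.ones((2, 4))]
targets = [np.ones((2, 4)), np.ones((2, 4))]
print(stack_mse(preds, targets))  # 0.5
```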
<a href="https://colab.research.google.com/github/yunjung-lee/class_python_data/blob/master/skin_cancer.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

########## skin cancer in kaggle dataset #### uses a neural network, dropout, and model saving

```
!git clone https://github.com/yunjung-lee/python_mini_project_skin_cancer.git
```

`!git clone`: clones the repository at https://github.com/yunjung-lee/python_mini_project_skin_cancer.git

```
import numpy as np
import tensorflow as tf
from sklearn.model_selection import train_test_split
import sklearn.preprocessing
```

Open the data (file name: hmnist_28_28_RGB.csv)

```
data = np.loadtxt('drive/Colab Notebooks/dataset/hmnist_28_28_RGB.csv', dtype = np.float32, delimiter = ",", skiprows = 1, encoding = "utf-8")
# print(data)
xdata = data[:,:-1]
ydata = data[:,[-1]]
```

One-hot encoding

```
one_hot = sklearn.preprocessing.LabelBinarizer()
ydata = one_hot.fit_transform(ydata)
```

train_test_split

```
X_train, X_test, y_train, y_test = train_test_split(xdata, ydata, test_size=0.33, random_state=42)
```

Define the neural network with dropout. Instead of applying softmax manually at the output, `softmax_cross_entropy_with_logits` is used on the raw logits.

```
x = tf.placeholder(tf.float32, [None,28*28*3])
y = tf.placeholder(tf.float32, [None,7])
keep_prob = tf.placeholder(tf.float32)

w1 = tf.Variable(tf.random_normal([28*28*3,256], stddev=0.01))
L1 = tf.nn.relu(tf.matmul(x,w1))
L1 = tf.nn.dropout(L1, keep_prob)

w2 = tf.Variable(tf.random_normal([256,256], stddev=0.01))
L2 = tf.nn.relu(tf.matmul(L1,w2))
L2 = tf.nn.dropout(L2, keep_prob)

w3 = tf.Variable(tf.random_normal([256,7], stddev=0.01))
model = tf.matmul(L2,w3)

cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=model, labels=y))
optimizer = tf.train.AdamOptimizer(0.001).minimize(cost)
```

Train the neural network, with batch size 100 and a dropout keep probability of 0.8.

```
init = tf.global_variables_initializer()
saver = tf.train.Saver()
sess = tf.Session()
sess.run(init)

batch_size = 100
total_batch = int(len(X_train) / batch_size)  # number of examples, not features

for epoch in range(20):
    total_cost = 0
    for i in range(total_batch):
        # Feed one minibatch at a time instead of the full training set
        batch_x = X_train[i*batch_size:(i+1)*batch_size]
        batch_y = y_train[i*batch_size:(i+1)*batch_size]
        _, cv = sess.run([optimizer, cost], feed_dict={x: batch_x, y: batch_y, keep_prob: 0.8})
        total_cost += cv
    # print("epoch :", " %d" % (epoch+1), "avg cost : ", "{:.3f}".format(total_cost/total_batch))

is_correct = tf.equal(tf.argmax(model,1), tf.argmax(y,1))
accuracy = tf.reduce_mean(tf.cast(is_correct, dtype=tf.float32))
print("Accuracy : ", sess.run(accuracy, feed_dict={x: X_test, y: y_test, keep_prob: 1.0}))
saver.save(sess, 'drive/Colab Notebooks/datasetcnn_session')
sess.close()
```

Results (per-epoch average cost and test accuracy):

```
epoch :  1 avg cost : 2.989 Accuracy : 0.66596067
epoch :  2 avg cost : 1.084 Accuracy : 0.66596067
epoch :  3 avg cost : 1.006 Accuracy : 0.66596067
epoch :  4 avg cost : 0.992 Accuracy : 0.66596067
epoch :  5 avg cost : 0.959 Accuracy : 0.66596067
epoch :  6 avg cost : 0.940 Accuracy : 0.66596067
epoch :  7 avg cost : 0.927 Accuracy : 0.66596067
epoch :  8 avg cost : 0.921 Accuracy : 0.66596067
epoch :  9 avg cost : 1.014 Accuracy : 0.66596067
epoch : 10 avg cost : 0.955 Accuracy : 0.66596067
epoch : 11 avg cost : 0.938 Accuracy : 0.66596067
epoch : 12 avg cost : 0.935 Accuracy : 0.66596067
epoch : 13 avg cost : 0.921 Accuracy : 0.66596067
epoch : 14 avg cost : 0.921 Accuracy : 0.66596067
epoch : 15 avg cost : 0.920 Accuracy : 0.66596067
epoch : 16 avg cost : 0.952 Accuracy : 0.66596067
epoch : 17 avg cost : 0.916 Accuracy : 0.66656584
epoch : 18 avg cost : 0.911 Accuracy : 0.66807866
epoch : 19 avg cost : 0.917 Accuracy : 0.66717094
epoch : 20 avg cost : 0.903 Accuracy : 0.66747355
```
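For intuition, `tf.nn.softmax_cross_entropy_with_logits` fuses the softmax and the cross-entropy into one numerically stable step. A numpy sketch of the row-wise computation (illustrative values):

```python
import numpy as np

def softmax_xent(logits, labels):
    # Numerically stable softmax cross-entropy, one value per example row,
    # mirroring what tf.nn.softmax_cross_entropy_with_logits computes
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -(labels * log_probs).sum(axis=1)

logits = np.array([[2.0, 1.0, 0.1]])
labels = np.array([[1.0, 0.0, 0.0]])
loss = softmax_xent(logits, labels)[0]
print(round(float(loss), 3))  # 0.417
```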
<a href="https://colab.research.google.com/github/will-cotton4/DS-Unit-2-Sprint-3-Classification-Validation/blob/master/module4-rf-gb/LS_DS_234_Random_Forests_Gradient_Boosting.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> _Lambda School Data Science — Classification & Validation_ # Random Forests & Gradient Boosting #### Gradient Boosting and Random Forest are often the best choice for “Spreadsheet Machine Learning.” - Meaning, [Tree Ensembles often have the best predictive accuracy](https://arxiv.org/abs/1708.05070) for supervised learning with structured, tabular data. - Because trees can fit non-linear, non-monotonic relationships, and interactions between features. - A single decision tree, grown to unlimited depth, will overfit. We solve this problem by ensembling trees, with bagging or boosting. - One-hot encoding isn’t the only way, and may not be the best way, of categorical encoding for tree ensembles. ### Links #### Decision Trees - A Visual Introduction to Machine Learning, [Part 1: A Decision Tree](http://www.r2d3.us/visual-intro-to-machine-learning-part-1/), and [Part 2: Bias and Variance](http://www.r2d3.us/visual-intro-to-machine-learning-part-2/) - [Decision Trees: Advantages & Disadvantages](https://christophm.github.io/interpretable-ml-book/tree.html#advantages-2) - [How decision trees work](https://brohrer.github.io/how_decision_trees_work.html) - [How a Russian mathematician constructed a decision tree - by hand - to solve a medical problem](http://fastml.com/how-a-russian-mathematician-constructed-a-decision-tree-by-hand-to-solve-a-medical-problem/) - [Let’s Write a Decision Tree Classifier from Scratch](https://www.youtube.com/watch?v=LDRbO9a6XPU) #### Random Forests - [Random Forests for Complete Beginners: The definitive guide to Random Forests and Decision Trees](https://victorzhou.com/blog/intro-to-random-forests/) - [Coloring with Random 
Forests](http://structuringtheunstructured.blogspot.com/2017/11/coloring-with-random-forests.html) - [Beware Default Random Forest Importances](https://explained.ai/rf-importance/index.html) #### Gradient Boosting - [A Gentle Introduction to the Gradient Boosting Algorithm for Machine Learning](https://machinelearningmastery.com/gentle-introduction-gradient-boosting-algorithm-machine-learning/) - [A Kaggle Master Explains Gradient Boosting](http://blog.kaggle.com/2017/01/23/a-kaggle-master-explains-gradient-boosting/) - [How to explain gradient boosting](https://explained.ai/gradient-boosting/index.html) #### Python libraries for Gradient Boosting - [scikit-learn Gradient Tree Boosting](https://scikit-learn.org/stable/modules/ensemble.html#gradient-boosting) — slower than other libraries, but [the next version may be better](https://twitter.com/amuellerml/status/1123613520426426368) - Anaconda: already installed - Google Colab: already installed - [xgboost](https://xgboost.readthedocs.io/en/latest/) — can accept missing values and enforce [monotonic constraints](https://xiaoxiaowang87.github.io/monotonicity_constraint/) - Anaconda, Mac/Linux: `conda install -c conda-forge xgboost` - Windows: `pip install xgboost` - Google Colab: already installed - [LightGBM](https://lightgbm.readthedocs.io/en/latest/) — can accept missing values and enforce [monotonic constraints](https://blog.datadive.net/monotonicity-constraints-in-machine-learning/) - Anaconda: `conda install -c conda-forge lightgbm` - Google Colab: already installed - [CatBoost](https://catboost.ai/) — can accept missing values and use [categorical features](https://catboost.ai/docs/concepts/algorithm-main-stages_cat-to-numberic.html) without preprocessing - Anaconda: `conda install -c conda-forge catboost` - Google Colab: `pip install catboost` #### Categorical encoding for trees - [Are categorical variables getting lost in your random 
forests?](https://roamanalytics.com/2016/10/28/are-categorical-variables-getting-lost-in-your-random-forests/) - [Beyond One-Hot: An Exploration of Categorical Variables](http://www.willmcginnis.com/2015/11/29/beyond-one-hot-an-exploration-of-categorical-variables/) - [Categorical Features and Encoding in Decision Trees](https://medium.com/data-design/visiting-categorical-features-and-encoding-in-decision-trees-53400fa65931) - [Coursera — How to Win a Data Science Competition: Learn from Top Kagglers — Concept of mean encoding](https://www.coursera.org/lecture/competitive-data-science/concept-of-mean-encoding-b5Gxv) - [Mean (likelihood) encodings: a comprehensive study](https://www.kaggle.com/vprokopev/mean-likelihood-encodings-a-comprehensive-study) - [The Mechanics of Machine Learning, Chapter 6: Categorically Speaking](https://mlbook.explained.ai/catvars.html) ``` !pip install catboost !pip install category_encoders !pip install ipywidgets import warnings warnings.simplefilter(action='ignore', category=FutureWarning) import xgboost xgboost.__version__ ``` # Golf Putts (regression, 1 feature, non-linear) https://statmodeling.stat.columbia.edu/2008/12/04/the_golf_puttin/ ``` %matplotlib inline from ipywidgets import interact import category_encoders import matplotlib.pyplot as plt import numpy as np import pandas as pd putts = pd.DataFrame( columns=['distance', 'tries', 'successes'], data = [[2, 1443, 1346], [3, 694, 577], [4, 455, 337], [5, 353, 208], [6, 272, 149], [7, 256, 136], [8, 240, 111], [9, 217, 69], [10, 200, 67], [11, 237, 75], [12, 202, 52], [13, 192, 46], [14, 174, 54], [15, 167, 28], [16, 201, 27], [17, 195, 31], [18, 191, 33], [19, 147, 20], [20, 152, 24]] ) putts['rate of success'] = putts['successes'] / putts['tries'] putts_X = putts[['distance']] putts_y = putts['rate of success'] ``` #### Docs - [Scikit-Learn User Guide: Random Forests](https://scikit-learn.org/stable/modules/ensemble.html#random-forests) (`from sklearn.ensemble import 
RandomForestRegressor, RandomForestClassifier`) - [XGBoost Python API Reference: Scikit-Learn API](https://xgboost.readthedocs.io/en/latest/python/python_api.html#module-xgboost.sklearn) (`from xgboost import XGBRegressor, XGBClassifier`) ``` from sklearn.ensemble import RandomForestRegressor from sklearn.tree import DecisionTreeRegressor from xgboost import XGBRegressor def putt_trees(max_depth=1, n_estimators=1): models = [DecisionTreeRegressor(max_depth=max_depth), RandomForestRegressor(max_depth=max_depth, n_estimators=n_estimators), XGBRegressor(max_depth=max_depth, n_estimators=n_estimators)] for model in models: name = model.__class__.__name__ model.fit(putts_X, putts_y) ax = putts.plot('distance', 'rate of success', kind='scatter', title=name) ax.step(putts_X, model.predict(putts_X), where='mid') plt.show() interact(putt_trees, max_depth=(1,6,1), n_estimators=(10,40,10)); ``` ### Bagging https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.sample.html ``` # Do-it-yourself Bagging Ensemble of Decision Trees (like a Random Forest) def diy_bagging(max_depth=1, n_estimators=1): y_preds = [] for i in range(n_estimators): title = f'Tree {i+1}' bootstrap_sample = putts.sample(n=len(putts), replace=True).sort_values(by='distance') bootstrap_X = bootstrap_sample[['distance']] bootstrap_y = bootstrap_sample['rate of success'] tree = DecisionTreeRegressor(max_depth=max_depth) tree.fit(bootstrap_X, bootstrap_y) y_pred = tree.predict(bootstrap_X) y_preds.append(y_pred) ax = bootstrap_sample.plot('distance', 'rate of success', kind='scatter', title=title) ax.step(bootstrap_X, y_pred, where='mid') plt.show() ensembled = np.vstack(y_preds).mean(axis=0) title = f'Ensemble of {n_estimators} trees, with max_depth={max_depth}' ax = putts.plot('distance', 'rate of success', kind='scatter', title=title) ax.step(putts_X, ensembled, where='mid') plt.show() interact(diy_bagging, max_depth=(1,6,1), n_estimators=(2,5,1)); ``` ### What's "random" about random 
forests? 1. Each tree trains on a random bootstrap sample of the data. (In scikit-learn, for `RandomForestRegressor` and `RandomForestClassifier`, the `bootstrap` parameter's default is `True`.) This type of ensembling is called Bagging. 2. Each split considers a random subset of the features. (In scikit-learn, when the `max_features` parameter is not `None`.) For extra randomness, you can try ["extremely randomized trees"](https://scikit-learn.org/stable/modules/ensemble.html#extremely-randomized-trees)! >In extremely randomized trees (see [ExtraTreesClassifier](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.ExtraTreesClassifier.html) and [ExtraTreesRegressor](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.ExtraTreesRegressor.html) classes), randomness goes one step further in the way splits are computed. As in random forests, a random subset of candidate features is used, but instead of looking for the most discriminative thresholds, thresholds are drawn at random for each candidate feature and the best of these randomly-generated thresholds is picked as the splitting rule. This usually allows to reduce the variance of the model a bit more, at the expense of a slightly greater increase in bias ### Boosting Boosting (used by Gradient Boosting) is different than Bagging (used by Random Forests). [_An Introduction to Statistical Learning_](http://www-bcf.usc.edu/~gareth/ISL/ISLR%20Seventh%20Printing.pdf) Chapter 8.2.3, Boosting: >Recall that bagging involves creating multiple copies of the original training data set using the bootstrap, fitting a separate decision tree to each copy, and then combining all of the trees in order to create a single predictive model. 
>**Boosting works in a similar way, except that the trees are grown _sequentially_: each tree is grown using information from previously grown trees.** >Unlike fitting a single large decision tree to the data, which amounts to _fitting the data hard_ and potentially overfitting, the boosting approach instead _learns slowly._ Given the current model, we fit a decision tree to the residuals from the model. >We then add this new decision tree into the fitted function in order to update the residuals. Each of these trees can be rather small, with just a few terminal nodes. **By fitting small trees to the residuals, we slowly improve fˆ in areas where it does not perform well.** >Note that in boosting, unlike in bagging, the construction of each tree depends strongly on the trees that have already been grown. # Wave (regression, 1 feature, non-monotonic, train/test split) http://scikit-learn.org/stable/auto_examples/tree/plot_tree_regression.html ``` from sklearn.model_selection import train_test_split def make_data(): import numpy as np rng = np.random.RandomState(1) X = np.sort(5 * rng.rand(80, 1), axis=0) y = np.sin(X).ravel() y[::5] += 2 * (0.5 - rng.rand(16)) return X, y wave_X, wave_y = make_data() wave_X_train, wave_X_test, wave_y_train, wave_y_test = train_test_split( wave_X, wave_y, test_size=0.25, random_state=42) def wave_trees(max_depth=1, n_estimators=10): models = [DecisionTreeRegressor(max_depth=max_depth), RandomForestRegressor(max_depth=max_depth, n_estimators=n_estimators), XGBRegressor(max_depth=max_depth, n_estimators=n_estimators)] for model in models: name = model.__class__.__name__ model.fit(wave_X_train, wave_y_train) print(f'{name} Train R^2 score:', model.score(wave_X_train, wave_y_train)) print(f'{name} Test R^2 score:', model.score(wave_X_test, wave_y_test)) plt.scatter(wave_X_train, wave_y_train) plt.scatter(wave_X_test, wave_y_test) plt.step(wave_X, model.predict(wave_X), where='mid') plt.show() interact(wave_trees, max_depth=(1,8,1), 
n_estimators=(10,40,10)); ``` # Titanic (classification, 2 features, interactions, non-linear / non-monotonic) #### viz2D helper function ``` def viz2D(fitted_model, X, feature1, feature2, num=100, title=''): """ Visualize model predictions as a 2D heatmap For regression or binary classification models, fitted on 2 features Parameters ---------- fitted_model : scikit-learn model, already fitted df : pandas dataframe, which was used to fit model feature1 : string, name of feature 1 feature2 : string, name of feature 2 target : string, name of target num : int, number of grid points for each feature Returns ------- predictions: numpy array, predictions/predicted probabilities at each grid point References ---------- https://scikit-learn.org/stable/auto_examples/classification/plot_classification_probability.html https://jakevdp.github.io/PythonDataScienceHandbook/04.04-density-and-contour-plots.html """ x1 = np.linspace(X[feature1].min(), X[feature1].max(), num) x2 = np.linspace(X[feature2].min(), X[feature2].max(), num) X1, X2 = np.meshgrid(x1, x2) X = np.c_[X1.flatten(), X2.flatten()] if hasattr(fitted_model, 'predict_proba'): predicted = fitted_model.predict_proba(X)[:,1] else: predicted = fitted_model.predict(X) plt.imshow(predicted.reshape(num, num), cmap='viridis') plt.title(title) plt.xlabel(feature1) plt.ylabel(feature2) plt.xticks([]) plt.yticks([]) plt.colorbar() plt.show() return predicted ``` ### Read data, encode categorical feature, impute missing values ``` import category_encoders as ce import seaborn as sns from sklearn.impute import SimpleImputer from sklearn.pipeline import make_pipeline titanic = sns.load_dataset('titanic') features = ['age', 'sex'] target = 'survived' preprocessor = make_pipeline(ce.OrdinalEncoder(), SimpleImputer()) X = preprocessor.fit_transform(titanic[features]) X = pd.DataFrame(X, columns=features) y = titanic[target] X.head() ``` ### Logistic Regression ``` from sklearn.linear_model import LogisticRegression lr = 
LogisticRegression(solver='lbfgs') lr.fit(X, y) viz2D(lr, X, feature1='age', feature2='sex', title='Logistic Regression'); ``` ### Decision Tree, Random Forest, Gradient Boosting #### Docs - [Scikit-Learn User Guide: Random Forests](https://scikit-learn.org/stable/modules/ensemble.html#random-forests) (`from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier`) - [XGBoost Python API Reference: Scikit-Learn API](https://xgboost.readthedocs.io/en/latest/python/python_api.html#module-xgboost.sklearn) (`from xgboost import XGBRegressor, XGBClassifier`) ``` from sklearn.ensemble import RandomForestClassifier from sklearn.tree import DecisionTreeClassifier from xgboost import XGBClassifier def titanic_trees(max_depth=1, n_estimators=1): models = [DecisionTreeClassifier(max_depth=max_depth), RandomForestClassifier(max_depth=max_depth, n_estimators=n_estimators), XGBClassifier(max_depth=max_depth, n_estimators=n_estimators)] for model in models: name = model.__class__.__name__ model.fit(X.values, y.values) viz2D(model, X, feature1='age', feature2='sex', title=name) interact(titanic_trees, max_depth=(1,6,1), n_estimators=(10,40,10)); ``` ### Bagging ``` # Do-it-yourself Bagging Ensemble of Decision Trees (like a Random Forest) def titanic_bagging(max_depth=1, n_estimators=1): predicteds = [] for i in range(n_estimators): title = f'Tree {i+1}' bootstrap_sample = titanic.sample(n=len(titanic), replace=True) preprocessor = make_pipeline(ce.OrdinalEncoder(), SimpleImputer()) bootstrap_X = preprocessor.fit_transform(bootstrap_sample[['age', 'sex']]) bootstrap_y = bootstrap_sample['survived'] tree = DecisionTreeClassifier(max_depth=max_depth) tree.fit(bootstrap_X, bootstrap_y) predicted = viz2D(tree, X, feature1='age', feature2='sex', title=title) predicteds.append(predicted) ensembled = np.vstack(predicteds).mean(axis=0) title = f'Ensemble of {n_estimators} trees, with max_depth={max_depth}' plt.imshow(ensembled.reshape(100, 100), cmap='viridis') 
plt.title(title) plt.xlabel('age') plt.ylabel('sex') plt.xticks([]) plt.yticks([]) plt.colorbar() plt.show() interact(titanic_bagging, max_depth=(1,6,1), n_estimators=(2,5,1)); ``` ### Select more features, compare models ``` from sklearn.preprocessing import MinMaxScaler titanic['deck'] = titanic['deck'].astype(str) features = ['age', 'sex', 'pclass', 'sibsp', 'parch', 'fare', 'deck', 'embark_town'] target = 'survived' preprocessor = make_pipeline(ce.OrdinalEncoder(), SimpleImputer(), MinMaxScaler()) titanic_X = preprocessor.fit_transform(titanic[features]) titanic_X = pd.DataFrame(titanic_X, columns=features) titanic_y = titanic[target] titanic_X.head() from sklearn.model_selection import cross_val_score models = [LogisticRegression(solver='lbfgs', max_iter=1000), DecisionTreeClassifier(max_depth=3), DecisionTreeClassifier(max_depth=None), RandomForestClassifier(max_depth=3, n_estimators=100, n_jobs=-1, random_state=42), RandomForestClassifier(max_depth=None, n_estimators=100, n_jobs=-1, random_state=42), XGBClassifier(max_depth=3, n_estimators=100, n_jobs=-1, random_state=42)] for model in models: print(model, '\n') score = cross_val_score(model, titanic_X, titanic_y, scoring='accuracy', cv=5).mean() print('Cross-Validation Accuracy:', score, '\n', '\n') ``` ### Feature importances ``` for model in models: name = model.__class__.__name__ model.fit(titanic_X, titanic_y) if name == 'LogisticRegression': coefficients = pd.Series(model.coef_[0], titanic_X.columns) coefficients.sort_values().plot.barh(color='grey', title=name) plt.show() else: importances = pd.Series(model.feature_importances_, titanic_X.columns) title = f'{name}, max_depth={model.max_depth}' importances.sort_values().plot.barh(color='grey', title=title) plt.show() ``` # ASSIGNMENT **Train Random Forest and Gradient Boosting models**, on the Bank Marketing dataset. (Or another dataset of your choice, not used during this lesson.) You may use any Python libraries for Gradient Boosting. 
Then, you have many options!

#### Keep improving your model

- **Try new categorical encodings.**
- Explore and visualize your data.
- Wrangle [bad data](https://github.com/Quartz/bad-data-guide), outliers, and missing values.
- Try engineering more features. You can transform, bin, and combine features.
- Try selecting fewer features.

```
!wget https://archive.ics.uci.edu/ml/machine-learning-databases/00222/bank-additional.zip
!unzip bank-additional.zip

# The core of this code comes from the lecture notebook earlier this week, but
# I've modified it to include:
# - A different CE
# - Newly engineered features

# Imports
%matplotlib inline
import warnings
import category_encoders as ce
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.exceptions import DataConversionWarning
from sklearn.preprocessing import StandardScaler
from xgboost import XGBClassifier
warnings.filterwarnings(action='ignore', category=DataConversionWarning)

# Load data
bank = pd.read_csv('bank-additional/bank-additional-full.csv', sep=';')

# Assign to X, y
X = bank.drop(columns='y')
y = bank['y'] == 'yes'

# Drop leaky & random features
X = X.drop(columns='duration')

# Split Train, Test
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y)

# Make pipeline
pipeline = make_pipeline(
    ce.OrdinalEncoder(),
    StandardScaler(),
    RandomForestClassifier(max_depth=4, n_estimators=100, n_jobs=-1, random_state=42)
)

# Cross-validate with training data
scores = cross_val_score(pipeline, X_train, y_train, scoring='roc_auc',
                         cv=10, n_jobs=-1, verbose=10)
X.describe()
scores.mean()

# Now we'll try it again with gradient boosting:
pipeline = make_pipeline(
    ce.OrdinalEncoder(),
    StandardScaler(),
    XGBClassifier(max_depth=3, n_estimators=100, n_jobs=-1, random_state=42)
)

# Cross-validate with
training data scores = cross_val_score(pipeline, X_train, y_train, scoring='roc_auc', cv=10, n_jobs=-1, verbose=10) scores.mean() ``` #### Follow the links — learn by reading & doing - Links at the top of this notebook - Links in previous notebooks - Extra notebook for today, about **"monotonic constraints"** and "early stopping" with xgboost
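The extra notebook mentioned above covers early stopping with xgboost. As a standalone taste of the same idea, scikit-learn's `GradientBoostingClassifier` supports it directly via `n_iter_no_change`: ask for many trees, and stop adding them once a held-out validation score stops improving. This is a sketch on synthetic data (not the Bank Marketing set), so it runs on its own:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in data, for illustration only
X, y = make_classification(n_samples=1000, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Ask for up to 500 trees, but stop once the score on an internal 20%
# validation split fails to improve for 10 consecutive iterations.
gb = GradientBoostingClassifier(n_estimators=500, n_iter_no_change=10,
                                validation_fraction=0.2, random_state=42)
gb.fit(X_train, y_train)

print('trees actually grown:', gb.n_estimators_)
print('test accuracy:', round(gb.score(X_test, y_test), 3))
```

Usually far fewer than 500 trees get grown, which both regularizes the model and speeds up training. The xgboost equivalent is the `early_stopping_rounds` mechanism discussed in the extra notebook.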
# Basic Matplotlib cookbook By [Terence Parr](https://parrt.cs.usfca.edu). If you like visualization in machine learning, check out my stuff at [explained.ai](https://explained.ai). This notebook shows you how to generate basic versions of the common plots you'll need. ``` import numpy as np import pandas as pd import matplotlib.pyplot as plt %config InlineBackend.figure_format = 'retina' ``` # Get some sample data ``` df_cars = pd.read_csv("data/cars.csv") df_cars.head() # Get average miles per gallon for each car with the same number of cylinders avg_mpg = df_cars.groupby('CYL').mean()['MPG'] avg_mpg avg_wgt = df_cars.groupby('CYL').mean()['WGT'] # do the same for average weight # Get average miles per gallon for each car with the same weight avg_mpg_per_wgt = df_cars.groupby('WGT').mean()['MPG'] avg_mpg_per_wgt # Get the unique list of cylinders in numerical order cyl = sorted(df_cars['CYL'].unique()) cyl # Get a list of all mpg values for three specific cylinder sizes cyl4 = df_cars[df_cars['CYL']==4]['MPG'].values cyl6 = df_cars[df_cars['CYL']==6]['MPG'].values cyl8 = df_cars[df_cars['CYL']==8]['MPG'].values cyl4[0:20] ``` ## The most common plots This section shows how to draw very basic plots using the recommended template: ``` fig, ax = plt.subplots(figsize=(width,height)) ax.plottype(args) plt.show() ``` The default plot style is not particularly beautiful nor informative, but we have to learn the basics first. 
### Histogram of car weight visualized as a bar chart

```
fig, ax = plt.subplots(figsize=(3,2))  # make one subplot (ax) on the figure
ax.hist(df_cars['WGT'])
plt.show()
```

Changing the number of bins is sometimes a good idea; it's a matter of sending in a new parameter:

```
fig, ax = plt.subplots(figsize=(3,2))  # make one subplot (ax) on the figure
ax.hist(df_cars['WGT'], bins=20)
plt.show()
```

### Line plot of number of cylinders vs average miles per gallon

```
fig, ax = plt.subplots(figsize=(3,2))  # make one subplot (ax) on the figure
ax.plot(cyl, avg_mpg)
plt.show()
```

### Scatterplot of weight versus miles per gallon

```
fig, ax = plt.subplots(figsize=(3,2))  # make one subplot (ax) on the figure
ax.scatter(df_cars['WGT'], df_cars['MPG'])
plt.show()
```

Note that if you try to use `plot()` it gives you a screwed up plot; line drawing is not appropriate for data with multiple Y values per X value. Instead, you should aggregate the data first, as we do in the next section. Here is what the mistake looks like:

```
fig, ax = plt.subplots(figsize=(3,2))  # make one subplot (ax) on the figure
ax.plot(df_cars['WGT'], df_cars['MPG'])
ax.set_title("OOOPS!")
plt.show()
```

### Line plot of average miles per gallon grouped by weight

If we want to use a line plot, we should plot the weight versus average miles per gallon at that weight.

```
fig, ax = plt.subplots(figsize=(3,2))  # make one subplot (ax) on the figure
ax.plot(avg_mpg_per_wgt)
plt.show()
```

I'm using a trick here. Note that `avg_mpg_per_wgt` is a series, which has an index (WGT) and the value (MPG), so I can pass this as a single parameter to matplotlib. matplotlib is flexible enough to recognize this and pull out the X and Y coordinates automatically for us.
```
avg_mpg_per_wgt
```

### Bar chart of average miles per gallon grouped by number of cylinders

```
fig, ax = plt.subplots(figsize=(3,2))  # make one subplot (ax) on the figure
ax.bar(cyl, avg_mpg)
plt.show()
```

### Box plot of miles per gallon grouped by number of cylinders

A box plot needs a collection of values for each X coordinate, and we are passing in three lists.

```
fig, ax = plt.subplots(figsize=(3,2))
ax.boxplot([cyl4,cyl6,cyl8])
plt.show()
```

### Violin plot of miles per gallon grouped by number of cylinders

As with the box plot, we need a collection of values for each X coordinate. All we've done here is to change the function we're calling.

```
fig, ax = plt.subplots(figsize=(3,2))
ax.violinplot([cyl4,cyl6,cyl8])
plt.show()
```

## Creating a grid of plots

```
fig, axes = plt.subplots(nrows=3, ncols=2, figsize=(6,6))  # make a 3x2 grid of subplots on the figure
axes = axes.flatten()  # it comes out as a 2D matrix; convert to a vector
axes[0].hist(df_cars['WGT'])
axes[1].plot(cyl, avg_mpg)
axes[2].scatter(df_cars['WGT'], df_cars['MPG'])
axes[3].plot(avg_mpg_per_wgt)
axes[4].bar(cyl, avg_mpg)
axes[5].boxplot([cyl4,cyl6,cyl8])
plt.tight_layout()  # I add this anytime I have a grid as it "does the right thing"
plt.show()
```

## Adding a title and labels to axes

At a minimum, plots should always have labels on the axes and, regardless of the plot type, we can set the X and Y labels on the matplotlib canvas with two method calls. We can even set the overall title easily with another call.

```
fig, ax = plt.subplots(figsize=(3,2))  # make one subplot (ax) on the figure
ax.hist(df_cars['WGT'])
ax.set_xlabel("Weight (lbs)")
ax.set_ylabel("Count at that weight")
ax.set_title("Weight histogram")
plt.show()
```

## Dual Y axes for single X axis

When you want to plot two curves on the same graph that have the same X but different Y scales, it's a good idea to use dual Y axes.
All it takes is a call to `twinx()` on your main canvas (`ax` variable) to get another canvas to draw on: ``` fig, ax = plt.subplots(figsize=(3,2)) # make one subplot (ax) on the figure ax_wgt = ax.twinx() ax.plot(cyl, avg_mpg) ax_wgt.plot(cyl, avg_wgt) ax.set_ylabel("MPG") ax_wgt.set_ylabel("WGT") plt.show() ``` We should be using different colors for those curves, but we'll look at that in another notebook. Dual axes should be used infrequently but sometimes it's necessary for space reasons, so I'm showing it here. ## Displaying images Displaying an image using matplotlib is done using function `imshow()`. First, we load a picture of Terence enjoying his childhood using the PIL library: ``` from PIL import Image fig, ax = plt.subplots(1, 1, figsize=(3, 4)) mud = Image.open("images/mud.jpg") plt.imshow(mud) ax.axis('off') # don't show x, y axes plt.show() ``` ## Matrices as images When you start doing machine learning, particularly deep learning, one of the first examples is to classify handwritten digits. I have created a sample CSV of these digits we can easily load into a data frame. Each row is a flattened array of 28x28=784 values for a single handwritten digit image, where values are in 0..1: ``` df_digits = pd.read_csv('https://mlbook.explained.ai/data/mnist-10k-sample.csv.zip') true_digits = df_digits['digit'] df_images = df_digits.drop('digit', axis=1) # ignore the true digit number df_images.head(3) ``` Just as we did with a jpg image, we can treat a 2D matrix as an image and display it. 
Let's pull the first row, reshape to be a 28x28 matrix, and display using greyscale: ``` six_img_as_row = df_images.iloc[0].values # digit '3' is first row img28x28 = six_img_as_row.reshape(28,28) # unflatten as 2D array fig, ax = plt.subplots(1, 1, figsize=(2,2)) ax.imshow(img28x28, cmap='binary') ax.axis('off') # don't show x, y axes plt.show() fig, axes = plt.subplots(nrows=2, ncols=10, figsize=(8, 1.6)) for i, ax in enumerate(axes.flatten()): img_as_row = df_images.iloc[i].values img28x28 = img_as_row.reshape(28,28) ax.axis('off') # don't show x, y axes ax.imshow(img28x28, cmap='binary') plt.show() ``` ## Heat maps It's often difficult to look at a matrix of numbers and recognize patterns or see salient features. A good way to look for patterns is to visualize the matrix (or vector) as a heat map where each value gets a color on a spectrum. As data, let's ask pandas for the correlation between every pair of columns: ``` C = df_cars.corr() C ``` Then we can display the absolute value of those correlations as the spectrum of blues: ``` fig, ax = plt.subplots(1, 1, figsize=(4, 4)) C = np.abs(C) # Use vmin to set white (lowest color) to be the min value ax.imshow(C, cmap='Blues', vmin=np.min(C.values)) # Add correlation to each box for i in range(4): for j in range(4): if i!=j: ax.text(i, j, f"{C.iloc[i,j]:.2f}", horizontalalignment='center') ax.set_xticks(range(4)) ax.set_xticklabels(list(C.columns)) ax.set_yticks(range(4)) ax.set_yticklabels(list(C.columns)) plt.show() ``` ## Saving plots as images You can save plots in a variety of formats. `svg` and `pdf` are good ones because these files are actually a set of commands needed to redraw the image and so can be scaled very nicely. `png` and `gif` will be much smaller typically but have fixed resolution. 
Instead of calling `show()`, we use `savefig()` (but the image still appears in the notebook as well as being stored on disk in the current working directory):

```
fig, ax = plt.subplots(figsize=(3,2))  # make one subplot (ax) on the figure
ax.hist(df_cars['WGT'])
ax.set_xlabel("Weight (lbs)")
ax.set_ylabel("Count at that weight")
ax.set_title("Weight histogram")
plt.savefig("histo.pdf", bbox_inches='tight', pad_inches=0)
```

The `bbox_inches='tight', pad_inches=0` parameters are something I use all the time to make sure there is no padding around the image. When I incorporate an image into a paper or something, I can add my own padding; it just gives us more control. On your Mac, use the Finder to go to the directory holding this notebook and you should see `histo.pdf`.

## Exercise

1. Create your own notebook and retype all of these examples so that you start to memorize the details. Of course, once you have typed in the template a few times, you can cut-and-paste those parts:

```
fig, ax = plt.subplots(figsize=(2,1.5))
...
plt.show()
```

1. Add axis labels and a title to a few of the plots.
1. Make sure that you can save at least one of the figures in each of `pdf` and `png` formats.
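As a hint for the last exercise, saving the same figure in both formats is just two `savefig()` calls. A self-contained sketch (the Agg backend, the sample data, and the `demo.*` filenames are my choices here, not from the notebook):

```python
import matplotlib
matplotlib.use('Agg')  # render off-screen so this also runs without a display
import matplotlib.pyplot as plt
import os

fig, ax = plt.subplots(figsize=(3, 2))  # the usual template
ax.hist([1, 2, 2, 3, 3, 3, 4])          # made-up sample data
ax.set_xlabel("value")
ax.set_ylabel("count")
ax.set_title("demo")

# Same figure, two formats: vector (scales cleanly) and raster (fixed dpi)
fig.savefig("demo.pdf", bbox_inches='tight', pad_inches=0)
fig.savefig("demo.png", bbox_inches='tight', pad_inches=0, dpi=150)

print(os.path.exists("demo.pdf"), os.path.exists("demo.png"))
```

The format is inferred from the filename extension, so no other argument needs to change between the two calls.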
# Parameterization for sediment released by sea-ice ``` import numpy as np import matplotlib.pyplot as plt import matplotlib from mpl_toolkits.basemap import Basemap, cm import netCDF4 as nc import datetime as dt import pickle import scipy.ndimage as ndimage import xarray as xr %matplotlib inline ``` ##### Parameters ``` # Domain dimensions: imin, imax = 1479, 2179 jmin, jmax = 159, 799 # Home-made colormap: N = 256 vals_cont = np.ones((N, 4)) vals_cont[:, 0] = np.linspace(117/N, 1, N) vals_cont[:, 1] = np.linspace(82/N, 1, N) vals_cont[:, 2] = np.linspace(60/N, 1, N) sed_cmap = matplotlib.colors.ListedColormap(vals_cont).reversed() ``` ##### Load files ``` # ANHA12 grid mesh = nc.Dataset('/ocean/brogalla/GEOTRACES/data/ANHA12/ANHA12_mesh1.nc') mesh_lon = np.array(mesh.variables['nav_lon']) mesh_lat = np.array(mesh.variables['nav_lat']) tmask = np.array(mesh.variables['tmask']) land_mask = np.ma.masked_where((tmask[0,:,:,:] > 0.1), tmask[0,:,:,:]) ``` ##### Functions: ``` def load_tracks(filename): nemo_file = nc.Dataset(filename) traj = np.array(nemo_file.variables['trajectory']) # dimensions: number of particles, tracks time = np.array(nemo_file.variables['time']) # units: seconds lat = np.array(nemo_file.variables['lat']) # degrees North lon = np.array(nemo_file.variables['lon']) # degrees East return traj, time, lon, lat def check_laptev(CB_traj, CB_lon, CB_lat, CB_time): # does the parcel spend time in the laptev sea in the fall? 
# Define boundary latitudes and longitudes for the Laptev Sea region trajS_bdy1 = 68; trajN_bdy1 = 74; trajE_bdy1 = -170; trajW_bdy1 = -210; trajS_bdy2 = 70; trajN_bdy2 = 75; trajE_bdy2 = -185; trajW_bdy2 = -230; Laptev_particle = False # At each time step: for timestep in range(0,len(CB_traj)): if ((CB_lon[timestep] < trajE_bdy1) & (CB_lon[timestep] > trajW_bdy1) \ & (CB_lat[timestep] < trajN_bdy1) & (CB_lat[timestep] > trajS_bdy1)) or \ ((CB_lon[timestep] < trajE_bdy2) & (CB_lon[timestep] > trajW_bdy2) \ & (CB_lat[timestep] < trajN_bdy2) & (CB_lat[timestep] > trajS_bdy2)): start_time = dt.datetime(2015,12,31) - dt.timedelta(seconds=CB_time[0]) current_time = start_time - dt.timedelta(seconds=CB_time[timestep]) # And is the parcel on the shelf in the fall? if current_time.month in [9,10,11,12]: Laptev_particle = True break return Laptev_particle def parcel_origin(CB_lon, CB_lat, CB_time, CB_traj): dim_parc = int((CB_lon.shape[0]/12)/np.ceil(CB_lon.shape[1]/(4*365))) # bottom converts 6 hour to days dim_time = int(12*((CB_lon.shape[0]/dim_parc)/12)) particles_origin = np.zeros((dim_parc,dim_time)) # --- Russian shelf in fall = 1 # --- else = 0 for release_time in range(0,dim_time): for location in range(0,dim_parc): ind = location + release_time*dim_parc lon_loc = CB_lon[ind,:] lat_loc = CB_lat[ind,:] time_loc = CB_time[ind,:] traj_loc = CB_traj[ind,:] Laptev_particle = check_laptev(traj_loc, lon_loc, lat_loc, time_loc) if Laptev_particle: particles_origin[location, release_time] = 1 return particles_origin def interp_np(nav_lon, nav_lat, var_in, lon_ANHA12, lat_ANHA12): # Interpolate some field to ANHA12 grid from scipy.interpolate import griddata LatLonPair = (nav_lon, nav_lat) var_out = griddata(LatLonPair, var_in, (lon_ANHA12, lat_ANHA12), method='cubic') # Take nearest neighbour interpolation to fill nans var_fill = griddata(LatLonPair, var_in, (lon_ANHA12, lat_ANHA12), method='nearest') # fill nans with constant value (0.1) var_out[np.isnan(var_out)] = 
var_fill[np.isnan(var_out)] return var_out ``` Parameterization components: 1) Ice melt: - if (ice production < 0) --> ice is melting - units of ice melt, iiceprod, are in m/kt (180 s timestep) - convert m/kt to m/s - multiply iiceprod by the grid box area to get a volume of melt 2) Sediment forcing - sediment content forcing field: units of grams of sediment / m3 of ice - background sediment content amount (include higher on shelf regions) - Laptev Sea sediment amounts - multiply forcing field by sediment content - multiply sediment forcing field by ice melt (m3) to get grams of sediment - add sediment to surface grid box + solubility, Mn content ### (2) Sediment forcing field Load parcel trajectories ``` CB_traj, CB_time, CB_lon, CB_lat = load_tracks('/ocean/brogalla/GEOTRACES/parcels/trials/'+\ 'Particles_CB-20200205-extended-region2.nc') particles_origin = parcel_origin(CB_lon, CB_lat, CB_time, CB_traj) dim_parc = int((CB_lon.shape[0]/12)/np.ceil(CB_lon.shape[1]/(4*365))) dim_lons = len(set(CB_lon[0:dim_parc,0])) proportion_laptev = np.empty(CB_lon[0:dim_parc,0].shape) for location in range(0,dim_parc): proportion_laptev[location] = np.sum(particles_origin[location,:])/particles_origin.shape[1] parcel_lons = CB_lon[0:186, 0] parcel_lats = CB_lat[0:186, 0] ``` Forcing field dimensions ``` forcing_lons = mesh_lon[:,:] forcing_lats = mesh_lat[:,:] forcing_sed = np.zeros(forcing_lons.shape) ``` Interpolate Canada Basin proportions: ``` forcing_sed = interp_np(parcel_lons, parcel_lats, proportion_laptev, forcing_lons, forcing_lats) forcing_sed[forcing_sed < 0] = 0 # North of Nares Strait forcing_sed[(forcing_lons < -50) & (forcing_lons > -95) & (forcing_lats > 78) & (forcing_lats < 83.5)] = 0.03 # CAA background rate forcing_sed[(forcing_lons >-128) & (forcing_lons < -45) & (forcing_lats < 77) & (forcing_lats > 60)] = 0.03 # Beaufort Shelf background rate forcing_sed[(forcing_lons <-128) & (forcing_lats < 71.3) & (forcing_lats > 68)] = 0.02 Z2 = 
ndimage.gaussian_filter(forcing_sed, sigma=16, order=0) # Zero the forcing field outside of the domain: Z2[0:imin, :] = 0; Z2[imax:-1, :] = 0; Z2[:, 0:jmin] = 0; Z2[:, jmax:-1] = 0; fig, ax1, proj1 = pickle.load(open('/ocean/brogalla/GEOTRACES/pickles/mn-reference.pickle','rb')) x_model, y_model = proj1(forcing_lons, forcing_lats) CS1 = proj1.contourf(x_model, y_model, Z2, vmin=0.0, vmax=0.3, levels=np.arange(0,0.45,0.025), cmap=sed_cmap) x_sub, y_sub = proj1(mesh_lon, mesh_lat) proj1.plot(x_sub[imin:imax,jmax], y_sub[imin:imax,jmax], 'k-', lw=1.0,zorder=5) proj1.plot(x_sub[imin:imax,jmax].T, y_sub[imin:imax,jmax].T, 'k-', lw=1.0,zorder=5) proj1.plot(x_sub[imin:imax,jmin], y_sub[imin:imax,jmin], 'k-', lw=1.0,zorder=5) proj1.plot(x_sub[imin:imax,jmin].T, y_sub[imin:imax,jmin].T, 'k-', lw=1.0,zorder=5) proj1.plot(x_sub[imin,jmin:jmax], y_sub[imin,jmin:jmax], 'k-', lw=1.0,zorder=5) proj1.plot(x_sub[imin,jmin:jmax].T, y_sub[imin,jmin:jmax].T, 'k-', lw=1.0,zorder=5) proj1.plot(x_sub[imax,jmin:jmax], y_sub[imax,jmin:jmax], 'k-', lw=1.0,zorder=5) proj1.plot(x_sub[imax,jmin:jmax].T, y_sub[imax,jmin:jmax].T, 'k-', lw=1.0,zorder=5) x_parcel, y_parcel = proj1(parcel_lons, parcel_lats) proj1.scatter(x_parcel, y_parcel, s=20, zorder=2, c=proportion_laptev, edgecolor='k', \ cmap=sed_cmap, vmin=0, vmax=0.3, linewidths=0.3) cbaxes1 = fig.add_axes([0.52, 0.73, 0.33, 0.031]) CB1 = plt.colorbar(CS1, cax=cbaxes1, orientation='horizontal', ticks=np.arange(0,1.1,0.1)) CB1.ax.tick_params(labelsize=7) CB1.outline.set_linewidth(1.0) CB1.ax.set_title('Proportion of shelf sediments in sea ice', fontsize=7) ``` save to forcing field: ``` file_write = xr.Dataset( {'prop_shelf': (("y","x"), Z2)}, coords = { "y": np.zeros(2400), "x": np.zeros(1632), }, attrs = { 'long_name':'Proportion of shelf sediments in ice', 'units':'none', } ) file_write.to_netcdf('/ocean/brogalla/GEOTRACES/data/ice_sediment-20210722.nc') ```
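The core of `check_laptev()` above is a point-in-box test over two longitude/latitude rectangles plus a fall-month check. Stripped of the trajectory bookkeeping, the logic reduces to something like this (a standalone sketch; the sample points are made up for illustration):

```python
import datetime as dt

# The two shelf bounding boxes used above, as (lon_west, lon_east, lat_south, lat_north)
BOXES = [(-210, -170, 68, 74), (-230, -185, 70, 75)]
FALL_MONTHS = {9, 10, 11, 12}

def on_laptev_shelf_in_fall(lon, lat, when):
    """True if (lon, lat) falls inside either box during September-December."""
    in_box = any(w < lon < e and s < lat < n for (w, e, s, n) in BOXES)
    return in_box and when.month in FALL_MONTHS

# Made-up sample points:
print(on_laptev_shelf_in_fall(-190, 70, dt.datetime(2015, 10, 15)))  # inside box 1 in October: True
print(on_laptev_shelf_in_fall(-190, 70, dt.datetime(2015, 6, 15)))   # right place, wrong season: False
print(on_laptev_shelf_in_fall(-100, 70, dt.datetime(2015, 10, 15)))  # outside both boxes: False
```

In the notebook the same test is applied at every saved timestep of a parcel track, and the parcel is flagged as soon as any timestep satisfies it.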
# Enable application insights and add custom logs in your endpoint

## Get your Azure ML Workspace

```
!pip install azureml-core

import azureml
from azureml.core import Workspace
import mlflow.azureml

workspace_name = '<YOUR-WORKSPACE>'
resource_group = '<YOUR-RESOURCE-GROUP>'
subscription_id = '<YOUR-SUBSCRIPTION-ID>'

workspace = Workspace.get(name = workspace_name,
                          resource_group = resource_group,
                          subscription_id = subscription_id)
```

## Customize your entry script to add some custom logs

```
%%writefile /dbfs/models/churn-prediction/score.py

import mlflow
import json
import pandas as pd
import os
import xgboost as xgb
import time
import datetime

# Called when the deployed service starts
def init():
    global model
    global train_stats

    # Get the path where the deployed model can be found.
    model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), './churn-prediction')

    # Load model
    model = mlflow.xgboost.load_model(model_path)

# Handle requests to the service
def run(rawdata):
    try:
        data = pd.read_json(rawdata, orient = 'split')
        data_xgb = xgb.DMatrix(data)

        start_time = datetime.datetime.now()
        # Return the prediction
        prediction = predict(data_xgb)
        end_time = datetime.datetime.now()
        elapsed_ms = (end_time - start_time).total_seconds() * 1000
        print(f'TOTAL_TIME (ms): {elapsed_ms}') # TRACK IN APP INSIGHTS

        info = json.dumps({"payload": rawdata})
        print(f'OUR_PAYLOAD: {info}') # TRACK IN APP INSIGHTS

        return prediction
    except Exception as e:
        error = str(e)
        print(f'ERROR: {error + time.strftime("%H:%M:%S")}') # TRACK IN APP INSIGHTS
        raise Exception(error)

def predict(data):
    prediction = model.predict(data)[0]
    return {"churn-prediction": str(prediction)}
```

## Define your inference config (same as we already did)

```
from azureml.core.model import InferenceConfig
from azureml.core.environment import Environment
from azureml.core.conda_dependencies import CondaDependencies

# Create the environment
env = Environment(name='xgboost_env')

conda_dep = CondaDependencies('/dbfs/models/churn-prediction/conda.yaml')

# Define the packages
needed by the model and scripts conda_dep.add_pip_package("azureml-defaults") # Adds dependencies to PythonSection of myenv env.python.conda_dependencies=conda_dep inference_config = InferenceConfig(entry_script="/dbfs/models/churn-prediction/score.py", environment=env) ``` ## Get the registered model ``` from azureml.core.model import Model model_name = 'churn-model' model_azure = Model.list(workspace = workspace, name = model_name)[0] ``` ## New deployment on AKS. Now with App insights enable ``` from azureml.core.webservice import AksWebservice from azureml.core.compute import AksCompute endpoint_name = 'api-churn-prod' aks_name = 'aks-e2e-ds' aks_target = AksCompute(workspace, aks_name) aks_config = AksWebservice.deploy_configuration(enable_app_insights = True) aks_service = Model.deploy(workspace=workspace, name=endpoint_name, models=[model_azure], inference_config=inference_config, deployment_config=aks_config, deployment_target=aks_target, overwrite=True) aks_service.wait_for_deployment(show_output = True) print(aks_service.state) ``` ### Call the API and see the results in the `Application Insights` ``` import requests payload1='{"columns":["Idade","RendaMensal","PercentualUtilizacaoLimite","QtdTransacoesNegadas","AnosDeRelacionamentoBanco","JaUsouChequeEspecial","QtdEmprestimos","NumeroAtendimentos","TMA","IndiceSatisfacao","Saldo","CLTV"],"data":[[21,9703,1.0,5.0,12.0,0.0,1.0,100,300,2,6438,71]]}' payload2='{"columns":["Idade","RendaMensal","PercentualUtilizacaoLimite","QtdTransacoesNegadas","AnosDeRelacionamentoBanco","JaUsouChequeEspecial","QtdEmprestimos","NumeroAtendimentos","TMA","IndiceSatisfacao","Saldo","CLTV"],"data":[[21,9703,1.0,5.0,12.0,0.0,1.0,1,5,5,6438,71]]}' headers = { 'Content-Type': 'application/json' } prod_service_key = aks_service.get_keys()[0] if len(aks_service.get_keys()) > 0 else None headers["Authorization"] = "Bearer {service_key}".format(service_key=prod_service_key) for count in range(5): print(f'Predição: {count}') response1 
= requests.request("POST", aks_service.scoring_uri, headers=headers, data=payload1)
    response2 = requests.request("POST", aks_service.scoring_uri, headers=headers, data=payload2)
    print(response1.text)
    print(response2.text)
```

### Let's try to simulate some errors as well

```
payload3='{"columns":["Idade","RendaMensalERRO","PercentualUtilizacaoLimite","QtdTransacoesNegadas","AnosDeRelacionamentoBanco","JaUsouChequeEspecial","QtdEmprestimos","NumeroAtendimentos","TMA","IndiceSatisfacao","Saldo","CLTV"],"data":[[21,9703,1.0,5.0,12.0,0.0,1.0,1,5,5,6438,71]]}'

for count in range(10):
    response1 = requests.request("POST", aks_service.scoring_uri, headers=headers, data=payload3)
    print(response1.text)
    print('\n')
```

## Update your endpoint to enable/disable `Application Insights`

You can enable `Application Insights` on an existing endpoint as well; to do this, just run the code below, passing a True/False value.

### Get your endpoint (the ACI/AKS endpoint you already deployed)

```
from azureml.core.webservice import Webservice

endpoint_name = 'api-churn-prod'
aks_service = Webservice(workspace, endpoint_name)
```

### Enable or disable `Application Insights`

```
aks_service.update(enable_app_insights=True)
```
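Since anything the scoring script prints to STDOUT ends up as a trace in Application Insights, it pays to keep custom log lines machine-parseable so you can query them later. A small helper along these lines could replace the ad-hoc `print` calls in `score.py` (a sketch; `log_event` is a name invented here, not part of the Azure ML SDK):

```python
import json
import datetime

def log_event(tag, **fields):
    """Print one JSON line; App Insights stores STDOUT prints as trace messages."""
    record = {"tag": tag,
              "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
              **fields}
    print(json.dumps(record))
    return record

# Example: the same payload logging as in score.py, but structured
# (column name and values are made up for illustration)
event = log_event("OUR_PAYLOAD", payload={"columns": ["Idade"], "data": [[21]]})
```

With every log line being valid JSON carrying a `tag`, a Kusto query in App Insights can filter traces by tag and parse the rest of the fields instead of string-matching free-form messages.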
# Talks markdown generator for academicpages Takes a TSV of talks with metadata and converts them for use with [academicpages.github.io](academicpages.github.io). This is an interactive Jupyter notebook ([see more info here](http://jupyter-notebook-beginner-guide.readthedocs.io/en/latest/what_is_jupyter.html)). The core python code is also in `talks.py`. Run either from the `markdown_generator` folder after replacing `talks.tsv` with one containing your data. TODO: Make this work with BibTex and other databases, rather than Stuart's non-standard TSV format and citation style. ``` import pandas as pd import os ``` ## Data format The TSV needs to have the following columns: title, type, url_slug, venue, date, location, talk_url, description, with a header at the top. Many of these fields can be blank, but the columns must be in the TSV. - Fields that cannot be blank: `title`, `url_slug`, `date`. All else can be blank. `type` defaults to "Talk" - `date` must be formatted as YYYY-MM-DD. - `url_slug` will be the descriptive part of the .md file and the permalink URL for the page about the paper. - The .md file will be `YYYY-MM-DD-[url_slug].md` and the permalink will be `https://[yourdomain]/talks/YYYY-MM-DD-[url_slug]` - The combination of `url_slug` and `date` must be unique, as it will be the basis for your filenames This is how the raw file looks (it doesn't look pretty, use a spreadsheet or other program to edit and create). ``` !cat talks.tsv ``` ## Import TSV Pandas makes this easy with the read_csv function. We are using a TSV, so we specify the separator as a tab, or `\t`. I found it important to put this data in a tab-separated values format, because there are a lot of commas in this kind of data and comma-separated values can get messed up. However, you can modify the import statement, as pandas also has read_excel(), read_json(), and others. 
```
talks = pd.read_csv("talks.tsv", sep="\t", header=0)
talks
```

## Escape special characters

YAML is very picky about how it takes a valid string, so we are replacing single and double quotes (and ampersands) with their HTML encoded equivalents. This makes them look not so readable in raw format, but they are parsed and rendered nicely.

```
html_escape_table = {
    "&": "&amp;",
    '"': "&quot;",
    "'": "&apos;"
    }

def html_escape(text):
    if type(text) is str:
        return "".join(html_escape_table.get(c,c) for c in text)
    else:
        return "False"
```

## Creating the markdown files

This is where the heavy lifting is done. This loops through all the rows in the TSV dataframe, then starts to concatenate a big string (`md`) that contains the markdown for each type. It does the YAML metadata first, then does the description for the individual page.

```
loc_dict = {}

for row, item in talks.iterrows():

    md_filename = str(item.date) + "-" + item.url_slug + ".md"
    html_filename = str(item.date) + "-" + item.url_slug
    year = item.date[:4]

    md = "---\ntitle: \"" + item.title + '"\n'
    md += "collection: talks" + "\n"

    if len(str(item.type)) > 3:
        md += 'type: "' + item.type + '"\n'
    else:
        md += 'type: "Talk"\n'

    md += "permalink: /talks/" + html_filename + "\n"

    if len(str(item.venue)) > 3:
        md += 'venue: "' + item.venue + '"\n'

    # date cannot be blank, so write it unconditionally
    md += "date: " + str(item.date) + "\n"

    if len(str(item.location)) > 3:
        md += 'location: "' + str(item.location) + '"\n'

    md += "---\n"

    if len(str(item.talk_url)) > 3:
        md += "\n[More information here](" + item.talk_url + ")\n"

    if len(str(item.description)) > 3:
        md += "\n" + html_escape(item.description) + "\n"

    md_filename = os.path.basename(md_filename)
    #print(md)

    with open("../_talks/" + md_filename, 'w') as f:
        f.write(md)
```

These files are in the talks directory, one directory below where we're working from.

```
!ls ../_talks
!cat ../_talks/2013-03-01-tutorial-1.md
```
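Before running the full loop, it can be worth sanity-checking `html_escape` on a sample title (the function reproduced standalone below, with a made-up title):

```python
# Same escaping table and function as in the notebook above
html_escape_table = {"&": "&amp;", '"': "&quot;", "'": "&apos;"}

def html_escape(text):
    """Replace YAML-hostile characters with their HTML entities."""
    if type(text) is str:
        return "".join(html_escape_table.get(c, c) for c in text)
    else:
        return "False"  # non-strings (e.g. NaN from empty TSV cells) become "False"

print(html_escape('Ethics & "AI"'))  # Ethics &amp; &quot;AI&quot;
```

Note the non-string branch: empty cells come out of pandas as `NaN` floats, and the function maps any non-string to the literal string `"False"`, which the generation loop then treats as a short (skippable) value.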
**Tools - pandas**

*The `pandas` library provides high-performance, easy-to-use data structures and data analysis tools. The main data structure is the `DataFrame`, which you can think of as an in-memory 2D table (like a spreadsheet, with column names and row labels). Many features available in Excel are available programmatically, such as creating pivot tables, computing columns based on other columns, plotting graphs, etc. You can also group rows by column value, or join tables much like in SQL. Pandas is also great at handling time series.*

Prerequisites:
* NumPy – if you are not familiar with NumPy, we recommend that you go through the [NumPy tutorial](tools_numpy.ipynb) now.

# Setup

First, let's import `pandas`. People usually import it as `pd`:

```
import pandas as pd
```

# `Series` objects

The `pandas` library contains these useful data structures:
* `Series` objects, which we will discuss now. A `Series` object is a 1D array, similar to a column in a spreadsheet (with a column name and row labels).
* `DataFrame` objects. This is a 2D table, similar to a spreadsheet (with column names and row labels).
* `Panel` objects. You can see a `Panel` as a dictionary of `DataFrame`s. These are less used, so we will not discuss them here.

## Creating a `Series`

Let's start by creating our first `Series` object!

```
s = pd.Series([2,-1,3,5])
s
```

## Similar to a 1D `ndarray`

`Series` objects behave much like one-dimensional NumPy `ndarray`s, and you can often pass them as parameters to NumPy functions:

```
import numpy as np
np.exp(s)
```

Arithmetic operations on `Series` are also possible, and they apply *elementwise*, just like for `ndarray`s:

```
s + [1000,2000,3000,4000]
```

Similar to NumPy, if you add a single number to a `Series`, that number is added to all items in the `Series`.
This is called *broadcasting*:

```
s + 1000
```

The same is true for all binary operations such as `*` or `/`, and even conditional operations:

```
s < 0
```

## Index labels

Each item in a `Series` object has a unique identifier called the *index label*. By default, it is simply the rank of the item in the `Series` (starting at `0`), but you can also set the index labels manually:

```
s2 = pd.Series([68, 83, 112, 68], index=["alice", "bob", "charles", "darwin"])
s2
```

You can then use the `Series` just like a `dict`:

```
s2["bob"]
```

You can still access the items by integer location, like in a regular array:

```
s2[1]
```

To make it clear when you are accessing by label or by integer location, it is recommended to always use the `loc` attribute when accessing by label, and the `iloc` attribute when accessing by integer location:

```
s2.loc["bob"]
s2.iloc[1]
```

Slicing a `Series` also slices the index labels:

```
s2.iloc[1:3]
```

This can lead to unexpected results when using the default numeric labels, so be careful:

```
surprise = pd.Series([1000, 1001, 1002, 1003])
surprise

surprise_slice = surprise[2:]
surprise_slice
```

Oh look! The first element has index label `2`. The element with index label `0` is absent from the slice:

```
try:
    surprise_slice[0]
except KeyError as e:
    print("Key error:", e)
```

But remember that you can access elements by integer location using the `iloc` attribute. This illustrates another reason why it's always better to use `loc` and `iloc` to access `Series` objects:

```
surprise_slice.iloc[0]
```

## Init from `dict`

You can create a `Series` object from a `dict`.
The keys will be used as index labels: ``` weights = {"alice": 68, "bob": 83, "colin": 86, "darwin": 68} s3 = pd.Series(weights) s3 ``` You can control which elements you want to include in the `Series` and in what order by explicitly specifying the desired `index`: ``` s4 = pd.Series(weights, index = ["colin", "alice"]) s4 ``` ## Automatic alignment When an operation involves multiple `Series` objects, `pandas` automatically aligns items by matching index labels. ``` print(s2.keys()) print(s3.keys()) s2 + s3 ``` The resulting `Series` contains the union of index labels from `s2` and `s3`. Since `"colin"` is missing from `s2` and `"charles"` is missing from `s3`, these items have a `NaN` result value. (ie. Not-a-Number means *missing*). Automatic alignment is very handy when working with data that may come from various sources with varying structure and missing items. But if you forget to set the right index labels, you can have surprising results: ``` s5 = pd.Series([1000,1000,1000,1000]) print("s2 =", s2.values) print("s5 =", s5.values) s2 + s5 ``` Pandas could not align the `Series`, since their labels do not match at all, hence the full `NaN` result. ## Init with a scalar You can also initialize a `Series` object using a scalar and a list of index labels: all items will be set to the scalar. ``` meaning = pd.Series(42, ["life", "universe", "everything"]) meaning ``` ## `Series` name A `Series` can have a `name`: ``` s6 = pd.Series([83, 68], index=["bob", "alice"], name="weights") s6 ``` ## Plotting a `Series` Pandas makes it easy to plot `Series` data using matplotlib (for more details on matplotlib, check out the [matplotlib tutorial](tools_matplotlib.ipynb)). Just import matplotlib and call the `plot()` method: ``` %matplotlib inline import matplotlib.pyplot as plt temperatures = [4.4,5.1,6.1,6.2,6.1,6.1,5.7,5.2,4.7,4.1,3.9,3.5] s7 = pd.Series(temperatures, name="Temperature") s7.plot() plt.show() ``` There are *many* options for plotting your data. 
It is not necessary to list them all here: if you need a particular type of plot (histograms, pie charts, etc.), just look for it in the excellent [Visualization](http://pandas.pydata.org/pandas-docs/stable/visualization.html) section of pandas' documentation, and look at the example code. # Handling time Many datasets have timestamps, and pandas is awesome at manipulating such data: * it can represent periods (such as 2016Q3) and frequencies (such as "monthly"), * it can convert periods to actual timestamps, and *vice versa*, * it can resample data and aggregate values any way you like, * it can handle timezones. ## Time range Let's start by creating a time series using `pd.date_range()`. This returns a `DatetimeIndex` containing one datetime per hour for 12 hours starting on October 29th 2016 at 5:30pm. ``` dates = pd.date_range('2016/10/29 5:30pm', periods=12, freq='H') dates ``` This `DatetimeIndex` may be used as an index in a `Series`: ``` temp_series = pd.Series(temperatures, dates) temp_series ``` Let's plot this series: ``` temp_series.plot(kind="bar") plt.grid(True) plt.show() ``` ## Resampling Pandas lets us resample a time series very simply. Just call the `resample()` method and specify a new frequency: ``` temp_series_freq_2H = temp_series.resample("2H") temp_series_freq_2H ``` The resampling operation is actually a deferred operation, which is why we did not get a `Series` object, but a `DatetimeIndexResampler` object instead. To actually perform the resampling operation, we can simply call the `mean()` method: Pandas will compute the mean of every pair of consecutive hours: ``` temp_series_freq_2H = temp_series_freq_2H.mean() ``` Let's plot the result: ``` temp_series_freq_2H.plot(kind="bar") plt.show() ``` Note how the values have automatically been aggregated into 2-hour periods. If we look at the 6-8pm period, for example, we had a value of `5.1` at 6:30pm, and `6.1` at 7:30pm. 
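That aggregation can be checked directly. The sketch below rebuilds `temp_series` so it runs on its own, then pulls out just the 6-8pm bucket:

```python
import pandas as pd

temperatures = [4.4, 5.1, 6.1, 6.2, 6.1, 6.1, 5.7, 5.2, 4.7, 4.1, 3.9, 3.5]
dates = pd.date_range('2016/10/29 5:30pm', periods=12, freq='H')
temp_series = pd.Series(temperatures, dates)

# The 6-8pm bucket holds the 6:30pm and 7:30pm readings (5.1 and 6.1)
evening = temp_series.resample("2H").mean().loc["2016-10-29 18:00"]
```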
After resampling, we just have one value of `5.6`, which is the mean of `5.1` and `6.1`. Rather than computing the mean, we could have used any other aggregation function, for example we can decide to keep the minimum value of each period: ``` temp_series_freq_2H = temp_series.resample("2H").min() temp_series_freq_2H ``` Or, equivalently, we could use the `apply()` method instead: ``` temp_series_freq_2H = temp_series.resample("2H").apply(np.min) temp_series_freq_2H ``` ## Upsampling and interpolation This was an example of downsampling. We can also upsample (ie. increase the frequency), but this creates holes in our data: ``` temp_series_freq_15min = temp_series.resample("15Min").mean() temp_series_freq_15min.head(n=10) # `head` displays the top n values ``` One solution is to fill the gaps by interpolating. We just call the `interpolate()` method. The default is to use linear interpolation, but we can also select another method, such as cubic interpolation: ``` temp_series_freq_15min = temp_series.resample("15Min").interpolate(method="cubic") temp_series_freq_15min.head(n=10) temp_series.plot(label="Period: 1 hour") temp_series_freq_15min.plot(label="Period: 15 minutes") plt.legend() plt.show() ``` ## Timezones By default datetimes are *naive*: they are not aware of timezones, so 2016-10-30 02:30 might mean October 30th 2016 at 2:30am in Paris or in New York. We can make datetimes timezone *aware* by calling the `tz_localize()` method: ``` temp_series_ny = temp_series.tz_localize("America/New_York") temp_series_ny ``` Note that `-04:00` is now appended to all the datetimes. This means that these datetimes refer to [UTC](https://en.wikipedia.org/wiki/Coordinated_Universal_Time) - 4 hours. 
We can convert these datetimes to Paris time like this: ``` temp_series_paris = temp_series_ny.tz_convert("Europe/Paris") temp_series_paris ``` You may have noticed that the UTC offset changes from `+02:00` to `+01:00`: this is because France switches to winter time at 3am that particular night (time goes back to 2am). Notice that 2:30am occurs twice! Let's go back to a naive representation (if you log some data hourly using local time, without storing the timezone, you might get something like this): ``` temp_series_paris_naive = temp_series_paris.tz_localize(None) temp_series_paris_naive ``` Now `02:30` is really ambiguous. If we try to localize these naive datetimes to the Paris timezone, we get an error: ``` try: temp_series_paris_naive.tz_localize("Europe/Paris") except Exception as e: print(type(e)) print(e) ``` Fortunately using the `ambiguous` argument we can tell pandas to infer the right DST (Daylight Saving Time) based on the order of the ambiguous timestamps: ``` temp_series_paris_naive.tz_localize("Europe/Paris", ambiguous="infer") ``` ## Periods The `pd.period_range()` function returns a `PeriodIndex` instead of a `DatetimeIndex`. For example, let's get all quarters in 2016 and 2017: ``` quarters = pd.period_range('2016Q1', periods=8, freq='Q') quarters ``` Adding a number `N` to a `PeriodIndex` shifts the periods by `N` times the `PeriodIndex`'s frequency: ``` quarters + 3 ``` The `asfreq()` method lets us change the frequency of the `PeriodIndex`. All periods are lengthened or shortened accordingly. For example, let's convert all the quarterly periods to monthly periods (zooming in): ``` quarters.asfreq("M") ``` By default, the `asfreq` zooms on the end of each period. 
We can tell it to zoom on the start of each period instead: ``` quarters.asfreq("M", how="start") ``` And we can zoom out: ``` quarters.asfreq("A") ``` Of course we can create a `Series` with a `PeriodIndex`: ``` quarterly_revenue = pd.Series([300, 320, 290, 390, 320, 360, 310, 410], index = quarters) quarterly_revenue quarterly_revenue.plot(kind="line") plt.show() ``` We can convert periods to timestamps by calling `to_timestamp`. By default this will give us the first day of each period, but by setting `how` and `freq`, we can get the last hour of each period: ``` last_hours = quarterly_revenue.to_timestamp(how="end", freq="H") last_hours ``` And back to periods by calling `to_period`: ``` last_hours.to_period() ``` Pandas also provides many other time-related functions that we recommend you check out in the [documentation](http://pandas.pydata.org/pandas-docs/stable/timeseries.html). To whet your appetite, here is one way to get the last business day of each month in 2016, at 9am: ``` months_2016 = pd.period_range("2016", periods=12, freq="M") one_day_after_last_days = months_2016.asfreq("D") + 1 last_bdays = one_day_after_last_days.to_timestamp() - pd.tseries.offsets.BDay() last_bdays.to_period("H") + 9 ``` # `DataFrame` objects A DataFrame object represents a spreadsheet, with cell values, column names and row index labels. You can define expressions to compute columns based on other columns, create pivot-tables, group rows, draw graphs, etc. You can see `DataFrame`s as dictionaries of `Series`. 
## Creating a `DataFrame` You can create a DataFrame by passing a dictionary of `Series` objects: ``` people_dict = { "weight": pd.Series([68, 83, 112], index=["alice", "bob", "charles"]), "birthyear": pd.Series([1984, 1985, 1992], index=["bob", "alice", "charles"], name="year"), "children": pd.Series([0, 3], index=["charles", "bob"]), "hobby": pd.Series(["Biking", "Dancing"], index=["alice", "bob"]), } people = pd.DataFrame(people_dict) people ``` A few things to note: * the `Series` were automatically aligned based on their index, * missing values are represented as `NaN`, * `Series` names are ignored (the name `"year"` was dropped), * `DataFrame`s are displayed nicely in Jupyter notebooks, woohoo! You can access columns pretty much as you would expect. They are returned as `Series` objects: ``` people["birthyear"] ``` You can also get multiple columns at once: ``` people[["birthyear", "hobby"]] ``` If you pass a list of columns and/or index row labels to the `DataFrame` constructor, it will guarantee that these columns and/or rows will exist, in that order, and no other column/row will exist. 
For example: ``` d2 = pd.DataFrame( people_dict, columns=["birthyear", "weight", "height"], index=["bob", "alice", "eugene"] ) d2 ``` Another convenient way to create a `DataFrame` is to pass all the values to the constructor as an `ndarray`, or a list of lists, and specify the column names and row index labels separately: ``` values = [ [1985, np.nan, "Biking", 68], [1984, 3, "Dancing", 83], [1992, 0, np.nan, 112] ] d3 = pd.DataFrame( values, columns=["birthyear", "children", "hobby", "weight"], index=["alice", "bob", "charles"] ) d3 ``` To specify missing values, you can either use `np.nan` or NumPy's masked arrays: ``` masked_array = np.ma.asarray(values, dtype=object) masked_array[(0, 2), (1, 2)] = np.ma.masked d3 = pd.DataFrame( masked_array, columns=["birthyear", "children", "hobby", "weight"], index=["alice", "bob", "charles"] ) d3 ``` Instead of an `ndarray`, you can also pass a `DataFrame` object: ``` d4 = pd.DataFrame( d3, columns=["hobby", "children"], index=["alice", "bob"] ) d4 ``` It is also possible to create a `DataFrame` with a dictionary (or list) of dictionaries (or lists): ``` people = pd.DataFrame({ "birthyear": {"alice":1985, "bob": 1984, "charles": 1992}, "hobby": {"alice":"Biking", "bob": "Dancing"}, "weight": {"alice":68, "bob": 83, "charles": 112}, "children": {"bob": 3, "charles": 0} }) people ``` ## Multi-indexing If all columns are tuples of the same size, then they are understood as a multi-index. The same goes for row index labels.
For example: ``` d5 = pd.DataFrame( { ("public", "birthyear"): {("Paris","alice"):1985, ("Paris","bob"): 1984, ("London","charles"): 1992}, ("public", "hobby"): {("Paris","alice"):"Biking", ("Paris","bob"): "Dancing"}, ("private", "weight"): {("Paris","alice"):68, ("Paris","bob"): 83, ("London","charles"): 112}, ("private", "children"): {("Paris", "alice"):np.nan, ("Paris","bob"): 3, ("London","charles"): 0} } ) d5 ``` You can now get a `DataFrame` containing all the `"public"` columns very simply: ``` d5["public"] d5["public", "hobby"] # Same result as d5["public"]["hobby"] ``` ## Dropping a level Let's look at `d5` again: ``` d5 ``` There are two levels of columns, and two levels of indices. We can drop a column level by calling `droplevel()` (the same goes for indices): ``` d5.columns = d5.columns.droplevel(level = 0) d5 ``` ## Transposing You can swap columns and indices using the `T` attribute: ``` d6 = d5.T d6 ``` ## Stacking and unstacking levels Calling the `stack()` method will push the lowest column level after the lowest index: ``` d7 = d6.stack() d7 ``` Note that many `NaN` values appeared. This makes sense because many new combinations did not exist before (eg. there was no `bob` in `London`). Calling `unstack()` will do the reverse, once again creating many `NaN` values. ``` d8 = d7.unstack() d8 ``` If we call `unstack` again, we end up with a `Series` object: ``` d9 = d8.unstack() d9 ``` The `stack()` and `unstack()` methods let you select the `level` to stack/unstack. You can even stack/unstack multiple levels at once: ``` d10 = d9.unstack(level = (0,1)) d10 ``` ## Most methods return modified copies As you may have noticed, the `stack()` and `unstack()` methods do not modify the object they apply to. Instead, they work on a copy and return that copy. This is true of most methods in pandas. ## Accessing rows Let's go back to the `people` `DataFrame`: ``` people ``` The `loc` attribute lets you access rows instead of columns. 
The result is a `Series` object in which the `DataFrame`'s column names are mapped to row index labels: ``` people.loc["charles"] ``` You can also access rows by integer location using the `iloc` attribute: ``` people.iloc[2] ``` You can also get a slice of rows, and this returns a `DataFrame` object: ``` people.iloc[1:3] ``` Finally, you can pass a boolean array to get the matching rows: ``` people[np.array([True, False, True])] ``` This is most useful when combined with boolean expressions: ``` people[people["birthyear"] < 1990] ``` ## Adding and removing columns You can generally treat `DataFrame` objects like dictionaries of `Series`, so the following operations work fine: ``` people people["age"] = 2018 - people["birthyear"] # adds a new column "age" people["over 30"] = people["age"] > 30 # adds another column "over 30" birthyears = people.pop("birthyear") del people["children"] people birthyears ``` When you add a new column, it must have the same number of rows. Missing rows are filled with NaN, and extra rows are ignored: ``` people["pets"] = pd.Series({"bob": 0, "charles": 5, "eugene":1}) # alice is missing, eugene is ignored people ``` When adding a new column, it is added at the end (on the right) by default. You can also insert a column anywhere else using the `insert()` method: ``` people.insert(1, "height", [172, 181, 185]) people ``` ## Assigning new columns You can also create new columns by calling the `assign()` method.
Note that this returns a new `DataFrame` object, the original is not modified: ``` people.assign( body_mass_index = people["weight"] / (people["height"] / 100) ** 2, has_pets = people["pets"] > 0 ) ``` Note that you cannot access columns created within the same assignment: ``` try: people.assign( body_mass_index = people["weight"] / (people["height"] / 100) ** 2, overweight = people["body_mass_index"] > 25 ) except KeyError as e: print("Key error:", e) ``` The solution is to split this assignment into two consecutive assignments: ``` d6 = people.assign(body_mass_index = people["weight"] / (people["height"] / 100) ** 2) d6.assign(overweight = d6["body_mass_index"] > 25) ``` Having to create a temporary variable `d6` is not very convenient. You may want to just chain the assignment calls, but it does not work because the `people` object is not actually modified by the first assignment: ``` try: (people .assign(body_mass_index = people["weight"] / (people["height"] / 100) ** 2) .assign(overweight = people["body_mass_index"] > 25) ) except KeyError as e: print("Key error:", e) ``` But fear not, there is a simple solution. You can pass a function to the `assign()` method (typically a `lambda` function), and this function will be called with the `DataFrame` as a parameter: ``` (people .assign(body_mass_index = lambda df: df["weight"] / (df["height"] / 100) ** 2) .assign(overweight = lambda df: df["body_mass_index"] > 25) ) ``` Problem solved! ## Evaluating an expression A great feature supported by pandas is expression evaluation. This relies on the `numexpr` library which must be installed. ``` people.eval("weight / (height/100) ** 2 > 25") ``` Assignment expressions are also supported.
Let's set `inplace=True` to directly modify the `DataFrame` rather than getting a modified copy: ``` people.eval("body_mass_index = weight / (height/100) ** 2", inplace=True) people ``` You can use a local or global variable in an expression by prefixing it with `'@'`: ``` overweight_threshold = 30 people.eval("overweight = body_mass_index > @overweight_threshold", inplace=True) people ``` ## Querying a `DataFrame` The `query()` method lets you filter a `DataFrame` based on a query expression: ``` people.query("age > 30 and pets == 0") ``` ## Sorting a `DataFrame` You can sort a `DataFrame` by calling its `sort_index` method. By default it sorts the rows by their index label, in ascending order, but let's reverse the order: ``` people.sort_index(ascending=False) ``` Note that `sort_index` returned a sorted *copy* of the `DataFrame`. To modify `people` directly, we can set the `inplace` argument to `True`. Also, we can sort the columns instead of the rows by setting `axis=1`: ``` people.sort_index(axis=1, inplace=True) people ``` To sort the `DataFrame` by the values instead of the labels, we can use `sort_values` and specify the column to sort by: ``` people.sort_values(by="age", inplace=True) people ``` ## Plotting a `DataFrame` Just like for `Series`, pandas makes it easy to draw nice graphs based on a `DataFrame`. For example, it is trivial to create a line plot from a `DataFrame`'s data by calling its `plot` method: ``` people.plot(kind = "line", x = "body_mass_index", y = ["height", "weight"]) plt.show() ``` You can pass extra arguments supported by matplotlib's functions. 
For example, we can create a scatter plot and pass it a list of sizes using the `s` argument of matplotlib's `scatter()` function: ``` people.plot(kind = "scatter", x = "height", y = "weight", s=[40, 120, 200]) plt.show() ``` Again, there are way too many options to list here: the best option is to scroll through the [Visualization](http://pandas.pydata.org/pandas-docs/stable/visualization.html) page in pandas' documentation, find the plot you are interested in and look at the example code. ## Operations on `DataFrame`s Although `DataFrame`s do not try to mimic NumPy arrays, there are a few similarities. Let's create a `DataFrame` to demonstrate this: ``` grades_array = np.array([[8,8,9],[10,9,9],[4, 8, 2], [9, 10, 10]]) grades = pd.DataFrame(grades_array, columns=["sep", "oct", "nov"], index=["alice","bob","charles","darwin"]) grades ``` You can apply NumPy mathematical functions on a `DataFrame`: the function is applied to all values: ``` np.sqrt(grades) ``` Similarly, adding a single value to a `DataFrame` will add that value to all elements in the `DataFrame`. This is called *broadcasting*: ``` grades + 1 ``` Of course, the same is true for all other binary operations, including arithmetic (`*`,`/`,`**`...) and conditional (`>`, `==`...) operations: ``` grades >= 5 ``` Aggregation operations, such as computing the `max`, the `sum` or the `mean` of a `DataFrame`, apply to each column, and you get back a `Series` object: ``` grades.mean() ``` The `all` method is also an aggregation operation: it checks whether all values are `True` or not. Let's see during which months all students got a grade greater than `5`: ``` (grades > 5).all() ``` Most of these functions take an optional `axis` parameter which lets you specify along which axis of the `DataFrame` you want the operation executed. The default is `axis=0`, meaning that the operation is executed vertically (on each column). You can set `axis=1` to execute the operation horizontally (on each row).
For example, let's find out which students had all grades greater than `5`: ``` (grades > 5).all(axis = 1) ``` The `any` method returns `True` if any value is `True`. Let's see who got at least one grade 10: ``` (grades == 10).any(axis = 1) ``` If you add a `Series` object to a `DataFrame` (or execute any other binary operation), pandas attempts to broadcast the operation to all *rows* in the `DataFrame`. This only works if the `Series` has the same size as the `DataFrame`'s rows. For example, let's subtract the `mean` of the `DataFrame` (a `Series` object) from the `DataFrame`: ``` grades - grades.mean() # equivalent to: grades - [7.75, 8.75, 7.50] ``` We subtracted `7.75` from all September grades, `8.75` from October grades and `7.50` from November grades. It is equivalent to subtracting this `DataFrame`: ``` pd.DataFrame([[7.75, 8.75, 7.50]]*4, index=grades.index, columns=grades.columns) ``` If you want to subtract the global mean from every grade, here is one way to do it: ``` grades - grades.values.mean() # subtracts the global mean (8.00) from all grades ``` ## Automatic alignment Similar to `Series`, when operating on multiple `DataFrame`s, pandas automatically aligns them by row index label, but also by column names. Let's create a `DataFrame` with bonus points for each person from October to December: ``` bonus_array = np.array([[0,np.nan,2],[np.nan,1,0],[0, 1, 0], [3, 3, 0]]) bonus_points = pd.DataFrame(bonus_array, columns=["oct", "nov", "dec"], index=["bob","colin", "darwin", "charles"]) bonus_points grades + bonus_points ``` Looks like the addition worked in some cases but way too many elements are now empty. That's because when aligning the `DataFrame`s, some columns and rows were only present on one side, and thus they were considered missing on the other side (`NaN`). Then adding `NaN` to a number results in `NaN`, hence the result. ## Handling missing data Dealing with missing data is a frequent task when working with real life data.
Pandas offers a few tools to handle missing data. Let's try to fix the problem above. For example, we can decide that missing data should result in a zero, instead of `NaN`. We can replace all `NaN` values with any value using the `fillna()` method: ``` (grades + bonus_points).fillna(0) ``` It's a bit unfair that we're setting grades to zero in September, though. Perhaps we should decide that missing grades are missing grades, but missing bonus points should be replaced by zeros: ``` fixed_bonus_points = bonus_points.fillna(0) fixed_bonus_points.insert(0, "sep", 0) fixed_bonus_points.loc["alice"] = 0 grades + fixed_bonus_points ``` That's much better: although we made up some data, we have not been too unfair. Another way to handle missing data is to interpolate. Let's look at the `bonus_points` `DataFrame` again: ``` bonus_points ``` Now let's call the `interpolate` method. By default, it interpolates vertically (`axis=0`), so let's tell it to interpolate horizontally (`axis=1`). ``` bonus_points.interpolate(axis=1) ``` Bob had 0 bonus points in October, and 2 in December. When we interpolate for November, we get the mean: 1 bonus point. Colin had 1 bonus point in November, but we do not know how many bonus points he had in September, so we cannot interpolate; this is why there is still a missing value in October after interpolation. To fix this, we can set the September bonus points to 0 before interpolation. ``` better_bonus_points = bonus_points.copy() better_bonus_points.insert(0, "sep", 0) better_bonus_points.loc["alice"] = 0 better_bonus_points = better_bonus_points.interpolate(axis=1) better_bonus_points ``` Great, now we have reasonable bonus points everywhere. Let's find out the final grades: ``` grades + better_bonus_points ``` It is slightly annoying that the September column ends up on the right.
This is because the `DataFrame`s we are adding do not have the exact same columns (the `grades` `DataFrame` is missing the `"dec"` column), so to make things predictable, pandas orders the final columns alphabetically. To fix this, we can simply add the missing column before adding: ``` grades["dec"] = np.nan final_grades = grades + better_bonus_points final_grades ``` There's not much we can do about December and Colin: it's bad enough that we are making up bonus points, but we can't reasonably make up grades (well I guess some teachers probably do). So let's call the `dropna()` method to get rid of rows that are full of `NaN`s: ``` final_grades_clean = final_grades.dropna(how="all") final_grades_clean ``` Now let's remove columns that are full of `NaN`s by setting the `axis` argument to `1`: ``` final_grades_clean = final_grades_clean.dropna(axis=1, how="all") final_grades_clean ``` ## Aggregating with `groupby` Similar to the SQL language, pandas allows grouping your data into groups to run calculations over each group. First, let's add some extra data about each person so we can group them, and let's go back to the `final_grades` `DataFrame` so we can see how `NaN` values are handled: ``` final_grades["hobby"] = ["Biking", "Dancing", np.nan, "Dancing", "Biking"] final_grades ``` Now let's group data in this `DataFrame` by hobby: ``` grouped_grades = final_grades.groupby("hobby") grouped_grades ``` We are ready to compute the average grade per hobby: ``` grouped_grades.mean() ``` That was easy! Note that the `NaN` values have simply been skipped when computing the means. ## Pivot tables Pandas supports spreadsheet-like [pivot tables](https://en.wikipedia.org/wiki/Pivot_table) that allow quick data summarization. 
To illustrate this, let's create a simple `DataFrame`: ``` bonus_points more_grades = final_grades_clean.stack().reset_index() more_grades.columns = ["name", "month", "grade"] more_grades["bonus"] = [np.nan, np.nan, np.nan, 0, np.nan, 2, 3, 3, 0, 0, 1, 0] more_grades ``` Now we can call the `pd.pivot_table()` function for this `DataFrame`, asking to group by the `name` column. By default, `pivot_table()` computes the mean of each numeric column: ``` pd.pivot_table(more_grades, index="name") ``` We can change the aggregation function by setting the `aggfunc` argument, and we can also specify the list of columns whose values will be aggregated: ``` pd.pivot_table(more_grades, index="name", values=["grade","bonus"], aggfunc=np.max) ``` We can also specify the `columns` to aggregate over horizontally, and request the grand totals for each row and column by setting `margins=True`: ``` pd.pivot_table(more_grades, index="name", values="grade", columns="month", margins=True) ``` Finally, we can specify multiple index or column names, and pandas will create multi-level indices: ``` pd.pivot_table(more_grades, index=("name", "month"), margins=True) ``` ## Overview functions When dealing with large `DataFrame`s, it is useful to get a quick overview of their content. Pandas offers a few functions for this. First, let's create a large `DataFrame` with a mix of numeric values, missing values and text values. Notice how Jupyter displays only the corners of the `DataFrame`: ``` much_data = np.fromfunction(lambda x,y: (x+y*y)%17*11, (10000, 26)) large_df = pd.DataFrame(much_data, columns=list("ABCDEFGHIJKLMNOPQRSTUVWXYZ")) large_df[large_df % 16 == 0] = np.nan large_df.insert(3,"some_text", "Blabla") large_df ``` The `head()` method returns the top 5 rows: ``` large_df.head() ``` Of course there's also a `tail()` function to view the bottom 5 rows.
You can pass the number of rows you want: ``` large_df.tail(n=2) ``` The `info()` method prints out a summary of each column's contents: ``` large_df.info() ``` Finally, the `describe()` method gives a nice overview of the main aggregated values over each column: * `count`: number of non-null (not NaN) values * `mean`: mean of non-null values * `std`: [standard deviation](https://en.wikipedia.org/wiki/Standard_deviation) of non-null values * `min`: minimum of non-null values * `25%`, `50%`, `75%`: 25th, 50th and 75th [percentile](https://en.wikipedia.org/wiki/Percentile) of non-null values * `max`: maximum of non-null values ``` large_df.describe() ``` # Saving & loading Pandas can save `DataFrame`s to various backends, including file formats such as CSV, Excel, JSON, HTML and HDF5, or to a SQL database. Let's create a `DataFrame` to demonstrate this: ``` my_df = pd.DataFrame( [["Biking", 68.5, 1985, np.nan], ["Dancing", 83.1, 1984, 3]], columns=["hobby","weight","birthyear","children"], index=["alice", "bob"] ) my_df ``` ## Saving Let's save it to CSV, HTML and JSON: ``` my_df.to_csv("my_df.csv") my_df.to_html("my_df.html") my_df.to_json("my_df.json") ``` Done! Let's take a peek at what was saved: ``` for filename in ("my_df.csv", "my_df.html", "my_df.json"): print("#", filename) with open(filename, "rt") as f: print(f.read()) print() ``` Note that the index is saved as the first column (with no name) in a CSV file, as `<th>` tags in HTML and as keys in JSON. Saving to other formats works very similarly, but some formats require extra libraries to be installed. For example, saving to Excel requires the openpyxl library: ``` try: my_df.to_excel("my_df.xlsx", sheet_name='People') except ImportError as e: print(e) ``` ## Loading Now let's load our CSV file back into a `DataFrame`: ``` my_df_loaded = pd.read_csv("my_df.csv", index_col=0) my_df_loaded ``` As you might guess, there are similar `read_json`, `read_html`, `read_excel` functions as well.
We can also read data straight from the Internet. For example, let's load all U.S. cities from [simplemaps.com](http://simplemaps.com/): ``` us_cities = None try: csv_url = "http://simplemaps.com/files/cities.csv" us_cities = pd.read_csv(csv_url, index_col=0) us_cities = us_cities.head() except IOError as e: print(e) us_cities ``` There are more options available, in particular regarding datetime format. Check out the [documentation](http://pandas.pydata.org/pandas-docs/stable/io.html) for more details. # Combining `DataFrame`s ## SQL-like joins One powerful feature of pandas is its ability to perform SQL-like joins on `DataFrame`s. Various types of joins are supported: inner joins, left/right outer joins and full joins. To illustrate this, let's start by creating a couple of simple `DataFrame`s: ``` city_loc = pd.DataFrame( [ ["CA", "San Francisco", 37.781334, -122.416728], ["NY", "New York", 40.705649, -74.008344], ["FL", "Miami", 25.791100, -80.320733], ["OH", "Cleveland", 41.473508, -81.739791], ["UT", "Salt Lake City", 40.755851, -111.896657] ], columns=["state", "city", "lat", "lng"]) city_loc city_pop = pd.DataFrame( [ [808976, "San Francisco", "California"], [8363710, "New York", "New-York"], [413201, "Miami", "Florida"], [2242193, "Houston", "Texas"] ], index=[3,4,5,6], columns=["population", "city", "state"]) city_pop ``` Now let's join these `DataFrame`s using the `merge()` function: ``` pd.merge(left=city_loc, right=city_pop, on="city") ``` Note that both `DataFrame`s have a column named `state`, so in the result they got renamed to `state_x` and `state_y`. Also, note that Cleveland, Salt Lake City and Houston were dropped because they don't exist in *both* `DataFrame`s. This is the equivalent of a SQL `INNER JOIN`.
If you want a `FULL OUTER JOIN`, where no city gets dropped and `NaN` values are added, you must specify `how="outer"`: ``` all_cities = pd.merge(left=city_loc, right=city_pop, on="city", how="outer") all_cities ``` Of course `LEFT OUTER JOIN` is also available by setting `how="left"`: only the cities present in the left `DataFrame` end up in the result. Similarly, with `how="right"` only cities in the right `DataFrame` appear in the result. For example: ``` pd.merge(left=city_loc, right=city_pop, on="city", how="right") ``` If the key to join on is actually in one (or both) `DataFrame`'s index, you must use `left_index=True` and/or `right_index=True`. If the key column names differ, you must use `left_on` and `right_on`. For example: ``` city_pop2 = city_pop.copy() city_pop2.columns = ["population", "name", "state"] pd.merge(left=city_loc, right=city_pop2, left_on="city", right_on="name") ``` ## Concatenation Rather than joining `DataFrame`s, we may just want to concatenate them. That's what `concat()` is for: ``` result_concat = pd.concat([city_loc, city_pop]) result_concat ``` Note that this operation aligned the data horizontally (by columns) but not vertically (by rows). In this example, we end up with multiple rows having the same index (eg. 3). Pandas handles this rather gracefully: ``` result_concat.loc[3] ``` Or you can tell pandas to just ignore the index: ``` pd.concat([city_loc, city_pop], ignore_index=True) ``` Notice that when a column does not exist in a `DataFrame`, it acts as if it was filled with `NaN` values. If we set `join="inner"`, then only columns that exist in *both* `DataFrame`s are returned: ``` pd.concat([city_loc, city_pop], join="inner") ``` You can concatenate `DataFrame`s horizontally instead of vertically by setting `axis=1`: ``` pd.concat([city_loc, city_pop], axis=1) ``` In this case it really does not make much sense because the indices do not align well (eg. 
Cleveland and San Francisco end up on the same row, because they shared the index label `3`). So let's reindex the `DataFrame`s by city name before concatenating: ``` pd.concat([city_loc.set_index("city"), city_pop.set_index("city")], axis=1) ``` This looks a lot like a `FULL OUTER JOIN`, except that the `state` columns were not renamed to `state_x` and `state_y`, and the `city` column is now the index. The `append()` method is a useful shorthand for concatenating `DataFrame`s vertically: ``` city_loc.append(city_pop) ``` As always in pandas, the `append()` method does *not* actually modify `city_loc`: it works on a copy and returns the modified copy. # Categories It is quite frequent to have values that represent categories, for example `1` for female and `2` for male, or `"A"` for Good, `"B"` for Average, `"C"` for Bad. These categorical values can be hard to read and cumbersome to handle, but fortunately pandas makes it easy. To illustrate this, let's take the `city_pop` `DataFrame` we created earlier, and add a column that represents a category: ``` city_eco = city_pop.copy() city_eco["eco_code"] = [17, 17, 34, 20] city_eco ``` Right now the `eco_code` column is full of apparently meaningless codes. Let's fix that. First, we will create a new categorical column based on the `eco_code`s: ``` city_eco["economy"] = city_eco["eco_code"].astype('category') city_eco["economy"].cat.categories ``` Now we can give each category a meaningful name using `rename_categories()`: ``` city_eco["economy"] = city_eco["economy"].cat.rename_categories(["Finance", "Energy", "Tourism"]) city_eco ``` Note that categorical values are sorted according to their categorical order, *not* their alphabetical order: ``` city_eco.sort_values(by="economy", ascending=False) ``` # What next? As you probably noticed by now, pandas is quite a large library with *many* features. Although we went through the most important features, there is still a lot to discover. Probably the best way to learn more is to get your hands dirty with some real-life data.
It is also a good idea to go through pandas' excellent [documentation](http://pandas.pydata.org/pandas-docs/stable/index.html), in particular the [Cookbook](http://pandas.pydata.org/pandas-docs/stable/cookbook.html).
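As a final, standalone recap of the categories section above, the ordering can also be declared up front with an explicitly ordered `pd.Categorical` (the city names and categories here are made up for illustration):

```python
import pandas as pd

# Declare the category order explicitly; ordered=True also enables comparisons
economy = pd.Categorical(["Tourism", "Finance", "Tourism"],
                         categories=["Finance", "Energy", "Tourism"],
                         ordered=True)
df = pd.DataFrame({"city": ["San Francisco", "New York", "Rome"],
                   "economy": economy})

# sort_values follows the declared order (Finance < Energy < Tourism),
# not the alphabetical order of the labels
df_sorted = df.sort_values(by="economy")
print(list(df_sorted["economy"]))
```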
``` import os import sys from PIL import Image import numpy as np import shutil sys.path.extend(['..']) from utils.config import process_config import tensorflow as tf from tensorflow.layers import (conv2d, max_pooling2d, average_pooling2d, batch_normalization, dropout, dense) from tensorflow.nn import (relu, softmax, leaky_relu) import matplotlib.pyplot as plt %matplotlib inline from sklearn.utils import shuffle # Paths to use later DATA = '../data_splitted/' CONF = '../configs/roman.json' ``` # Creating configs ``` config_tf = tf.ConfigProto(allow_soft_placement=True) config_tf.gpu_options.allow_growth = True config_tf.gpu_options.per_process_gpu_memory_fraction = 0.95 config = process_config(CONF) ``` # Necessary functions ``` def normalize(image): return (image - image.min()) / (image.max() - image.min()) def shuffle_sim(a, b): assert a.shape[0] == b.shape[0], 'Shapes must be equal' ind = np.arange(a.shape[0]) np.random.shuffle(ind) return a[ind], b[ind] def read_train_test(path_to_data): data = {} for dset in ['train', 'test']: path_ = os.path.join(path_to_data, dset) X, Y = [], [] classes = [d for d in os.listdir(path_) if os.path.isdir(os.path.join(path_, d))] classes.sort() for cl in classes: y = np.zeros((1, 8), dtype=np.int32) y[0, int(cl) - 1] = 1 cl_path = os.path.join(path_, cl) filenames = [os.path.join(cl_path, pict) for pict in os.listdir(cl_path) if pict.endswith('.jpg')] for im in filenames: image = np.asarray(Image.open(im), dtype=np.float32) X.append(normalize(image).reshape((1, image.shape[0], image.shape[1], image.shape[2]))) Y.append(y) a, b = shuffle_sim(np.concatenate(X), np.concatenate(Y)) data[dset] = [a, b] return data ``` # Model ``` class Model(): """ Model class represents a single model instance. :param config: Parsed config file. :param session_config: Formed session config file, if necessary.
:return Model """ def __init__(self, config, session_config=None): # Configuring session self.config = config if session_config is not None: self.sess = tf.Session(config=session_config) else: self.sess = tf.Session() # Creating inputs to network with tf.name_scope('inputs'): self.x = tf.placeholder( dtype=tf.float32, shape=(None, config.image_size, config.image_size, 3)) self.y = tf.placeholder(dtype=tf.int32, shape=(None, 8)) self.training = tf.placeholder(dtype=tf.bool, shape=()) # Creating epoch counter self.global_epoch = tf.Variable( 0, name='global_epoch', trainable=False, dtype=tf.int32) self.step = tf.assign(self.global_epoch, self.global_epoch + 1) # Building model if self.config.write_histograms: self.histograms = {} self.__build_model() # Summary writer self.summ_writer_train = tf.summary.FileWriter( config.train_summary_dir, graph=self.sess.graph) self.summ_writer_test = tf.summary.FileWriter(config.test_summary_dir) self.sess.run(tf.global_variables_initializer()) # Saver self.saver = tf.train.Saver(max_to_keep=1, name='saver') def __initialize_local(self): """ Initialize local tensorflow variables. :return None """ self.sess.run(tf.local_variables_initializer()) if self.config.write_histograms: self.histograms = {} def __add_histogram(self, scope, name, var): """ Add histograms to summary scope. :param scope: Scope object. :param name: Name of variable. :param var: Variable to add to histograms. :return None """ dict_var = scope.name + '/' + name hist = self.histograms.get(dict_var, None) if hist is not None: self.histograms[dict_var] = tf.concat([hist, var], 0) else: self.histograms[dict_var] = var tf.summary.histogram(name, self.histograms[dict_var]) def __block(self, inp, ch, num, c_ker=[(3, 3), (3, 3)], c_str=[(1, 1), (1, 1)], act=relu, mp_ker=(2, 2), mp_str=(2, 2), mode='conc'): """ Create single convolution block of network. :param inp: Input Tensor of shape (batch_size, inp_size, inp_size, channels). 
:param ch: Number of channels to have in output Tensor. (If mode is 'conc', number of channels will be ch * 2) :param num: Number of block for variable scope. :param c_ker: List of tuples with shapes of kernels for each convolution operation. :param c_str: List of tuples with shapes of strides for each convolution operation. :param act: Activation function. :param mp_ker: Tuple-like pooling layer kernel size. :param mp_str: Tuple-like pooling layer stride size. :param mode: One of ['conc', 'mp', 'ap'] modes, where 'mp' and 'ap' are max- and average- pooling respectively, and 'conc' - concatenate mode. :return Transformed Tensor """ with tf.variable_scope('block_' + str(num)) as name: conv1 = conv2d(inp, ch, c_ker[0], strides=c_str[0]) bn = batch_normalization(conv1) out = act(bn) if config.use_dropout_block: out = dropout( out, config.dropout_rate_block, training=self.training) print(out.shape) conv2 = conv2d(out, ch, c_ker[1], strides=c_str[1]) bn = batch_normalization(conv2) out = act(bn) print(out.shape) if config.write_histograms: self.__add_histogram(name, 'conv1', conv1) self.__add_histogram(name, 'conv2', conv2) if mode == 'mp': out = max_pooling2d(out, mp_ker, strides=mp_str) elif mode == 'ap': out = average_pooling2d(out, mp_ker, mp_str) elif mode == 'conc': mp = max_pooling2d(out, mp_ker, strides=mp_str) ap = average_pooling2d(out, mp_ker, mp_str) out = tf.concat([mp, ap], -1) else: raise ValueError('Unknown mode.') print(out.shape) return out def __build_model(self): """ Build model. 
:return None """ with tf.name_scope('layers'): out = self.__block(self.x, 16, 1, mode='conc') out = self.__block(out, 32, 2, mode='conc') out = self.__block(out, 64, 3, mode='conc') out = self.__block(out, 256, 4, c_str=[(1, 1), (2, 2)], mode='mp') dim = np.prod(out.shape[1:]) out = tf.reshape(out, [-1, dim]) print(out.shape) with tf.variable_scope('dense') as scope: dense_l = dense(out, 128) out = batch_normalization(dense_l) out = leaky_relu(out, alpha=0.01) if config.use_dropout_dense: out = dropout( out, rate=config.dropout_rate_dense, training=self.training) print(out.shape) self.predictions = dense(out, 8, activation=softmax) if self.config.write_histograms: self.__add_histogram(scope, 'dense', dense_l) self.__add_histogram(scope, 'pred', self.predictions) with tf.name_scope('metrics'): amax_labels = tf.argmax(self.y, 1) amax_pred = tf.argmax(self.predictions, 1) cur_loss = tf.losses.softmax_cross_entropy(self.y, self.predictions) self.loss, self.loss_update = tf.metrics.mean(cur_loss) cur_acc = tf.reduce_mean( tf.cast(tf.equal(amax_labels, amax_pred), dtype=tf.float32)) self.acc, self.acc_update = tf.metrics.mean(cur_acc) self.optimize = tf.train.AdamOptimizer( self.config.learning_rate).minimize(cur_loss) tf.summary.scalar('loss', self.loss) tf.summary.scalar('accuracy', self.acc) self.summary = tf.summary.merge_all() def train(self, dat, epochs, dat_v=None, batch=None): """ Train model on data. :param dat: List of data to train on, like [X, y]. Where X is an array with size (None, image_size, image_size, 3) and y is an array with size (None ,8). :param epochs: Number of epochs to run training procedure. :param dat_v: List of data to validate on, like [X, y]. Where X is an array with size (None, image_size, image_size, 3) and y is an array with size (None ,8). :param batch: Batch size to train on. 
:return None """ if batch is not None: steps = int(np.ceil(dat[0].shape[0] / batch)) else: batch = dat[0].shape[0] steps = 1 for epoch in range(epochs): self.__initialize_local() summary = tf.summary.Summary() for step in range(steps): start = step * batch finish = ( step + 1) * batch if step + 1 != steps else dat[0].shape[0] _, _, _ = self.sess.run( [self.loss_update, self.acc_update, self.optimize], feed_dict={ self.x: dat[0][start:finish], self.y: dat[1][start:finish], self.training: True }) summary, loss, acc, ep = self.sess.run( [self.summary, self.loss, self.acc, self.step]) self.summ_writer_train.add_summary(summary, ep) print( 'EP: {:3d}\tLOSS: {:.10f}\tACC: {:.10f}\t'.format( ep, loss, acc), end='') if dat_v is not None: self.test(dat_v, batch=batch) else: print() def test(self, dat, batch=None): """ Test model on specific data. :param dat: List of data to test on, like [X, y]. Where X is an array with size (None, image_size, image_size, 3) and y is an array with size (None ,8).. :param batch: Batch size to use. :return None """ if batch is not None: steps = int(np.ceil(dat[0].shape[0] / batch)) else: steps = 1 batch = dat[0].shape[0] self.__initialize_local() for step in range(steps): start = step * batch finish = ( step + 1) * batch if step + 1 != steps else dat[0].shape[0] _, _ = self.sess.run( [self.loss_update, self.acc_update], feed_dict={ self.x: dat[0][start:finish], self.y: dat[1][start:finish], self.training: False }) summary, loss, acc, ep = self.sess.run( [self.summary, self.loss, self.acc, self.global_epoch]) self.summ_writer_test.add_summary(summary, ep) print('VALID\tLOSS: {:.10f}\tACC: {:.10f}'.format(loss, acc)) def predict_proba(self, data, batch=None): """ Predict probability of each class. :param data: An array to predict on with shape (None, image_size, image_size, 3) :param batch: Batch size to use. 
:return Array of predictions with shape (None, 8) """ if batch is not None: steps = int(np.ceil(data.shape[0] / batch)) else: steps = 1 batch = data.shape[0] self.__initialize_local() preds_arr = [] for step in range(steps): start = step * batch finish = (step + 1) * batch if step + 1 != steps else data.shape[0] preds = self.sess.run( self.predictions, feed_dict={ self.x: data[start:finish], self.y: np.zeros((finish - start, 8)), self.training: False }) preds_arr.append(preds) return np.concatenate(preds_arr) def save_model(self, model_path=None): """ Save model weights to the folder with weights. :param model_path: String-like path to save in. :return None """ gstep = self.sess.run(self.global_epoch) if model_path is not None: self.saver.save(self.sess, model_path + 'model') else: self.saver.save(self.sess, config.checkpoint_dir + 'model') def load_model(self, model_path=None): """ Load model weights. :param model_path: String-like path to load from. :return None """ if model_path is not None: meta = [ os.path.join(filename) for filename in os.listdir(model_path) if filename.endswith('.meta') ][0] self.saver = tf.train.import_meta_graph( os.path.join(model_path, meta)) self.saver.restore(self.sess, tf.train.latest_checkpoint(model_path)) else: meta = [ os.path.join(filename) for filename in os.listdir(self.config.checkpoint_dir) if filename.endswith('.meta') ][0] self.saver = tf.train.import_meta_graph( os.path.join(self.config.checkpoint_dir, meta)) self.saver.restore( self.sess, tf.train.latest_checkpoint(self.config.checkpoint_dir)) def plot_misclassified(self, data, batch=None): """ Create and display a plot with misclassified images. 
:param data: List of data, like [X, y], to look for misclassified images in. :param batch: Batch size to use. :return None """ predicted = np.argmax(self.predict_proba(data[0], batch), axis=1) real = np.argmax(data[1], axis=1) matches = (real != predicted) mism_count = np.sum(matches.astype(np.int32)) images = data[0][matches] misk = predicted[matches] columns = 4 rows = int(np.ceil(mism_count / columns)) fig = plt.figure(figsize=(16, rows * 3)) fig.suptitle( '{} photos were misclassified.'.format(mism_count), fontsize=16, fontweight='bold') for i in range(mism_count): s_fig = fig.add_subplot(rows, columns, i + 1) s_fig.set_title('Classified as {}'.format(misk[i] + 1)) plt.imshow(images[i]) plt.show() def close(self): """ Close the model's session so the next one can be loaded. :return None """ self.sess.close() tf.reset_default_graph() # Creating model object m = Model(config, config_tf) # Loading previous model's weights m.load_model() # Reading train and test data dat = read_train_test(DATA) # Training model # IT IS NOT NECESSARY HERE, BUT THERE ARE LOGS FOR YOU TO SEE m.train(dat['train'], dat_v=dat['test'], epochs=100, batch=512) # Saving model weights # IT IS NOT NECESSARY HERE, IF YOU DIDN'T TRAIN MODEL m.save_model() ``` # Plotting misclassified images ``` m.plot_misclassified(dat['train'], batch=256) m.plot_misclassified(dat['test'], batch=256) m.close() ```
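The `'conc'` mode in `__block` above concatenates the max-pooled and average-pooled feature maps along the channel axis, which is why that mode doubles the channel count. A NumPy-only sketch of the same idea on a single toy image (the `pool2x2` helper is hypothetical, not part of the notebook's TF graph):

```python
import numpy as np

def pool2x2(x, mode="max"):
    """2x2 pooling with stride 2 on a (H, W, C) array."""
    h, w, c = x.shape
    x = x[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2, c)
    return x.max(axis=(1, 3)) if mode == "max" else x.mean(axis=(1, 3))

x = np.arange(4 * 4 * 3, dtype=np.float32).reshape(4, 4, 3)
mp = pool2x2(x, "max")
ap = pool2x2(x, "mean")
# concatenating along the channel axis doubles the channels: 3 -> 6
conc = np.concatenate([mp, ap], axis=-1)
print(x.shape, "->", conc.shape)  # (4, 4, 3) -> (2, 2, 6)
```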
# Some useful functions ``` import time from sklearn.metrics import confusion_matrix from sklearn.utils.multiclass import unique_labels import numpy as np import matplotlib import matplotlib.pyplot as plt %matplotlib inline from sklearn import svm from keras.models import Sequential from keras.layers import Conv2D, MaxPooling2D, MaxPool2D from keras.layers import Activation, Dense, Flatten, Dropout from sklearn.ensemble import IsolationForest from sklearn.neighbors import LocalOutlierFactor def plot_history(history): """ This function plots the training history of a model """ plt.figure(1) plt.subplot(211) plt.plot(history.history['accuracy']) plt.plot(history.history['val_accuracy']) plt.title('model accuracy') plt.ylabel('accuracy') plt.xlabel('epoch') plt.legend(['train', 'test'], loc='upper left') plt.ylim(0.0, 1.1) # summarize history for loss plt.subplot(212) plt.plot(history.history['loss']) plt.plot(history.history['val_loss']) plt.title('model loss') plt.ylabel('loss') plt.xlabel('epoch') plt.legend(['train', 'test'], loc='upper left') plt.ylim(0.0, 1.1) plt.show() def find_outliers(data,outliers_fraction,n_neighbors): """ This function finds and plots outliers using the Local Outlier Factor method """ # Example settings n_samples = data.shape[0] n_outliers = int(outliers_fraction * n_samples) n_inliers = n_samples - n_outliers # define outlier/anomaly detection methods to be compared anomaly_algorithms = [("Local Outlier Factor", LocalOutlierFactor( n_neighbors=n_neighbors, contamination=outliers_fraction))] # Define datasets blobs_params = dict(random_state=0, n_samples=n_inliers, n_features=2) datasets = [data] # # Compare given classifiers under given settings xx, yy = np.meshgrid(np.linspace(-10000, 40000, 150), np.linspace(-10000, 40000, 150)) # plt.figure(figsize=(5,5)) plt.subplots_adjust(left=.02, right=.98, bottom=.001, top=.96, wspace=.05, hspace=.01) plot_num = 1 rng = np.random.RandomState(42) for i_dataset, X_ in enumerate(datasets): for name,
algorithm in anomaly_algorithms: t0 = time.time() algorithm.fit(X_) t1 = time.time() plt.subplot(len(datasets), len(anomaly_algorithms), plot_num) if i_dataset == 0: plt.title(name) # fit the data and tag outliers if name == "Local Outlier Factor": y_pred = algorithm.fit_predict(X_) else: y_pred = algorithm.fit(X_).predict(X_) # plot the levels lines and the points if name != "Local Outlier Factor": # LOF does not implement predict Z = algorithm.predict(np.c_[xx.ravel(), yy.ravel()]) Z = Z.reshape(xx.shape) plt.contour(xx, yy, Z, levels=[0], linewidths=2, colors='black') # print(y_pred) colors = np.array(['b', 'y']) plt.scatter(X_[:, 0], X_[:, 1],alpha=0.5, color=colors[(y_pred + 1) // 2]) plt.text(.99, .01, ('%.2fs' % (t1 - t0)).lstrip('0'), transform=plt.gca().transAxes, size=15, horizontalalignment='right') plot_num += 1 plt.show() return y_pred def plot_confusion_matrix(y_true, y_pred, classes, normalize=False, title=None, cmap=plt.cm.Blues): """ This function prints and plots the confusion matrix. Normalization can be applied by setting `normalize=True`. """ if not title: if normalize: title = 'Normalized confusion matrix' else: title = 'Confusion matrix, without normalization' # Compute confusion matrix cm = confusion_matrix(y_true, y_pred) # Only use the labels that appear in the data classes = classes[unique_labels(y_true, y_pred)] if normalize: cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis] print("Normalized confusion matrix") else: print('Confusion matrix, without normalization') # print(cm) fig, ax = plt.subplots() im = ax.imshow(cm, interpolation='nearest', cmap=cmap) ax.figure.colorbar(im, ax=ax) # Loop over data dimensions and create text annotations. fmt = '.2f' if normalize else 'd' thresh = cm.max() / 2.
for i in range(cm.shape[0]): for j in range(cm.shape[1]): ax.text(j, i, format(cm[i, j], fmt), ha="center", va="center", color="white" if cm[i, j] > thresh else "black") fig.tight_layout() return ax def get_model(): """ This function creates and compiles a Sequential model used as a classifier """ model = Sequential() model.add(Conv2D(filters = 32, kernel_size = 2,input_shape=(SIZE,SIZE,1),padding='same')) model.add(Conv2D(filters = 32,kernel_size = 2,activation= 'relu',padding='same')) model.add(Activation('relu')) model.add(MaxPool2D(pool_size=2)) model.add(Conv2D(filters = 64,kernel_size = 2,activation= 'relu',padding='same')) model.add(MaxPool2D(pool_size=2)) model.add(Conv2D(filters = 128,kernel_size = 2,activation= 'relu',padding='same')) model.add(MaxPool2D(pool_size=2)) model.add(Dropout(0.3)) model.add(Flatten()) model.add(Dense(128)) model.add(Activation('relu')) model.add(Dropout(0.4)) model.add(Dense(8,activation = 'softmax')) model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy']) print('Compiled!') return model np.set_printoptions(precision=2) ``` # Loading original dataset I used the load_files function from the sklearn.datasets package to load the original dataset ``` from sklearn.datasets import load_files import numpy as np data_dir = '../input/xnaturev2/XNature/' # loading file names and their respective target labels into numpy array! def load_dataset(path): data = load_files(path) files = np.array(data['filenames']) targets = np.array(data['target']) target_labels = np.array(data['target_names']) return files,targets,target_labels data, labels,target_labels = load_dataset(data_dir) print('Loading complete!') print('Data set size : ' , data.shape[0]) ``` # 1. Prepare data Here I load the images and convert them to grayscale, then perform a PCA to visualize the data and find outliers, if any exist.
``` #again prepare data load files and labels from keras.preprocessing.image import ImageDataGenerator from keras.utils import np_utils from keras.preprocessing.image import array_to_img, img_to_array, load_img from skimage.color import rgb2gray SIZE=100 def convert_image_to_array(files): images_as_array=[] for file in files: # Convert to Numpy Array images_as_array.append(rgb2gray(img_to_array(load_img(file,target_size=(SIZE, SIZE))))) return images_as_array X = np.array(convert_image_to_array(data)) X=X.reshape(X.shape[0],X.shape[1],X.shape[2],1) # print('Original set shape : ',X.shape) print('1st original image shape ',X[0].shape) no_of_classes = len(np.unique(labels)) y = np_utils.to_categorical(labels,no_of_classes) ``` > ## Outlier removal A simple visualization can help identify outliers; in this case I used PCA ``` import matplotlib.pyplot as plt %matplotlib inline from sklearn.decomposition import PCA pca = PCA(2) # project the flattened 100*100 images down to 2 dimensions projected = pca.fit_transform(X.reshape(X.shape[0],SIZE*SIZE*1)) scatter=plt.scatter(projected[:, 0], projected[:, 1], c=labels,cmap=plt.cm.get_cmap('Set1', 8), edgecolor='none', alpha=0.8) plt.title("PCA") plt.xlabel('component 1') plt.ylabel('component 2') plt.colorbar() # plt.legend(handles=scatter.legend_elements()[0], labels=list(target_labels),loc='upper right', bbox_to_anchor=(1.5, 1)) plt.show() ``` Some points look like outliers, so I will use LocalOutlierFactor to remove some possible outliers ``` #plot outliers and show corresponding images y_pred=find_outliers(projected,0.001,27) outliers=X[y_pred==-1] lbs=y[y_pred==-1] for ol,lb in zip(outliers,lbs): print(target_labels[np.argmax([lb])]) plt.imshow(ol.reshape(SIZE,SIZE),cmap='gray') plt.show() ``` ### Remove outliers ``` X=X[y_pred!=-1] y=y[y_pred!=-1] print(X.shape) print(y.shape) ``` Split data into training and testing datasets ``` #split data into training and test sets from sklearn.model_selection import train_test_split # x_train, x_test,
y_train, y_test = train_test_split(X, y, test_size=0.33,shuffle=True) x_train, x_test, y_train, y_test = train_test_split(X, y, test_size=0.33,shuffle=True, random_state=42) x_train = x_train.astype('float32')/255 x_test = x_test.astype('float32')/255 print('Training set shape : ',x_train.shape) print('Testing set shape : ',x_test.shape) ``` ## Imbalance analysis A simple bar chart shows how imbalanced the classes are: the knife class has many more occurrences than the other classes ``` from matplotlib import pyplot as plt %matplotlib inline from collections import Counter import pandas as pd D = Counter(np.argmax(y_train,axis=1)) plt.title("number of occurrences by class") plt.bar(range(len(D)), D.values(), align='center') plt.xticks(range(len(D)), target_labels[list(D.keys())]) plt.show() ``` In this case the Knife class has much more data than the others, which could cause overfitting and misinterpretation of results. To eliminate this imbalance bias we need to balance the dataset. We can use different balancing methods, from manual augmentation to helper functions like balanced_batch_generator. The simplest are undersampling the other n-1 classes down to the number of elements in the smallest class, or oversampling the n-1 classes up to the number of elements in the largest class. Another well-known, easy method to address the imbalance problem is adding weights to the classes during training, as follows: ``` from sklearn.utils import class_weight y_numbers=y_train.argmax(axis=1) class_weights = class_weight.compute_class_weight('balanced', np.unique(y_numbers), y_numbers) class_weights = dict(enumerate(class_weights)) class_weights ``` To help us with the imbalance task, scikit-learn provides a function that calculates the weight of each class # 2.
Train and package the model Therefore, we only need to adjust some parameters and pass the class weights during the training of our model ``` #train the model from keras.callbacks import ModelCheckpoint from keras.callbacks import EarlyStopping from keras.callbacks import ReduceLROnPlateau from keras import backend as K model = get_model() model.summary() no_of_classes = len(np.unique(labels)) batch_size = 32 es = EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=5) checkpointer = ModelCheckpoint(filepath = 'cnn_xnatureV2_balanced_weight.hdf5', verbose = 1, save_best_only = True) reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.2,patience=3, verbose=1, min_lr=0.00005) history = model.fit(x_train,y_train, batch_size = 32, epochs=30, validation_split=0.2, class_weight=class_weights, callbacks = [es,checkpointer,reduce_lr], verbose=1, shuffle=True) plot_history(history) ``` # 3. Testing model ``` # load the weights that yielded the best validation accuracy # model.load_weights('cnn_xnatureV2_balanced_weight.hdf5') # evaluate and print test accuracy score = model.evaluate(x_test, y_test, verbose=0) print('\n', 'Test accuracy:', score[1]) # plotting some predictions y_pred = model.predict(x_test) fig = plt.figure(figsize=(16, 9)) for i, idx in enumerate(np.random.choice(x_test.shape[0], size=32, replace=False)): ax = fig.add_subplot(4, 8, i + 1, xticks=[], yticks=[]) ax.imshow(np.squeeze(x_test[idx]),cmap='gray') pred_idx = np.argmax(y_pred[idx]) true_idx = np.argmax(y_test[idx]) ax.set_title("{} ({})".format(target_labels[pred_idx], target_labels[true_idx]), color=("green" if pred_idx == true_idx else "red")) plot_confusion_matrix(y_test.argmax(axis=1), y_pred.argmax(axis=1), classes=target_labels, title='Confusion matrix, without normalization') # # Plot normalized confusion matrix plot_confusion_matrix(y_test.argmax(axis=1), y_pred.argmax(axis=1), classes=target_labels, normalize=True, title='Normalized confusion matrix') plt.show()
from sklearn.metrics import classification_report print(classification_report(y_test.argmax(axis=1), y_pred.argmax(axis=1))) #show classification errors ei=y_test.argmax(axis=1)!=y_pred.argmax(axis=1) im_err=x_test[ei] act=y_test[ei] pre=y_pred[ei] for er,a,p in zip(im_err,act,pre): plt.title(target_labels[np.argmax(p)]+"/"+target_labels[np.argmax(a)]) plt.imshow(er.reshape(SIZE,SIZE),cmap='gray') plt.show() ``` # 4. Model Validation I used KFold with K = 10 and 10 epochs to validate the model; for each split I recompute the class weights. To evaluate each fold, the confusion matrix is presented. The final result was ... <span style="color:blue">Accuracy mean: *99.698%* std: 0.381</span> ``` import numpy as np from sklearn.model_selection import KFold from keras import backend as K from sklearn.utils import class_weight no_of_classes = len(np.unique(labels)) batch_size = 32 kfold = KFold(n_splits=10, shuffle=True, random_state=7) cvscores=[] for train_index, test_index in kfold.split(X,y): print("TRAIN:", len(train_index), "TEST:", len(test_index)) x_train, x_test = X[train_index], X[test_index] y_train, y_test = y[train_index], y[test_index] x_train = x_train.astype('float32')/255 x_test = x_test.astype('float32')/255 reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.2,patience=3, verbose=1, min_lr=0.0001) y_numbers=y_train.argmax(axis=1) class_weights = class_weight.compute_class_weight('balanced', np.unique(y_numbers), y_numbers) class_weights = dict(enumerate(class_weights)) print(class_weights) model=get_model() history = model.fit(x_train,y_train, batch_size = 32, epochs=10, validation_split=0.2, class_weight=class_weights, callbacks = [reduce_lr], verbose=1, shuffle=True) # evaluate the model y_pred = model.predict(x_test) plot_confusion_matrix(y_test.argmax(axis=1), y_pred.argmax(axis=1), classes=target_labels, title='Confusion matrix, without normalization') # # Plot normalized confusion matrix
plot_confusion_matrix(y_test.argmax(axis=1), y_pred.argmax(axis=1), classes=target_labels, normalize=True, title='Normalized confusion matrix') plt.show() scores = model.evaluate(x_test, y_test, verbose=0) print("%s: %.2f%%" % (model.metrics_names[1], scores[1]*100)) cvscores.append(scores[1] * 100) print(np.mean(cvscores), np.std(cvscores)) ```
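The `'balanced'` mode of `compute_class_weight` used in each fold follows the formula `n_samples / (n_classes * bincount(y))`, so rarer classes receive proportionally larger weights. A hand-rolled NumPy check on toy labels (not the XNature data):

```python
import numpy as np

y = np.array([0, 0, 0, 0, 1, 1, 2, 2])   # toy imbalanced labels
classes, counts = np.unique(y, return_counts=True)

# 'balanced' heuristic: weight inversely proportional to class frequency
weights = len(y) / (len(classes) * counts)
class_weights = dict(zip(classes.tolist(), weights.tolist()))
print(class_weights)
```

With 4, 2, and 2 samples per class, the minority classes get twice the weight of the majority class.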
``` import gensim.downloader as api import gensim from gensim.models import Phrases from gensim.models import KeyedVectors, Word2Vec import numpy as np import nltk from nltk.corpus import stopwords import string from sklearn.metrics.pairwise import cosine_similarity import networkx as nx import ast import json filename = r'/home/miboj/NLP/document-summarizer/data/processed/articles.json' file = open(filename, encoding='ascii', errors='ignore') text = file.read() file.close() d = ast.literal_eval(text) with open(filename) as json_file: data = json.load(json_file) filename = r'/home/miboj/NLP/document-summarizer/data/processed/articles.json' file = open(filename, encoding='ascii', errors='ignore') text = file.read() file.close() json_content = ast.literal_eval(text) samples = json_content[0:10] print(samples[1]) tokens_list = [] for i in d: for sen in i['content']: tokens_list.append(sen) import time start_time = time.time() sentences = [] word_count = 0 stpwrds = stopwords.words('english') + list(string.punctuation) + ['—', '“', '”', "'", "’"] for e, i in enumerate(tokens_list): words = [] a = nltk.word_tokenize(i) for word in a: if word not in stpwrds: words.append(word) word_count += 1 sentences.append(words) print("--- %s seconds ---" % (time.time() - start_time)) print(len(sentences)) print(word_count) """ sg - Training algorithm: 1 for skip-gram, 0 for CBOW hs - If 1, hierarchical softmax will be used for model training. If 0, and negative is non-zero, negative sampling will be used. 
""" model = Word2Vec(sentences, size=100, window=5, workers=12, sg=1, hs=1, compute_loss=True) model.most_similar('weapon') model = model.wv import re def remove_empty_string(input_string): for e, i in enumerate(input_string): try: if i[-1] == ' ' and input_string[e+1][-1] == ' ': input_string[e] = i.rstrip() except IndexError: print('Out of index') joined_string = ''.join(input_string) for e, i in enumerate(joined_string): if i == ' ' and joined_string[e+1] == ' ': del i sentences = nltk.sent_tokenize(joined_string) return sentences raw_string = [" ROME — Defying reports that their planned partnership is ", "doomed to fail", ", France’s Naval Group and ", "Italy’s Fincantieri", " have announced a joint venture to build and export naval vessels. ", " The two ", "state-controlled shipyards", " said they were forming a 50-50 joint venture after months of talks to integrate their activities. The move comes as Europe’s fractured shipbuilding industry faces stiffer global competition. ", " The firms said in a statement that the deal would allow them to “jointly prepare winning offers for binational programs and export market,” as well as create joint supply chains, research and testing. ", " Naval Group and Fincantieri first announced talks on cooperation last year after the latter negotiated a controlling share in French shipyard STX. But the deal was reportedly losing momentum due to resistance from French industry and a political row between France and Italy over migrants. ", " The new deal falls short of the 10 percent share swap predicted by French Economy and Finance Minister Bruno Le Maire earlier this year, and far short of the total integration envisaged by Fincantieri CEO Giuseppe Bono. ", " The statement called the joint venture the “first steps” toward the creation of an alliance that would create “a more efficient and competitive European shipbuilding industry.”", " Naval Group CEO Hervé Guillou, speaking at the Euronaval trade expo in Paris on Oct. 
24, said the alliance is based on “two countries sharing a veritable naval ambition.”", " The joint venture is necessary because the “context of the global market has changed drastically,” he added, specifically mentioning new market entrants Russia, China, Singapore, Ukraine, India and Turkey.", "Sign up for the Early Bird Brief, the defense industry's most comprehensive news and information, straight to your inbox.", "By giving us your email, you are opting in to the Early Bird Brief.", " When asked about an initial product to be tackled under the alliance, Guillou acknowledged: “The answer is simple: there is nothing yet.”", " However, the firms said they are working toward a deal to build four logistics support ships for the French Navy, which will be based on an Italian design. ", "Competition flares up for the follow-on portion of a deal previously won by the French shipbuilder.", " The firms also plan to jointly bid next year on work for midlife upgrades for Horizon frigates, which were built by France and Italy and are in service with both navies. The work would include providing a common combat management system. ", " The statement was cautious about future acceleration toward integration. 
“A Government-to-Government Agreement would be needed to ensure the protection of sovereign assets, a fluid collaboration between the French and Italian teams and encourage further coherence of the National assistance programs, which provide a framework and support export sales,” the statement said.", " But the firms were optimistic the deal would be “a great opportunity for both groups and their eco-systems, by enhancing their ability to better serve the Italian and French navies, to capture new export contracts, to increase research funding and, ultimately, improve the competitiveness of both French and Italian naval sectors.”", " ", "Sebastian Sprenger", " in Paris contributed to this report."]

sentences = remove_empty_string(raw_string)

# The 'skipthoughts' module can be found at the root of the GitHub repository linked above
#import skipthoughts
# You would need to download pre-trained models first
#model = skipthoughts.load_model()
#encoder = skipthoughts.Encoder(model)
#encoded = encoder.encode(sentences)

a = model['ROME']
a.shape

def get_embedding(sentences):
    """Average the word vectors of each sentence, skipping stopwords and punctuation."""
    embeddings = []
    stpwrds = stopwords.words('english') + list(string.punctuation) + ['—', '“', '”', "'", "’"]
    for sentence in sentences:
        vectors = []
        for word in nltk.word_tokenize(sentence):
            # Only words that are in the model's vocabulary contribute a vector
            if word not in stpwrds and word in model.vocab:
                vectors.append(model[word])
        # Average over the words that actually had a vector (guard against empty sentences)
        avg = sum(vectors) / max(len(vectors), 1)
        embeddings.append(np.array(avg))
    return np.array(embeddings)

def get_sim_matrix(sentences, sentence_vectors):
    sim_mat = np.zeros([len(sentences), len(sentences)])
    for i in range(len(sentences)):
        for j in range(len(sentences)):
            if i != j:
                sim_mat[i][j] = cosine_similarity(sentence_vectors[i].reshape(1, -1),
                                                  sentence_vectors[j].reshape(1, -1))[0, 0]
    return sim_mat

def get_pagerank(sim_mat):
    nx_graph = nx.from_numpy_array(sim_mat)
    scores = nx.pagerank(nx_graph)
    return scores

def get_summary(num_sentences, scores, sentences):
    ranked_sentences = sorted(((scores[i], s) for i, s in enumerate(sentences)), reverse=True)
    summary = ''
    for i in range(num_sentences):
        summary += ranked_sentences[i][1]
        summary += " "
    return summary

a = """The statement called the joint venture the “first steps” toward the creation of an alliance that would create “a more efficient and competitive European shipbuilding industry.” Naval Group CEO Hervé Guillou, speaking at the Euronaval trade expo in Paris on Oct. 24, said the alliance is based on “two countries sharing a veritable naval ambition.” The joint venture is necessary because the “context of the global market has changed drastically,” he added, specifically mentioning new market entrants Russia, China, Singapore, Ukraine, India and Turkey.Sign up for the Early Bird Brief, the defense industry's most comprehensive news and information, straight to your inbox.By giving us your email, you are opting in to the Early Bird Brief. But the firms were optimistic the deal would be “a great opportunity for both groups and their eco-systems, by enhancing their ability to better serve the Italian and French navies, to capture new export contracts, to increase research funding and, ultimately, improve the competitiveness of both French and Italian naval sectors.” Sebastian Sprenger in Paris contributed to this report. When asked about an initial product to be tackled under the alliance, Guillou acknowledged: “The answer is simple: there is nothing yet.” However, the firms said they are working toward a deal to build four logistics support ships for the French Navy, which will be based on an Italian design. The firms also plan to jointly bid next year on work for midlife upgrades for Horizon frigates, which were built by France and Italy and are in service with both navies."""

summary_samples = []
summary_len = []
for sample in samples:
    sents = remove_empty_string(sample['content'])
    embeddings = get_embedding(sents)
    sim_mat = get_sim_matrix(sents, embeddings)
    scores = get_pagerank(sim_mat)
    sentence_length = len(sents)
    summary = get_summary(int(sentence_length * 0.3), scores, sents)
    summary_samples.append(summary)
    summary_len.append(int(sentence_length * 0.3))

# Re-order each summary's sentences to follow their order in the original article
sorted_summaries = []
for e, summary in enumerate(summary_samples):
    summary_sents = nltk.sent_tokenize(summary)
    original_sents = remove_empty_string(samples[e]['content'])
    res = [sent for x in original_sents for sent in summary_sents if sent == x]
    sorted_summaries.append(res)

for e, sents in enumerate(sorted_summaries):
    print(e, ": ")
    print("len original: ", len(remove_empty_string(samples[e]['content'])))
    print("Summary len: ", summary_len[e])
    print(" ".join(sents))

for e, sample in enumerate(samples):
    print(e)
    print(" ".join(remove_empty_string(sample['content'])))
```
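The summarizer above delegates PageRank to `networkx`, but the underlying computation is just power iteration on a normalized similarity matrix. A minimal NumPy-only sketch of that core (the function names `cosine_sim_matrix` and `pagerank` are ours, not from the notebook):

```
import numpy as np

def cosine_sim_matrix(vectors):
    # Pairwise cosine similarities with a zero diagonal (no self-votes)
    norms = np.linalg.norm(vectors, axis=1, keepdims=True)
    unit = vectors / np.clip(norms, 1e-12, None)
    sim = unit @ unit.T
    np.fill_diagonal(sim, 0.0)
    return sim

def pagerank(sim, d=0.85, tol=1e-8, max_iter=100):
    # Power-iteration PageRank on a weighted similarity graph
    n = sim.shape[0]
    row_sums = sim.sum(axis=1, keepdims=True)
    safe = np.where(row_sums > 0, row_sums, 1.0)
    # Each sentence distributes its score proportionally to its similarities;
    # dangling sentences distribute uniformly
    trans = np.where(row_sums > 0, sim / safe, 1.0 / n).T
    scores = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        new_scores = (1 - d) / n + d * trans @ scores
        if np.abs(new_scores - scores).sum() < tol:
            scores = new_scores
            break
        scores = new_scores
    return scores

# Toy sentence vectors: sentences 0 and 1 are similar, sentence 2 is an outlier
vecs = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])
scores = pagerank(cosine_sim_matrix(vecs))
ranked = np.argsort(scores)[::-1]  # most central sentence first
```

Sentence 1 ranks highest here because it is similar to both other sentences, which is exactly the "centrality" signal the extractive summarizer relies on.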
***
```
from __future__ import print_function

import keras
from keras.models import Sequential, Model, load_model
import keras.backend as K
import tensorflow as tf

import pandas as pd
import os
import pickle
import numpy as np
import scipy.sparse as sp
import scipy.io as spio
import isolearn.io as isoio
from scipy.stats import pearsonr

import matplotlib.pyplot as plt
import matplotlib.cm as cm
import matplotlib.colors as colors
import matplotlib as mpl
from matplotlib.text import TextPath
from matplotlib.patches import PathPatch, Rectangle
from matplotlib.font_manager import FontProperties
from matplotlib import gridspec
from matplotlib.ticker import FormatStrFormatter

from aparent.data.aparent_data_plasmid_legacy import load_data
from analyze_aparent_conv_layers_helpers import *

#Load random MPRA data
file_path = '../data/random_mpra_legacy/combined_library/processed_data_lifted/'
plasmid_gens = load_data(batch_size=32, valid_set_size=1000, test_set_size=40000, kept_libraries=[22], canonical_pas=True, no_dse_canonical_pas=True, file_path=file_path)

#Load legacy APARENT model (lifted from theano)
model_name = 'aparent_theano_legacy_30_31_34'#_pasaligned
save_dir = os.path.join(os.getcwd(), '../saved_models/legacy_models')
model_path = os.path.join(save_dir, model_name + '.h5')
aparent_model = load_model(model_path)

#Create a new model that outputs the conv layer activation maps together with the isoform proportion
conv_layer_iso_model = Model(
    inputs = aparent_model.inputs,
    outputs = [
        aparent_model.get_layer('iso_conv_layer_1').output,
        aparent_model.get_layer('iso_out_layer_1').output
    ]
)

#Predict from test data generator
iso_conv_1_out, iso_pred = conv_layer_iso_model.predict_generator(plasmid_gens['test'], workers=4, use_multiprocessing=True)

iso_conv_1_out = np.reshape(iso_conv_1_out, (iso_conv_1_out.shape[0], iso_conv_1_out.shape[1], iso_conv_1_out.shape[2]))
iso_pred = np.ravel(iso_pred[:, 1])
logodds_pred = np.log(iso_pred / (1. - iso_pred))

#Retrieve one-hot input sequences
onehot_seqs = np.concatenate([plasmid_gens['test'][i][0][0][:, 0, :, :] for i in range(len(plasmid_gens['test']))], axis=0)

#Mask for simple library (Alien1)
mask_seq = ('X' * 4) + ('N' * (45 + 6 + 45 + 6 + 45)) + ('X' * 27)
for j in range(len(mask_seq)):
    if mask_seq[j] == 'X':
        iso_conv_1_out[:, :, j] = 0

#Layer 1: Compute Max Activation Correlation maps and PWMs
filter_width = 8
n_samples = 5000

pwms = np.zeros((iso_conv_1_out.shape[1], filter_width, 4))
pwms_top = np.zeros((iso_conv_1_out.shape[1], filter_width, 4))
for k in range(iso_conv_1_out.shape[1]):
    for i in range(iso_conv_1_out.shape[0]):
        max_j = np.argmax(iso_conv_1_out[i, k, :])
        if iso_conv_1_out[i, k, max_j] > 0:
            pwms[k, :, :] += onehot_seqs[i, max_j: max_j+filter_width, :]
    sort_index = np.argsort(np.max(iso_conv_1_out[:, k, :], axis=-1))[::-1]
    for i in range(n_samples):
        max_j = np.argmax(iso_conv_1_out[sort_index[i], k, :])
        if iso_conv_1_out[sort_index[i], k, max_j] > 0:
            pwms_top[k, :, :] += onehot_seqs[sort_index[i], max_j: max_j+filter_width, :]
    pwms[k, :, :] /= np.expand_dims(np.sum(pwms[k, :, :], axis=-1), axis=-1)
    pwms_top[k, :, :] /= np.expand_dims(np.sum(pwms_top[k, :, :], axis=-1), axis=-1)

r_vals = np.zeros((iso_conv_1_out.shape[1], iso_conv_1_out.shape[2]))
for k in range(iso_conv_1_out.shape[1]):
    for j in range(iso_conv_1_out.shape[2]):
        if np.any(iso_conv_1_out[:, k, j] > 0.):
            r_val, _ = pearsonr(iso_conv_1_out[:, k, j], logodds_pred)
            r_vals[k, j] = r_val if not np.isnan(r_val) else 0

#Plot Max Activation PWMs and Correlation maps
n_filters_per_row = 5
n_rows = int(pwms.shape[0] / n_filters_per_row)
k = 0
for row_i in range(n_rows):
    f, ax = plt.subplots(2, n_filters_per_row, figsize=(2.5 * n_filters_per_row, 2), gridspec_kw = {'height_ratios':[3, 1]})
    for kk in range(n_filters_per_row):
        plot_pwm_iso_logo(pwms_top, r_vals, k, ax[0, kk], ax[1, kk], seq_start=24, seq_end=95, cse_start=49)
        k += 1
    plt.tight_layout()
    plt.show()

#Create a new model that outputs the conv layer activation maps together with the isoform proportion
conv_layer_iso_model = Model(
    inputs = aparent_model.inputs,
    outputs = [
        aparent_model.get_layer('iso_conv_layer_2').output,
        aparent_model.get_layer('iso_out_layer_1').output
    ]
)

#Predict from test data generator
iso_conv_2_out, iso_pred = conv_layer_iso_model.predict_generator(plasmid_gens['test'], workers=4, use_multiprocessing=True)

iso_conv_2_out = np.reshape(iso_conv_2_out, (iso_conv_2_out.shape[0], iso_conv_2_out.shape[1], iso_conv_2_out.shape[2]))
iso_pred = np.ravel(iso_pred[:, 1])
logodds_pred = np.log(iso_pred / (1. - iso_pred))

#Retrieve one-hot input sequences
onehot_seqs = np.concatenate([plasmid_gens['test'][i][0][0][:, 0, :, :] for i in range(len(plasmid_gens['test']))], axis=0)

#Layer 2: Compute Max Activation Correlation maps and PWMs
filter_width = 19
n_samples = 200

pwms = np.zeros((iso_conv_2_out.shape[1], filter_width, 4))
pwms_top = np.zeros((iso_conv_2_out.shape[1], filter_width, 4))
for k in range(iso_conv_2_out.shape[1]):
    for i in range(iso_conv_2_out.shape[0]):
        max_j = np.argmax(iso_conv_2_out[i, k, :])
        if iso_conv_2_out[i, k, max_j] > 0:
            pwms[k, :, :] += onehot_seqs[i, max_j * 2: max_j * 2 + filter_width, :]
    sort_index = np.argsort(np.max(iso_conv_2_out[:, k, :], axis=-1))[::-1]
    for i in range(n_samples):
        max_j = np.argmax(iso_conv_2_out[sort_index[i], k, :])
        if iso_conv_2_out[sort_index[i], k, max_j] > 0:
            pwms_top[k, :, :] += onehot_seqs[sort_index[i], max_j * 2: max_j * 2 + filter_width, :]
    pwms[k, :, :] /= np.expand_dims(np.sum(pwms[k, :, :], axis=-1), axis=-1)
    pwms_top[k, :, :] /= np.expand_dims(np.sum(pwms_top[k, :, :], axis=-1), axis=-1)

r_vals = np.zeros((iso_conv_2_out.shape[1], iso_conv_2_out.shape[2]))
for k in range(iso_conv_2_out.shape[1]):
    for j in range(iso_conv_2_out.shape[2]):
        if np.any(iso_conv_2_out[:, k, j] > 0.):
            r_val, _ = pearsonr(iso_conv_2_out[:, k, j], logodds_pred)
            r_vals[k, j] = r_val if not np.isnan(r_val) else 0

#Plot Max Activation PWMs and Correlation maps
n_filters_per_row = 5
n_rows = int(pwms.shape[0] / n_filters_per_row)
k = 0
for row_i in range(n_rows):
    f, ax = plt.subplots(2, n_filters_per_row, figsize=(3 * n_filters_per_row, 2), gridspec_kw = {'height_ratios':[3, 1.5]})
    for kk in range(n_filters_per_row):
        plot_pwm_iso_logo(pwms_top, r_vals, k, ax[0, kk], ax[1, kk], seq_start=12, seq_end=44)
        k += 1
    plt.tight_layout()
    plt.show()
```
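Both layer analyses above follow the same recipe: around each sequence's maximally activating position, sum the one-hot windows and normalize each position into a distribution over the four bases. A self-contained toy sketch of that recipe, with random data standing in for the real activation maps:

```
import numpy as np

rng = np.random.default_rng(0)

n_seqs, seq_len, filter_width = 100, 20, 4

# Toy one-hot sequences: one of A/C/G/T at each position
bases = rng.integers(0, 4, size=(n_seqs, seq_len))
onehot = np.eye(4)[bases]  # shape: (n_seqs, seq_len, 4)

# Toy activation map for a single filter; in the notebook this
# comes from the conv layer's output instead
activations = rng.random(size=(n_seqs, seq_len - filter_width + 1))

# Sum the one-hot window at each sequence's maximally activating position
pwm = np.zeros((filter_width, 4))
for i in range(n_seqs):
    j = np.argmax(activations[i])
    pwm += onehot[i, j:j + filter_width, :]

# Normalize each position into a probability distribution over A/C/G/T
pwm /= pwm.sum(axis=1, keepdims=True)
```

Each row of the resulting matrix sums to 1, which is what lets the notebook render it as a sequence logo.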
# Applied Machine Learning ## Table of contents * [1. Notebook General Info](#1.-Notebook-General-Info) * [2. Python Basics](#2.-Python-Basics) * [2.1 Basic Types](#2.1-Basic-Types) * [2.2 Lists and Tuples](#2.2-Lists-and-Tuples) * [2.3 Dictionaries](#2.3-Dictionaries) * [2.4 Conditions](#2.4-Conditions) * [2.5 Loops](#2.5-Loops) * [2.6 Functions](#2.6-Functions) * [3. NumPy Basics](#3.-NumPy-Basics) * [3.1 Arrays](#3.1-Arrays) * [3.2 Functions and Operations](#3.2-Functions-and-Operations) * [3.3 Miscellaneous](#3.3-Miscellaneous) * [4. Visualization with Matplotlib](#4.-Visualization-with-Matplotlib) * [5. Nearest Neighbor Classification](#5.-Nearest-Neighbor-Classification) * [5.1 Digits Dataset](#5.1-Digits-Dataset) * [5.2 Distances](#5.2-Distances) * [5.3 Performance Experiments](#5.3-Performance-Experiments) * [5.4 Classification](#5.4-Classification) * [6. Linear Algebra Basics](#6.-Linear-Algebra-Basics) ## 1. Notebook General Info ### Structure - Notebooks consist of **cells** - During this course we will use **Code** and **Markdown** cells - Code in the cells is executed by pressing **Shift + Enter**. It also renders Markdown - To edit a cell, double-click on it. ### Markdown * Markdown is a lightweight markup language. * You can emphasize the words: *word*, ~~word~~, **word** * You can make lists - item 1 - item 2 - subitem 2.1 - subitem 2.2 * And tables, as well | Language |Filename extension| First appeared | |---------:|:----------------:|:--------------:| |C | `.h`, `.c` | 1972 | |C++ | `.h`, `.cpp` | 1983 | |Swift | `.swift` | 2014 | |Python | `.py` | 1991 | * Markdown allows you to add a code listing. ``` def sum(a, b): return a + b ``` * You can even add math expressions. 
Both inline $e^{i \phi} = \cos(\phi) + i \sin(\phi)$ and centered: $$ \int\limits_{-\infty}^{\infty} e^{-x^2}dx = \sqrt{\pi} $$ * You can also add images, even from remote resources: ![](http://technobotss.mdek12.org/wp-content/uploads/2016/09/Markdown-mark.png) * Markdown allows one to add hyperlinks. There is a good [Markdown Cheatsheet](https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet). ### Code * We will use Python. * It is an interpreted language. * When you execute a cell by pressing **Shift + Enter**, the code is interpreted line-by-line. ## 2. Python Basics Useful links: * Codecademy https://www.codecademy.com/en/tracks/python (recommended if you are new to Python!) * The Hitchhiker’s Guide to Python http://docs.python-guide.org/en/latest/ * Video tutorials by *sentdex*: [Python 3 Basic Tutorial Series](https://www.youtube.com/watch?v=oVp1vrfL_w4&list=PLQVvvaa0QuDe8XSftW-RAxdo6OmaeL85M), [Intermediate Python Programming](https://www.youtube.com/watch?v=YSe9Tu_iNQQ&list=PLQVvvaa0QuDfju7ADVp5W1GF9jVhjbX-_) Some interesting talks from conferences: * David Beazley: [Built in Super Heroes](https://youtu.be/lyDLAutA88s), [Modules and Packages](https://youtu.be/0oTh1CXRaQ0) * Raymond Hettinger: [Transforming Code into Beautiful, Idiomatic Python](https://youtu.be/OSGv2VnC0go), [Beyond PEP 8](https://youtu.be/wf-BqAjZb8M) ### 2.1 Basic Types * Python is dynamically typed: you do not specify the type of a variable. 
Just `my_var = 1` * Python is strongly typed: you cannot add an integer to a string, or None to an integer

```
# For now, this is just magic
from __future__ import print_function, division

# Integer
a = 2
print(a)

# Float
a += 4.0
print(a)

# String
b = "Hello World"
print(b)
print(b + ' ' + str(42))

# Boolean
first_bool_here = False
print(first_bool_here)

# This is how formatting works
print('My first program is:"%s"' % b)        # old style
print('My first program is:"{}"'.format(b))  # new style
print(f'My first program is:"{b}"')          # even newer style (f-strings, Python 3.6+)

num = 42
print(42 / 5)   # a regular division
print(42 // 5)  # an integer division
print(42 % 5)   # a remainder
```

### 2.2 Lists and Tuples

* `list` and `tuple` are the array-like types in Python
* `list` is mutable, `tuple` is immutable
* `list` is written as `[...]`, `tuple` as `(...)`
* Both can store different types at the same time
* The index of the first element is `0`, i.e. they are 'zero-indexed'

```
# Lists
empty_list = []                # creates an empty list
list1 = [1, 2, 3]              # creates a list with elements
list2 = ['1st', '2nd', '3rd']
print(list1)                   # prints the list
print(list2)
print(len(list2))              # prints the length of the list

list2.append(2)                # appends the item at the end
print(list2)                   # prints the appended list
list2.insert(2, 0)             # inserts 0 at index 2 (zero-indexed)
print(list2)
list2[1] = 'new'               # changes the second element of the list (lists are mutable)
print(list2)

# You can create a list of lists:
list_of_lists = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
print(list_of_lists[1][2])     # second list, third element

# Tuples
# The empty tuple is just (); since tuples are immutable, it stays empty forever
tuple1 = (1,)  # the comma is necessary, otherwise it is just a number in parentheses
tuple2 = ('orange',)
tuple3 = ('fly', 32, None)
super_tuple = tuple1 + tuple2 + tuple3
print(super_tuple)

super_tuple[1] = 'new'  # trying to change an element of a tuple raises an error (tuples are immutable)
```

* Above we showed how to create and print lists.
* How to find the length of a list and how to append or insert items into an already created list.
* There are several other operations which we can perform with lists:
    * removing elements from the list
    * joining two lists
    * sorting
    * etc

There is an interesting [cheat sheet](http://www.pythonforbeginners.com/lists/python-lists-cheat-sheet/) you may find useful.

Another very useful operation on lists is **Slicing**, a distinctive feature of Python.
* Slicing allows you to access sublists
* Slicing a Python list returns a new list (a shallow copy); slicing a NumPy array, by contrast, returns a view
* Slicing makes Python very convenient for matrix manipulation

```
# This is the worst way of creating a list of consecutive integers,
# but we use it here just for demonstration
numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]

print(numbers[1:])        # you can slice it from a given index
print(numbers[:-1])       # you can slice it up to a given index
print(numbers[1:-2])      # you can combine them
print(numbers[::2])       # you can take every second element
print(numbers[2:-2][::2]) # you can chain slicing
```

### 2.3 Dictionaries

* A dictionary is a **Key-Value** storage
* Dictionaries are mutable
* Dictionaries are useful for linking items
* Since Python 3.7, dictionaries preserve insertion order; earlier versions make no ordering guarantee

```
emptydict = {}  # creates an empty dict
user = {'id': '0x123456', 'age': 28, 'authorized': True}
print(user)

days = {
    1: "Mon", 2: "Tues", 3: "Wed", 4: "Thu",
    5: "Fri", 6: "Sat", 7: "Sun"
}  # a dict with items
print(days.keys())  # prints the keys
print(days)         # prints the whole dict

age = user['age']   # accesses the element of the dictionary with key 'age'
print(age)

my_dict = { 1: '1', '1': 1 }  # keys are not cast: '1' and 1 are not the same key
print(my_dict[1] == my_dict['1'])

my_dict['one'] = False
my_dict[123] = 321
print(my_dict)
```

For further study of dictionary manipulation in Python, refer to this [tutorial](http://www.pythonforbeginners.com/dictionary/dictionary-manipulation-in-python). 
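Whether a dictionary keeps its order depends on the Python version: since CPython 3.7, insertion order is guaranteed by the language. A quick check:

```
d = {}
d['b'] = 2
d['a'] = 1
d['c'] = 3

# On Python 3.7+ the keys come back in insertion order, not sorted order
keys_in_order = list(d.keys())
print(keys_in_order)  # ['b', 'a', 'c']
```

If you need the keys sorted rather than in insertion order, call `sorted(d)` explicitly instead of relying on the dictionary itself.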
### 2.4 Conditions ``` is_visible = False if is_visible: print("I am visible") else: print("You can not see me") ``` As this is the first appearance of the nested structure, we must clarify the following: * In Python all nested code structures are defined by indentation. * Standard indentation is 4 spaces (or 1 tab) ``` animals = ['cat', 'dog', 'monkey', 'elephant'] if 'cat' in animals: print('Cat is here') if len(animals) > 2 and 'fish' not in animals: print('There are many animals but fish is not here') if 'whale' in animals or 'dog' in animals: print('At least one of my favorite animals is in the list') code = 345 if code == 200: print('success') elif code == 404: print('page not found') elif 300 <= code < 400: print('redirected') else: print('unknown error') ``` ### 2.5 Loops * There are 2 types of loops in Python: `while` and `for` * `while` loop checks the condition before executing the loop body * `for` iterates over the sequence of elements ``` # while i = 0 while i < 3: print(i) i += 1 # for loop for animal in animals: print(animal) # In order to make a c-like loop, # you have to create a list of consecutive numbers print('\nBad way:') numbers = [0, 1, 2, 3, 4] for number in numbers: print(number) # As we already stated, it is not the best way of creating such lists # Here is the best way: print('\nGood way:') for number in range(5): print(number) print('\nAdvanced example:') for number in reversed(range(10, 22, 2)): print(number) ``` ### 2.6 Functions * functions are declared with `def` statement * function is an object, like float, string, etc. ``` def function_name(): print ('Hello AML students') function_name() # Create a function that multiplies a number by 5 if it is above a given threshold, # otherwise square the input. def manipulate_number(number, threshold): # Check whether the number is higher than the threshold. 
    if number > threshold:
        return number * 5
    else:
        return number ** 2

print(manipulate_number(4, 6))
print(manipulate_number(8, 7))

def linear(x, k, b=0):
    # b=0 is used if b is not specified in the function call
    return k * x + b

print(linear(1, 3.0))           # we don't pass any argument keys
print(linear(k=1, x=3.0))       # we pass the keys, e.g. to reorder the arguments
print(linear(1, k=3.0, b=3.0))  # we pass and name b=3.0 because it is not the default value

def are_close(a, b):
    return (a - b) ** 2 < 1e-6

# Functions can be passed as arguments
def evaluate(func, arg_1, arg_2):
    return func(arg_1, arg_2)

print(evaluate(are_close, 0.333, 1.0 / 3))
```

* If you are still very new to Python:
    * Implement some simple functions and print the results
    * Please ask questions if pieces of code do not do what you want them to do
* You can always get information about a function just by calling **help**:

```Python
help(any_function)
```

* In Jupyter Notebook, you can also get this info by pressing **Shift + Tab** inside a function call

```
# Create your own functions here, if you want
# Create a new cell below by pressing Esc, then B
```

## 3. NumPy Basics

* A very nice part of Python is that there are a lot of 3rd party libraries.
* The most popular library for matrix manipulations / linear algebra is [**NumPy**](http://www.numpy.org/).
* The official website says:
> NumPy is the fundamental package for scientific computing with Python.
* NumPy core functions are written in **C/C++** and **Fortran**.
* NumPy functions work faster than pure Python functions (or at least at the same speed).

```
# The first import
import numpy as np
```

* Easy enough!
* There are several ways of importing libraries:
* `import library` - import the full library.
  You can access its functions: `library.utils.somefunc(x)`
* `import library as lib` - the same as above, but more convenient: `lib.utils.other_func(x, y)`
* `from library.utils import somefunc` - only one function is imported: `somefunc(x)`
* `import numpy as np` is the standard convention for importing NumPy.

### 3.1 Arrays

* The central feature of **NumPy** is the **array**.
* An array is close to the list data type, but it is extended with several useful methods.

```
# you can create an array of zeros
a = np.zeros(5)
print(a)

# or an array of consecutive numbers
b = np.arange(7)
print('0...6:')
print(b)

# or even an array from a list
c = np.array([1, 3, 5, 7, 12, 19])
print('An element of c:')
print(c[4])
print('Length:', len(c))
```

* You can also create n-dimensional arrays:
    * an array of arrays
    * an array of arrays of arrays
    * ...
* They have additional properties which are not important for now, but will be exploited later during this course
* You can transform an n-dimensional array into a flat array and vice versa just by reshaping

```
# A 2-dimensional array
a = np.array([[1, 2], [3, 4]])
print(a)

# you can change its shape to make it a 1-dimensional array
print(a.ravel())
print(a.reshape(4))

# and vice versa
b = a.ravel()
print(b.reshape((2, 2)))

# you can access a row or a column
print('2nd column:', a[:, 1])
print('1st row:', a[0, :])
```

### 3.2 Functions and Operations

* NumPy supports basic operations between an array and a number

```
newarray = np.zeros(8)

# instead of adding a number in a loop,
# you can do it in one line
newarray += 8
print(newarray)

# the same for other basic operations
newarray *= 3
print(newarray)

# and even with slicing
newarray[::2] /= 8
print(newarray)
```

* NumPy also supports operations on several arrays of the same length
* These operations are elementwise

```
arr_1 = np.array([1, 9, 3, 4])
arr_2 = np.arange(4)
print('Arrays:')
print(arr_1)
print(arr_2)

print('Addition:')
print(arr_1 + arr_2)
print(np.add(arr_1, arr_2))  # the same

print('Multiplication:')
print(arr_1 * arr_2)
print(np.multiply(arr_1, arr_2))  # the same

print('Division:')
print(arr_2 / arr_1)
print(np.divide(1.0 * arr_2, arr_1))  # the same
```

* NumPy provides a rich variety of mathematical functions
* Atomic functions ($\sin(x)$, $\cos(x)$, $\ln(x)$, $x^p$, $e^x, \dots$) are elementwise
* There are several functions which allow one to compute statistics:
    * mean of the elements of an array
    * standard deviation
    * ...

```
x = np.linspace(0, 1, 6)
print('x:')
print(x)

print('Mean x:')
print(np.mean(x))
print('Std x:')
print(x.std())

print('x^2:')
print(x*x)             # as an elementwise product
print(np.square(x))    # with a special function
print(np.power(x, 2))  # as a power function with power=2
print(x**2)            # as you would do it with a number

print('sin(x):')
print(np.sin(x))
print('Mean e^x:')
print(np.mean(np.exp(x)))
```

### 3.3 Miscellaneous

```
# Indexing
x = np.linspace(0, np.pi, 10)
y = np.cos(x) - np.sin(2 * x)
print('x =', x, '\n')
print('y =', y, '\n')

# we can create a boolean mask of elements and pass it as indices
mask = y > 0
print('mask =', mask, '\n')
print('positive y =', y[mask], '\n')

# NumPy has a `random` package
x = np.random.random()
print(x)

# uniform [-2, 8)
rand_arr = np.random.uniform(-2, 8, size=3)
print('Array of random variables')
print(rand_arr)

# here is the normal distribution
print('N(x|m=0, s=0.1):')
print(np.random.normal(scale=0.1, size=4))

# fast search: indices of the elements matching a condition
x = np.array([1, 2, 5, -1])
print(np.where(x < 0))

# retrieve the index of the max element
print(np.argmax(x))

# sort the array
print(np.sort(x))
```

* There is a lot you can do with NumPy.
* For further study and practice of NumPy, we refer you to this [tutorial](http://scipy.github.io/old-wiki/pages/Tentative_NumPy_Tutorial)
* Here is a good [list](https://github.com/rougier/numpy-100) of NumPy tasks.
* You can also check other packages from the **[SciPy](https://www.scipy.org)** ecosystem. 
* You may also be interested in [**scikit-learn**](http://scikit-learn.org/stable/) - tools for machine learning in Python ## 4. Visualization with Matplotlib * We use **Matplotlib** for plots and data visualization * There is a [tutorial](http://matplotlib.org/users/pyplot_tutorial.html). * Here are some examples from Matplotlib gallery <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0-beta.2/css/bootstrap.min.css" integrity="sha384-PsH8R72JQ3SOdhVi3uxftmaW6Vc51MKb0q5P2rRUpPvrszuE4W1povHYgTpBfshb" crossorigin="anonymous"> <div class="container" style="max-width:100%"> <div class="row"> <div class="col-sm-6" style="display: flex; height: 300px;"> <img src="http://matplotlib.org/_images/fill_demo1.png" style="max-width: 100%; max-height: 100%; margin: auto;"> </div> <div class="col-sm-6" style="display: flex; height: 300px;"> <img src="http://matplotlib.org/_images/errorbar_limits.png" style="max-width: 100%; max-height: 100%; margin: auto;"> </div> </div> <div class="row"> <div class="col-sm-6" style="display: flex; height: 300px;"> <img src="http://matplotlib.org/_images/subplot_demo.png" style="max-width: 100%; max-height: 100%; margin: auto;"> </div> <div class="col-sm-6" style="display: flex; height: 300px;"> <img src="http://matplotlib.org/_images/histogram_demo_features2.png" style="max-width: 100%; max-height: 100%; margin: auto;"> </div> </div> </div> ``` # We import `pyplot` from `matplotlib` as `plt` import matplotlib.pyplot as plt # We add %matplotlib flag to specify how the figures should be shown # inline - static pictures in notebook # notebook - interactive graphics %matplotlib inline # let's plot a simple example x = np.arange(100) y = x ** 2 - x plt.plot(y) plt.show() # that's it # A more complex example n_samples = 100 x = np.linspace(0.0, 1.0, n_samples) y = x**3 / (np.exp(10 * x + 1e-8) - 1) y /= y.max() y_samples = np.abs(y + 0.1 * y * np.random.normal(size=n_samples)) plt.figure(figsize=(8, 5)) plt.plot(x, y_samples, 
         'o', c='orange', label='experiment')
plt.plot(x, y, lw=3, label='theory')
plt.grid()
plt.title("Planck's law", fontsize=18)
plt.legend(loc='best', fontsize=14)
plt.ylabel('Relative spectral radiance', fontsize=14)
plt.xlabel('Relative frequency', fontsize=14)
plt.show()
```

## 5. Nearest Neighbor Classification

* We have a dataset of objects of several classes
* We expect two objects from the same class to be close
* Two objects from different classes are supposed to be distant
* The query object is supposed to have the same class as its nearest neighbor

### 5.1 Digits Dataset

* It contains handwritten digits 0 through 9
* Each object is an $8 \times 8$ grayscale image
* We consider each pixel of the image as a separate feature of the object

```
import sklearn.datasets

# We load the dataset
digits = sklearn.datasets.load_digits()

# Here we load up the images and labels and print some examples
images_and_labels = list(zip(digits.images, digits.target))
for index, (image, label) in enumerate(images_and_labels[:10]):
    plt.subplot(2, 5, index + 1)
    plt.axis('off')
    plt.imshow(image, cmap=plt.cm.gray_r, interpolation='nearest')
    plt.title('Training: {}'.format(label), y=1.1)
plt.show()

images_1 = digits.images[digits.target == 1]
images_5 = digits.images[digits.target == 5]
for i in range(5):
    plt.subplot(2, 5, i + 1)
    plt.axis('off')
    plt.imshow(images_1[i], cmap=plt.cm.gray_r, interpolation='nearest')
    plt.subplot(2, 5, i + 6)
    plt.axis('off')
    plt.imshow(images_5[i], cmap=plt.cm.gray_r, interpolation='nearest')
plt.show()
```

* Ones look similar. Fives also look similar
* Fives and Ones look different

### 5.2 Distances

* In order to talk about close and distant objects, we have to define a **distance (metric)**
* A distance is a function $F(\cdot, \cdot)$ of 2 elements which returns a number
* Here are the properties of a distance:
    1. $F(x, y) \geq 0$
    2. $F(x, y) = 0 \Leftrightarrow x = y$
    3. $F(x, y) = F(y, x)$
    4. $F(x, z) \leq F(x, y) + F(y, z)$
* Let's look at the **Euclidean distance**, as it is the most intuitive one:

$$ F(x, y) = \sqrt{\sum_{i=1}^{d} (x_{i} - y_{i})^{2}}. $$

Now it is time to implement it.

```
# First of all, let's implement it in the most trivial way,
# without using numpy arrays, just to understand what is going on
def euclidean_distance_simple(x, y):
    # First, make sure x and y are of equal length.
    assert(len(x) == len(y))
    d = 0.0
    for i in range(len(x)):
        d += (x[i] - y[i])**2
    return np.sqrt(d)

x1 = np.array([0., 0.])
y1 = np.array([5., 2.])
x2 = np.array([0., 1., 3.])
y2 = np.array([9., 1., 4.5])
```

Now you can test your functions. The expected values are **5.385...** and **9.124...**

```
print(euclidean_distance_simple(x1, y1))
print(euclidean_distance_simple(x2, y2))

# Let's implement it in a more effective way:
# use numpy arrays and all the benefits of numpy
def euclidean_distance_numpy(x, y):
    # x, y - numpy arrays
    assert(len(x) == len(y))
    return np.sqrt(np.sum((x - y)**2))

print(euclidean_distance_numpy(x1, y1))
print(euclidean_distance_numpy(x2, y2))
```

### 5.3 Performance Experiments

* We implemented the Euclidean distance in 2 ways.
  Now we are able to compare their performance
* We measure the time consumption of the functions
* We test their performance on random vectors of various sizes

```
import time

sizes = range(1, 1000, 10)
res_simple = []
res_numpy = []
for size in sizes:
    x = np.random.random(size=size)
    y = np.random.random(size=size)

    time_0 = time.time()
    _ = euclidean_distance_simple(x, y)
    res_simple.append(time.time() - time_0)

    time_0 = time.time()
    _ = euclidean_distance_numpy(x, y)
    res_numpy.append(time.time() - time_0)

res_simple = np.array(res_simple)
res_numpy = np.array(res_numpy)

plt.figure(figsize=(9, 5))
plt.plot(sizes, 10**6 * res_simple, lw=3, label='simple')
plt.plot(sizes, 10**6 * res_numpy, lw=3, label='numpy')
plt.legend(loc='best', fontsize=14)
plt.xlabel('size', fontsize=15)
plt.ylabel('time, μs', fontsize=15)
plt.grid()
plt.show()
```

* Pure Python works slower than NumPy
* Always use NumPy when it is possible

### 5.4 Classification

* We should divide our dataset into a training set and a test set.
* In order to predict the class of an object, we iterate over the objects in the training set
* The predicted class is the class of the closest object

```
n_objects = digits.images.shape[0]
train_test_split = 0.7
train_size = int(n_objects * train_test_split)

indices = np.arange(n_objects)
np.random.shuffle(indices)
train_indices, test_indices = indices[:train_size], indices[train_size:]

train_images, train_targets = digits.images[train_indices], digits.target[train_indices]
test_images, test_targets = digits.images[test_indices], digits.target[test_indices]

train_images = train_images.reshape((-1, 64))
test_images = test_images.reshape((-1, 64))

def predict_object_class(vec, x_train, y_train):
    # vec.shape: [64]
    # x_train.shape: [N_objects, 64]
    # y_train.shape: [N_objects]
    best = float('inf')
    best_i = -1
    for i, sample in enumerate(x_train):
        candidate = euclidean_distance_numpy(sample, vec)
        if candidate < best:
            best = candidate
            best_i = i
    return y_train[best_i]

def predict(x, x_train, y_train):
    # not the fastest way, but easy to understand
    classes = []
    for vec in x:
        predicted_cls = predict_object_class(vec, x_train, y_train)
        classes.append(predicted_cls)
    return np.array(classes)

predicted_targets = predict(test_images, train_images, train_targets)

accuracy = np.mean(predicted_targets == test_targets)
print("Accuracy {:.1f}%".format(accuracy * 100))

correct = predicted_targets == test_targets
incorrect = ~correct

f, axes = plt.subplots(2, 5, figsize=(8, 3))
for ax, image, y_pred, y_test in zip(axes[0], test_images[correct],
                                     predicted_targets[correct], test_targets[correct]):
    ax.imshow(image.reshape((8, 8)), cmap=plt.cm.gray_r, interpolation='nearest')
    ax.set_title('Pred: {}, True: {}'.format(y_pred, y_test))
    ax.set_axis_off()
for ax, image, y_pred, y_test in zip(axes[1], test_images[incorrect],
                                     predicted_targets[incorrect], test_targets[incorrect]):
    ax.imshow(image.reshape((8, 8)), cmap=plt.cm.gray_r, interpolation='nearest')
    ax.set_title('Pred: {}, True: {}'.format(y_pred, y_test))
    ax.set_axis_off()
plt.tight_layout()
plt.show()
```

* You can try other <a href="https://en.wikipedia.org/wiki/Metric_(mathematics)#Examples">metrics</a>
* You can experiment with other datasets:
  * **MNIST**:
    1. [Download](http://yann.lecun.com/exdb/mnist/)
    2. `from dataset_utils import load_mnist`
    3. `train = list(load_mnist('training', path='<PATH TO A FOLDER>'))`
  * **CIFAR-10** & **CIFAR-100**:
    1. [Download](https://www.cs.toronto.edu/~kriz/cifar.html)
    2. `from dataset_utils import load_cifar`
    3. `data = load_cifar('<PATH TO A FILE>')`

## 6. Linear Algebra Basics

* This introduction was devoted to Python and NumPy basics.
* We used 1-dimensional NumPy arrays for data manipulation.
* In the coming assignments, n-dimensional (2-, 3- and even 4-dimensional) arrays will be used.
* To make that easier, here are several useful links:
  * [Linear Algebra Review and Reference](http://cs229.stanford.edu/section/cs229-linalg.pdf): chapters **1.1-3.2, 3.5** cover almost all the linear algebra needed for deep learning
  * [The Matrix Cookbook](https://www.math.uwaterloo.ca/~hwolkowi/matrixcookbook.pdf) can be used as a cheat sheet
  * [Deep Learning](http://www.deeplearningbook.org) is the definitive book; an explanation of virtually any aspect of deep learning can be found there.
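As a small warm-up for the n-dimensional array manipulation mentioned above, the per-object Python loops in the nearest-neighbour classifier from section 5.4 can be collapsed into one broadcasted distance computation. This is a sketch on synthetic 2-D data; `predict_vectorized` is a new illustrative helper, not part of the original notebook:

```python
import numpy as np

def predict_vectorized(x_test, x_train, y_train):
    """Nearest-neighbour prediction with a single broadcasted
    distance computation instead of per-object Python loops."""
    # diffs.shape: [M_test, N_train, D] via broadcasting
    diffs = x_test[:, np.newaxis, :] - x_train[np.newaxis, :, :]
    # dists.shape: [M_test, N_train] -- Euclidean distance to every train sample
    dists = np.sqrt((diffs ** 2).sum(axis=-1))
    # index of the closest training object for every test object
    nearest = dists.argmin(axis=1)
    return y_train[nearest]

# Tiny sanity check on synthetic data
x_train = np.array([[0.0, 0.0], [10.0, 10.0]])
y_train = np.array([0, 1])
x_test = np.array([[1.0, 1.0], [9.0, 9.0]])
print(predict_vectorized(x_test, x_train, y_train))  # -> [0 1]
```

The same function works unchanged for the [N, 64] digit arrays, at the cost of materializing an [M, N, 64] intermediate.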
***

# Introduction to Gradient Descent

The Idea Behind Gradient Descent

***

<img src='./img/stats/gradient_descent.gif' align = "middle" width = '400px'>

<img align="left" style="padding-right:10px;" width ="400px" src="./img/stats/gradient2.png">

**How do you find the fastest way down the mountain?**

- Suppose the fog on the mountain is so thick that the path down cannot be seen;
- Assume the descent itself is safe!
- You can only use information from your immediate surroundings to find a way down.
- Taking your current position as the reference, find the steepest direction at that position and step downhill along it.

<img style="padding-right:10px;" width ="500px" src="./img/stats/gradient.png" align = 'right'>

**Gradient is the vector of partial derivatives**

One approach to maximizing a function is to

- pick a random starting point,
- compute the gradient,
- take a small step in the direction of the gradient, and
- repeat with a new starting point.

<img src='./img/stats/gd.webp' width = '700' align = 'middle'>

Let's represent the parameters as $\Theta$, the learning rate as $\alpha$, and the gradient as $\bigtriangledown J(\Theta)$.

Finding the best model is an optimization problem: it

- "minimizes the error of the model", or
- "maximizes the likelihood of the data."

We'll frequently need to maximize (or minimize) functions, i.e.

- to find the input vector v that produces the largest (or smallest) possible value.

# Mathematics behind Gradient Descent

A simple mathematical intuition behind one of the most commonly used optimisation algorithms in Machine Learning.
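To make the update rule $\Theta_1 = \Theta_0 - \alpha \bigtriangledown J(\Theta)$ concrete before the derivation, here is a minimal sketch (not from the original text) minimizing the one-parameter cost $J(\theta) = \theta^2$, whose gradient is $2\theta$:

```python
def gradient_descent_1d(theta0, alpha=0.1, steps=100):
    """Repeatedly step against the gradient of J(theta) = theta**2."""
    theta = theta0
    for _ in range(steps):
        grad = 2 * theta          # dJ/dtheta for J = theta**2
        theta = theta - alpha * grad
    return theta

theta = gradient_descent_1d(theta0=10.0)
print(theta)  # converges toward the minimum at theta = 0
```

Each step multiplies $\theta$ by $(1 - 2\alpha)$, so with $0 < \alpha < 1$ the iterate shrinks geometrically toward the minimum.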
https://www.douban.com/note/713353797/

The cost or loss function:

$$Cost = \frac{1}{N} \sum_{i = 1}^N (Y' -Y)^2$$

<img src='./img/stats/x2.webp' width = '700' align = 'center'>

Parameters with small changes:

$$ m_1 = m_0 - \delta m, b_1 = b_0 - \delta b$$

The cost function J is a function of m and b:

$$J_{m, b} = \frac{1}{N} \sum_{i = 1}^N (Y' -Y)^2 = \frac{1}{N} \sum_{i = 1}^N Error_i^2$$

$$\frac{\partial J}{\partial m} = 2 Error \frac{\partial}{\partial m}Error$$

$$\frac{\partial J}{\partial b} = 2 Error \frac{\partial}{\partial b}Error$$

Let's fit the data with linear regression:

$$\frac{\partial}{\partial m}Error = \frac{\partial}{\partial m}(Y' - Y) = \frac{\partial}{\partial m}(mX + b - Y)$$

Since $X, b, Y$ do not depend on $m$:

$$\frac{\partial}{\partial m}Error = X$$

$$\frac{\partial}{\partial b}Error = \frac{\partial}{\partial b}(Y' - Y) = \frac{\partial}{\partial b}(mX + b - Y)$$

Since $X, m, Y$ do not depend on $b$:

$$\frac{\partial}{\partial b}Error = 1$$

Thus:

$$\frac{\partial J}{\partial m} = 2 * Error * X$$

$$\frac{\partial J}{\partial b} = 2 * Error$$

Let's drop the constant 2 and multiply by the learning rate $\alpha$, which determines how large a step to take:

$$\delta m = Error * X * \alpha$$

$$\delta b = Error * \alpha$$

Since $ m_1 = m_0 - \delta m, b_1 = b_0 - \delta b$:

$$ m_1 = m_0 - Error * X * \alpha$$

$$b_1 = b_0 - Error * \alpha$$

**Notice** that the intercept b can be viewed as the coefficient of a constant feature X = 1. Thus, the two update equations above are in essence the same.

Let's represent the parameters as $\Theta$, the learning rate as $\alpha$, and the gradient as $\bigtriangledown J(\Theta)$; then:

$$\Theta_1 = \Theta_0 - \alpha \bigtriangledown J(\Theta)$$

<img src='./img/stats/gd.webp' width = '800' align = 'center'>

Hence, to solve for the gradient, we iterate through our data points using our new $m$ and $b$ values and compute the partial derivatives.
This new gradient tells us

- the slope of our cost function at our current position
- the direction we should move to update our parameters.
- The size of our update is controlled by the learning rate.

```
import numpy as np

# Size of the points dataset.
m = 20

# Points x-coordinate and dummy value (x0, x1).
X0 = np.ones((m, 1))
X1 = np.arange(1, m+1).reshape(m, 1)
X = np.hstack((X0, X1))

# Points y-coordinate
y = np.array([3, 4, 5, 5, 2, 4, 7, 8, 11, 8, 12,
              11, 13, 13, 16, 17, 18, 17, 19, 21]).reshape(m, 1)

# The Learning Rate alpha.
alpha = 0.01

def error_function(theta, X, y):
    '''Error function J definition.'''
    diff = np.dot(X, theta) - y
    return (1. / (2 * m)) * np.dot(np.transpose(diff), diff)

def gradient_function(theta, X, y):
    '''Gradient of the function J definition.'''
    diff = np.dot(X, theta) - y
    return (1. / m) * np.dot(np.transpose(X), diff)

def gradient_descent(X, y, alpha):
    '''Perform gradient descent.'''
    theta = np.array([1, 1]).reshape(2, 1)
    gradient = gradient_function(theta, X, y)
    while not np.all(np.absolute(gradient) <= 1e-5):
        theta = theta - alpha * gradient
        gradient = gradient_function(theta, X, y)
    return theta

# source: https://www.jianshu.com/p/c7e642877b0e
optimal = gradient_descent(X, y, alpha)
print('Optimal parameters Theta:', optimal[0][0], optimal[1][0])
print('Error function:', error_function(optimal, X, y)[0, 0])
```

# This is the End!

# Estimating the Gradient

If f is a function of one variable, its derivative at a point x measures how f(x) changes when we make a very small change to x.

> It is defined as the limit of the difference quotients. (The difference quotient is the change in the dependent variable divided by the change in the independent variable.)

```
def difference_quotient(f, x, h):
    return (f(x + h) - f(x)) / h
```

For many functions it's easy to exactly calculate derivatives.
For example, the square function `def square(x): return x * x` has the derivative `def derivative(x): return 2 * x`:

```
def square(x):
    return x * x

def derivative(x):
    return 2 * x

derivative_estimate = lambda x: difference_quotient(square, x, h=0.00001)

def sum_of_squares(v):
    """computes the sum of squared elements in v"""
    return sum(v_i ** 2 for v_i in v)

# plot to show they're basically the same
import matplotlib.pyplot as plt
x = range(-10, 10)
plt.plot(x, list(map(derivative, x)), 'rx')           # red  x
plt.plot(x, list(map(derivative_estimate, x)), 'b+')  # blue +
plt.show()
```

When f is a function of many variables, it has multiple partial derivatives.

```
def partial_difference_quotient(f, v, i, h):
    # add h to just the i-th element of v
    w = [v_j + (h if j == i else 0) for j, v_j in enumerate(v)]
    return (f(w) - f(v)) / h

def estimate_gradient(f, v, h=0.00001):
    return [partial_difference_quotient(f, v, i, h)
            for i, _ in enumerate(v)]
```

# Using the Gradient

```
def step(v, direction, step_size):
    """move step_size in the direction from v"""
    return [v_i + step_size * direction_i
            for v_i, direction_i in zip(v, direction)]

def sum_of_squares_gradient(v):
    return [2 * v_i for v_i in v]

from collections import Counter
from linear_algebra import distance, vector_subtract, scalar_multiply
from functools import reduce
import math, random

print("using the gradient")

# generate 3 random starting numbers
v = [random.randint(-10, 10) for i in range(3)]
print(v)

tolerance = 0.0000001
n = 0
while True:
    gradient = sum_of_squares_gradient(v)   # compute the gradient at v
    if n % 50 == 0:
        print(v, sum_of_squares(v))
    next_v = step(v, gradient, -0.01)       # take a negative gradient step
    if distance(next_v, v) < tolerance:     # stop if we're converging
        break
    v = next_v                              # continue if we're not
    n += 1

print("minimum v", v)
print("minimum value", sum_of_squares(v))
```

# Choosing the Right Step Size

Although the rationale for moving against the gradient is clear,

- how far to move is not.
- Indeed, choosing the right step size is more of an art than a science.

Methods:

1. Using a fixed step size
1. Gradually shrinking the step size over time
1. At each step, choosing the step size that minimizes the value of the objective function

```
step_sizes = [100, 10, 1, 0.1, 0.01, 0.001, 0.0001, 0.00001]
```

It is possible that certain step sizes will result in invalid inputs for our function. So we'll need to create a "safe apply" function that

- returns infinity for invalid inputs
- (infinity should never be the minimum of anything)

```
def safe(f):
    """define a new function that wraps f and return it"""
    def safe_f(*args, **kwargs):
        try:
            return f(*args, **kwargs)
        except Exception:
            return float('inf')  # this means "infinity" in Python
    return safe_f
```

# Putting It All Together

We minimize a **target_fn** using its **gradient_fn**. For example, the target_fn could represent the errors in a model as a function of its parameters. We also choose a starting value for the parameters, `theta_0`.

```
def minimize_batch(target_fn, gradient_fn, theta_0, tolerance=0.000001):
    """use gradient descent to find theta that minimizes target function"""
    step_sizes = [100, 10, 1, 0.1, 0.01, 0.001, 0.0001, 0.00001]

    theta = theta_0               # set theta to initial value
    target_fn = safe(target_fn)   # safe version of target_fn
    value = target_fn(theta)      # value we're minimizing

    while True:
        gradient = gradient_fn(theta)
        next_thetas = [step(theta, gradient, -step_size)
                       for step_size in step_sizes]
        # choose the one that minimizes the error function
        next_theta = min(next_thetas, key=target_fn)
        next_value = target_fn(next_theta)
        # stop if we're "converging"
        if abs(value - next_value) < tolerance:
            return theta
        else:
            theta, value = next_theta, next_value

# minimize_batch
v = [random.randint(-10, 10) for i in range(3)]
v = minimize_batch(sum_of_squares, sum_of_squares_gradient, v)
print("minimum v", v)
print("minimum value", sum_of_squares(v))
```

Sometimes we'll instead want to maximize a function, which we can do by
minimizing its negative:

```
def negate(f):
    """return a function that for any input x returns -f(x)"""
    return lambda *args, **kwargs: -f(*args, **kwargs)

def negate_all(f):
    """the same when f returns a list of numbers"""
    return lambda *args, **kwargs: [-y for y in f(*args, **kwargs)]

def maximize_batch(target_fn, gradient_fn, theta_0, tolerance=0.000001):
    return minimize_batch(negate(target_fn),
                          negate_all(gradient_fn),
                          theta_0,
                          tolerance)
```

Using the batch approach, each gradient step requires us to make a prediction and compute the gradient for the whole data set, which makes each step take a long time.

Error functions are additive:

- The predictive error on the whole data set is simply the sum of the predictive errors for each data point.

When this is the case, we can instead apply a technique called **stochastic gradient descent**,

- which computes the gradient (and takes a step) for only one point at a time.
- It cycles over our data repeatedly until it reaches a stopping point.

# Stochastic Gradient Descent

During each cycle, we'll want to iterate through our data in a random order:

```
def in_random_order(data):
    """generator that returns the elements of data in random order"""
    indexes = [i for i, _ in enumerate(data)]  # create a list of indexes
    random.shuffle(indexes)                    # shuffle them
    for i in indexes:                          # return the data in that order
        yield data[i]
```

This approach avoids circling around near a minimum forever:

- whenever we stop getting improvements we'll decrease the step size and eventually quit.
```
def minimize_stochastic(target_fn, gradient_fn, x, y, theta_0, alpha_0=0.01):
    data = list(zip(x, y))
    theta = theta_0                            # initial guess
    alpha = alpha_0                            # initial step size
    min_theta, min_value = None, float("inf")  # the minimum so far
    iterations_with_no_improvement = 0

    # if we ever go 100 iterations with no improvement, stop
    while iterations_with_no_improvement < 100:
        value = sum(target_fn(x_i, y_i, theta) for x_i, y_i in data)

        if value < min_value:
            # if we've found a new minimum, remember it
            # and go back to the original step size
            min_theta, min_value = theta, value
            iterations_with_no_improvement = 0
            alpha = alpha_0
        else:
            # otherwise we're not improving, so try shrinking the step size
            iterations_with_no_improvement += 1
            alpha *= 0.9

        # and take a gradient step for each of the data points
        for x_i, y_i in in_random_order(data):
            gradient_i = gradient_fn(x_i, y_i, theta)
            theta = vector_subtract(theta, scalar_multiply(alpha, gradient_i))

    return min_theta

def maximize_stochastic(target_fn, gradient_fn, x, y, theta_0, alpha_0=0.01):
    return minimize_stochastic(negate(target_fn),
                               negate_all(gradient_fn),
                               x, y, theta_0, alpha_0)

print("using minimize_stochastic")

# target_fn and gradient_fn must take (x_i, y_i, theta), so we use the
# per-point squared error of a one-parameter linear fit y ~ theta * x
def squared_error(x_i, y_i, theta):
    return (theta[0] * x_i - y_i) ** 2

def squared_error_gradient(x_i, y_i, theta):
    return [2 * (theta[0] * x_i - y_i) * x_i]

x = list(range(101))
y = [3 * x_i + random.randint(-10, 20) for x_i in x]
theta_0 = [random.random()]

# a small step size keeps the updates stable for x values up to 100
theta = minimize_stochastic(squared_error, squared_error_gradient,
                            x, y, theta_0, alpha_0=0.00001)
print("minimum theta", theta)
print("minimum value", sum(squared_error(x_i, y_i, theta) for x_i, y_i in zip(x, y)))
```

Scikit-learn has a Stochastic Gradient Descent module: http://scikit-learn.org/stable/modules/sgd.html
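Stripped of the book's helper functions, the core of stochastic gradient descent for a one-parameter linear fit can be sketched in a few self-contained lines (`sgd_slope` and its hyperparameters are illustrative choices, not from the text):

```python
import random

def sgd_slope(xs, ys, alpha=0.0005, epochs=50, seed=0):
    """Fit y ~ m*x by updating m one data point at a time."""
    rng = random.Random(seed)
    m = 0.0
    data = list(zip(xs, ys))
    for _ in range(epochs):
        rng.shuffle(data)                 # random order each cycle
        for x_i, y_i in data:
            error = m * x_i - y_i         # prediction error on one point
            m -= alpha * 2 * error * x_i  # gradient of (m*x_i - y_i)**2
    return m

xs = list(range(1, 21))
ys = [3 * x for x in xs]                  # noiseless y = 3x
print(sgd_slope(xs, ys))                  # close to 3
```

Each single-point update multiplies the error in `m` by `(1 - 2*alpha*x_i**2)`, so the step size must be small enough that this factor stays below 1 in magnitude for the largest `x_i`.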
<a href="https://colab.research.google.com/github/Build-Week-Saltiest-Hack-News-Trolls-2/datascience/blob/Moly-malibu-patch-1/Sentimental_Analysis_pre_model.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

# Sentiment Analysis Project:

```
!pip install vaderSentiment

# import libraries
import re
import string
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
import pandas as pd
import numpy as np
import spacy
from collections import Counter
from bs4 import BeautifulSoup

# create dataset
df = pd.read_csv('saltyhacker.csv')

# cast the column to str
df['Text'] = df['Text'].astype(str)
```

# CLEAN DATA

```
# strip HTML from the text
def clean_description(desc):
    soup = BeautifulSoup(desc, 'html.parser')
    return soup.get_text()

df['rating'] = df['Text'].apply(clean_description)
df['words_length'] = df['rating'].str.len()

# clean HTML with lxml
import lxml.html.clean
lxml.html.clean.clean_html('<html><head></head><body onload="loadfunc()">my text</body></html>')

print(BeautifulSoup('<', 'html.parser').string)
print(BeautifulSoup('&', 'html.parser').string)

# remove whitespace and lowercase
df['rating'] = df['rating'].str.strip().str.lower()
df['Text'] = df['Text'].str.strip().str.lower()

# check whether entries start with a date
df['rating'].str.match(r'\d?\d/\d?\d/\d{4}').all()

# \s indicates a white space, so [^a-zA-Z\s] removes everything
# that is neither a letter nor white space
df['rating'] = df['rating'].str.replace(r'[^a-zA-Z\s]', '').str.replace(r'\s+', ' ')

# replace occurrences of the pattern/regex in the Series/Index
df['Text'] = df['Text'].str.replace(r'[^a-zA-Z\s]', '').str.replace(r'\s+', ' ')

df.head()

df['rating'].value_counts(normalize=True)
```

# SENTIMENT ANALYSIS USING THE VADER MODEL

"VADER (Valence Aware Dictionary and sEntiment Reasoner) is a sentiment intensity tool added to NLTK in 2014. Unlike other techniques that require training on related text before use, VADER is ready to go for analysis without any special setup.
VADER is unique in that it makes fine-tuned distinctions between varying degrees of positivity and negativity. For example, VADER scores "comfort" moderately positively and "euphoria" extremely positively. It also attempts to capture and score textual features common in informal online text such as capitalizations, exclamation points, and emoticons."

https://programminghistorian.org/en/lessons/sentiment-analysis

http://www.nltk.org/_modules/nltk/sentiment/vader.html

```
# VADER model
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

def vaderize(sentence):
    return analyzer.polarity_scores(sentence)

# create a Scores column with the polarity numbers
df['Scores'] = df['rating'].apply(vaderize)

# split the scores into separate columns
df[['negative', 'neutral', 'positive', 'compound']] = df.Scores.apply(pd.Series)

for text in df.sort_values(by='neutral', ascending=False)['rating'].head(5):
    print(f"------ Topic ------")
    print(text, end="\n\n")

# see the counts per column
df['positive'].value_counts()
df['neutral'].value_counts()
df['negative'].value_counts()

# word cloud of the words that received a negative score
from wordcloud import WordCloud
import matplotlib.pyplot as plt

negative_words = ' '.join([text for text in df['rating'][df['negative'] > 0]])
wordcloud = WordCloud(width=800, height=500, random_state=21, max_font_size=110).generate(negative_words)
plt.figure(figsize=(10, 7))
plt.imshow(wordcloud, interpolation="bilinear")
plt.axis('off')
plt.show()

# calculate the VADER compound sentiment score
df['final_score'] = df['Text'].apply(lambda x: analyzer.polarity_scores(x)['compound'])

# bin the score into 5 equal-width buckets labelled 1-5
df['final_pred'] = pd.cut(df['final_score'], bins=5, labels=[1, 2, 3, 4, 5])
df = df.drop('final_score', axis=1)
df.head(7)

# percentage value in a column by category
df['final_pred'].value_counts(normalize=True) * 100
```

# SIMPLE MODEL USING THE TEXTBLOB LIBRARY

TextBlob is a Python (2 and 3) library for processing textual data. It provides a consistent, simple API for diving into common natural language processing (NLP) tasks such as part-of-speech tagging, noun phrase extraction, and sentiment analysis.

# NOTE: This small model generally shows whether the text is neutral, positive, or negative, which is essentially what the whole project is looking for. It corroborates the VADER model's finding that, judging by the percentages in each column, most words are neutral.

```
# model using TextBlob
import csv
from textblob import TextBlob

article = 'saltyhacker.csv'
all_text = []

with open(article, 'r') as csvfile:
    rows = csv.reader(csvfile)
    for row in rows:
        sentence = row[0]
        print(sentence)
        blob = TextBlob(sentence)
        print(blob.sentiment)
        all_text.append(sentence)

# create a TextBlob object for the whole text
obj = TextBlob(' '.join(all_text))

# polarity is a value between -1 and 1
sentiment = obj.sentiment.polarity
print(sentiment)

if sentiment == 0:
    print('The text is neutral')
elif sentiment > 0:
    print('The text is positive')
else:
    print('The text is negative')
```

# CONCLUSION

```
# percentage value in a column by category
df['final_pred'].value_counts(normalize=True) * 100
```

# COMPARING TEXTBLOB AND VADER SENTIMENT

We can conclude that both models can show whether the text or article is, in general, neutral, positive, or negative. In this case, both models confirm that the majority of the words are neutral.
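To make the lexicon idea behind both tools concrete, here is a deliberately toy polarity scorer. This is neither VADER nor TextBlob, just a schematic illustration with a made-up four-word lexicon; real tools ship thousands of human-scored words plus rules for negation, intensifiers, and punctuation:

```python
# Hypothetical mini-lexicon; scores are invented for illustration.
LEXICON = {"good": 1.0, "great": 2.0, "bad": -1.0, "terrible": -2.0}

def toy_polarity(text):
    """Average the scores of known words; 0.0 means neutral/unknown."""
    words = text.lower().split()
    scores = [LEXICON[w] for w in words if w in LEXICON]
    return sum(scores) / len(scores) if scores else 0.0

print(toy_polarity("this library is great"))   # 2.0  (positive)
print(toy_polarity("the docs are terrible"))   # -2.0 (negative)
print(toy_polarity("hello world"))             # 0.0  (neutral)
```

With most words absent from any sentiment lexicon, the average score of typical text sits near zero, which is one intuition for why both models above classify the bulk of the comments as neutral.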
# What is probability? A simulated introduction ``` #Import packages import numpy as np import pandas as pd import seaborn as sns import matplotlib.pyplot as plt %matplotlib inline sns.set() ``` ## Learning Objectives of Part 1 - To have an understanding of what "probability" means, in both Bayesian and Frequentist terms; - To be able to simulate probability distributions that model real-world phenomena; - To understand how probability distributions relate to data-generating **stories**. ## Probability > To the pioneers such as Bernoulli, Bayes and Laplace, a probability represented a _degree-of-belief_ or plausibility; how much they thought that something was true, based on the evidence at hand. To the 19th century scholars, however, this seemed too vague and subjective an idea to be the basis of a rigorous mathematical theory. So they redefined probability as the _long-run relative frequency_ with which an event occurred, given (infinitely) many repeated (experimental) trials. Since frequencies can be measured, probability was now seen as an objective tool for dealing with _random_ phenomena. -- _Data Analysis, A Bayesian Tutorial_, Sivia & Skilling (p. 9) What type of random phenomena are we talking about here? One example is: - Knowing that a website has a click-through rate (CTR) of 10%, we can calculate the probability of having 10 people, 9 people, 8 people ... and so on click through, upon drawing 10 people randomly from the population; - But given the data of how many people click through, how can we calculate the CTR? And how certain can we be of this CTR? Or how likely is a particular CTR? Science mostly asks questions of the second form above & Bayesian thinking provides a wonderful framework for answering such questions. Essentially Bayes' Theorem gives us a way of moving from the probability of the data given the model (written as $P(data|model)$) to the probability of the model given the data ($P(model|data)$). 
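As a tiny numeric illustration of that move (illustrative numbers, not from the text): suppose the CTR is either 0.1 or 0.5 with equal prior belief, and we observe 3 clicks out of 10 draws. Bayes' theorem turns the two binomial likelihoods $P(data|model)$ into posteriors $P(model|data)$:

```python
from math import comb

def binom_pmf(k, n, p):
    """P(k successes in n trials with success probability p)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Likelihood of seeing 3 clicks out of 10 under each candidate CTR
like_low = binom_pmf(3, 10, 0.1)
like_high = binom_pmf(3, 10, 0.5)

# Equal priors, so the posteriors are just the normalized likelihoods
evidence = 0.5 * like_low + 0.5 * like_high
post_low = 0.5 * like_low / evidence
post_high = 0.5 * like_high / evidence

print(post_low, post_high)  # the two posteriors sum to 1
```

Observing 3/10 clicks shifts belief toward CTR = 0.5 but leaves real probability on CTR = 0.1, which is exactly the "how certain can we be?" question posed above.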
We'll first explore questions of the 1st type using simulation: knowing the model, what is the probability of seeing certain data?

## Simulating probabilities

* Let's say that a website has a CTR of 50%, i.e. that 50% of people click through. If we picked 1000 people at random from the population, how likely would it be to find that a certain number of people click?

We can simulate this using `numpy`'s random number generator.

To do so, first note we can use `np.random.rand()` to randomly select floats between 0 and 1 (known as the _uniform distribution_). Below, we do so and plot a histogram:

```
# Draw 1,000 samples from uniform & plot results
x = np.random.rand(1000)
plt.hist(x, bins=20);
```

To then simulate the sampling from the population, we check whether each float was greater or less than 0.5. If less than or equal to 0.5, we say the person clicked.

```
# Compute how many people clicked
clicks = x <= 0.5
n_clicks = clicks.sum()
f"Number of clicks = {n_clicks}"
```

The proportion of people who clicked can be calculated as the total number of clicks over the number of people:

```
# Compute the proportion of people who clicked
f"Proportion who clicked = {n_clicks/len(clicks)}"
```

**Discussion**: Did you get the same answer as your neighbour? If you did, why? If not, why not?

**Up for discussion:** Let's say that all you had was this data and you wanted to figure out the CTR (probability of clicking).

* What would your estimate be?
* Bonus points: how confident would you be of your estimate?

**Note:** Although, in the above, we have described _probability_ in two ways, we have not described it mathematically. We're not going to do so rigorously here, but we will say that _probability_ defines a function from the space of possibilities (in the above, the interval $[0,1]$) that describes how likely it is to get a particular point or region in that space.
Mike Betancourt has an elegant [Introduction to Probability Theory (For Scientists and Engineers)](https://betanalpha.github.io/assets/case_studies/probability_theory.html) that I can recommend.

### Hands-on: clicking

Use random sampling to simulate how many people click when the CTR is 0.7. How many click? What proportion?

```
# Solution
clicks = x <= 0.7
n_clicks = clicks.sum()
print(f"Number of clicks = {n_clicks}")
print(f"Proportion who clicked = {n_clicks/len(clicks)}")
```

_Discussion point_: This model is known as the biased coin flip.

- Can you see why?
- Can it be used to model other phenomena?

### Galapagos finch beaks

You can also calculate such proportions with real-world data. Here we import a dataset of finch beak measurements from the Galápagos islands. You can find the data [here](https://datadryad.org/resource/doi:10.5061/dryad.9gh90).

```
# Import and view head of data
df_12 = pd.read_csv('../data/finch_beaks_2012.csv')
df_12.head()

# Store lengths in a pandas series
lengths = df_12['blength']
```

* What proportion of birds have a beak length > 10?

```
p = sum(lengths > 10) / len(lengths)
p
```

**Note:** This is the proportion of birds that have beak length $>10$ in your empirical data, not the probability that any bird drawn from the population will have beak length $>10$.

### Proportion: A proxy for probability

As stated above, we have calculated a proportion, not a probability. As a proxy for the probability, we can simulate drawing random samples (with replacement) from the data, see how many lengths are > 10, and calculate the proportion (commonly referred to as [hacker statistics](https://speakerdeck.com/jakevdp/statistics-for-hackers)):

```
n_samples = 10000
sum(np.random.choice(lengths, n_samples, replace=True) > 10) / n_samples
```

### Another way to simulate coin-flips

In the above, you have used the uniform distribution to sample from a series of biased coin flips.
I want to introduce you to another distribution that you can also use to do so: the **binomial distribution**.

The **binomial distribution** with parameters $n$ and $p$ is defined as the probability distribution of

> the number of heads seen when flipping a coin $n$ times with $P(heads)=p$.

**Note** that this distribution essentially tells the **story** of a general model in the following sense: if we believe that the underlying process generating the observed data has a binary outcome (affected by disease or not, heads or not, 0 or 1, clicked through or not), and that one of the two outcomes occurs with probability $p$, then the probability of seeing a particular outcome is given by the **binomial distribution** with parameters $n$ and $p$.

Any process that matches the coin flip story is a Binomial process (note that you'll see such coin flips also referred to as Bernoulli trials in the literature). So we can also formulate the story of the Binomial distribution as

> the number $r$ of successes in $n$ Bernoulli trials with probability $p$ of success is Binomially distributed.

We'll now use the binomial distribution to answer the same question as above:

* If P(heads) = 0.7 and you flip the coin ten times, how many heads will come up?

We'll also set the seed to ensure reproducible results.

```
# Set seed
np.random.seed(42)

# Simulate one run of flipping the biased coin 10 times
np.random.binomial(10, 0.7)
```

### Simulating many times to get the distribution

In the above, we have simulated the scenario once. But this only tells us one potential outcome. To see how likely it is to get $n$ heads, for example, we need to simulate it a lot of times and check what proportion ended up with $n$ heads.

```
# Simulate 10,000 runs of flipping the biased coin 10 times
x = np.random.binomial(10, 0.7, size=10_000)

# Plot normalized histogram of results
plt.hist(x, density=True, bins=10);
```

* Group chat: what do you see in the above?
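As an aside (not part of the original exercise), the simulated histogram can be cross-checked against the exact binomial probabilities, computed with `math.comb`:

```python
from math import comb

def binom_pmf(k, n, p):
    """Exact P(k heads in n flips with P(heads) = p)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, p = 10, 0.7
pmf = [binom_pmf(k, n, p) for k in range(n + 1)]

# The most likely number of heads, matching the peak of the histogram
print(max(range(n + 1), key=lambda k: pmf[k]))  # -> 7
```

The simulated proportions converge to these exact values as the number of runs grows, which is the frequentist definition of probability quoted at the start of this notebook.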
### Hands-on: Probabilities

- If I flip a biased coin ($P(H)=0.3$) 20 times, what is the probability of 5 or more heads?

```
# Calculate the probability of 5 or more heads for p=0.3
sum(np.random.binomial(20, 0.3, 10_000) >= 5) / 10_000
```

- If I flip a fair coin 20 times, what is the probability of 5 or more heads?

```
# Calculate the probability of 5 or more heads for p=0.5
sum(np.random.binomial(20, 0.5, 10_000) >= 5) / 10_000
```

- Plot the normalized histogram of number of heads of the following experiment: flipping a fair coin 10 times.

```
# Plot histogram
x = np.random.binomial(10, 0.5, 10_000)
plt.hist(x);
```

**Note:** you may have noticed that the _binomial distribution_ can take on only a finite number of values, whereas the _uniform distribution_ above can take on any number between $0$ and $1$. These are different enough cases to warrant special mention and two different names: the former is described by a _probability mass function_ (PMF) and the latter by a _probability density function_ (PDF). Time permitting, we may discuss some of the subtleties here. If not, all good texts will cover this. I like (Sivia & Skilling, 2006), among many others.

**Question:**

* Looking at the histogram, can you tell me the probability of seeing 4 or more heads?

Enter the ECDF.

## Empirical cumulative distribution functions (ECDFs)

An ECDF is, as an alternative to a histogram, a way to visualize univariate data that is rich in information. It allows you to visualize all of your data and, by doing so, avoids the very real problem of binning.

- can plot control plus experiment
- data plus model!
- many populations
- can see multimodality (though less pronounced) -- a mode becomes a point of inflexion!
- can read off so much: e.g. percentiles.

See Eric Ma's great post on ECDFs [here](https://ericmjl.github.io/blog/2018/7/14/ecdfs/) AND [this twitter thread](https://twitter.com/allendowney/status/1019171696572583936) (thanks, Allen Downey!).

So what is this ECDF?
**Definition:** In an ECDF, the x-axis is the range of possible values for the data & for any given x-value, the corresponding y-value is the proportion of data points less than or equal to that x-value.

Let's define a handy ECDF function that takes in data and outputs $x$ and $y$ data for the ECDF.

```
def ecdf(data):
    """Compute ECDF for a one-dimensional array of measurements."""
    # Number of data points
    n = len(data)

    # x-data for the ECDF
    x = np.sort(data)

    # y-data for the ECDF
    y = np.arange(1, n+1) / n

    return x, y
```

### Hands-on: Plotting ECDFs

Plot the ECDF for the previous hands-on exercise. Read the answer to the following question off the ECDF: what is the probability of seeing 4 or more heads?

```
# Generate x- and y-data for the ECDF
x_flips, y_flips = ecdf(x)

# Plot the ECDF
plt.plot(x_flips, y_flips, marker=".")
```

## Probability distributions and their stories

**Credit:** Thank you to [Justin Bois](http://bois.caltech.edu/) for countless hours of discussion, work and collaboration on thinking about probability distributions and their stories. All of the following is inspired by Justin & his work, if not explicitly drawn from it.

___

In the above, we saw that we could match data-generating processes with binary outcomes to the story of the binomial distribution.

> The Binomial distribution's story is as follows: the number $r$ of successes in $n$ Bernoulli trials with probability $p$ of success is Binomially distributed.

There are many other distributions with stories also!

### Poisson processes and the Poisson distribution

In the book [Information Theory, Inference and Learning Algorithms](https://www.amazon.com/Information-Theory-Inference-Learning-Algorithms/dp/0521642981) David MacKay tells the tale of a town called Poissonville, in which the buses have an odd schedule. Standing at a bus stop in Poissonville, the amount of time you have to wait for a bus is totally independent of when the previous bus arrived.
This means you could watch a bus drive off and another arrive almost instantaneously, or you could be waiting for hours.

Arrival of buses in Poissonville is what we call a Poisson process. The timing of the next event is completely independent of when the previous event happened. Many real-life processes behave in this way:

* natural births in a given hospital (there is a well-defined average number of natural births per year, and the timing of one birth is independent of the timing of the previous one);
* landings on a website;
* meteor strikes;
* molecular collisions in a gas;
* aviation incidents.

Any process that matches the buses-in-Poissonville **story** is a Poisson process. The number of arrivals of a Poisson process in a given amount of time is Poisson distributed. The Poisson distribution has one parameter, the average number of arrivals in a given length of time. So, to match the story, we could consider the number of hits on a website in an hour with an average of six hits per hour. This is Poisson distributed.

```
# Generate Poisson-distributed data
samples = np.random.poisson(6, 10**6)

# Plot histogram
plt.hist(samples, bins=21);
```

**Question:** Does this look like anything to you?

In fact, the Poisson distribution is the limit of the Binomial distribution for low probability of success and a large number of trials, that is, for rare events.

To see this, think about the stories. Picture this: you're doing a Bernoulli trial once a minute for an hour, each with a success probability of 0.05. We would do 60 trials, and the number of successes is Binomially distributed; we would expect to get about 3 successes. This is just like the Poisson story of seeing 3 buses on average arrive in a given interval of time. Thus the Poisson distribution with arrival rate equal to $np$ approximates a Binomial distribution for $n$ Bernoulli trials with probability $p$ of success (with $n$ large and $p$ small).
This is useful because the Poisson distribution can be simpler to work with, as it has only one parameter instead of two for the Binomial distribution. #### Hands-on: Poisson Plot the ECDF of the Poisson-distributed data that you generated above. ``` # Generate x- and y-data for the ECDF x_p, y_p = ecdf(samples) # Plot the ECDF plt.plot(x_p, y_p, marker="."); ``` #### Example Poisson distribution: field goals attempted per game This section is explicitly taken from the great work of Justin Bois. You can find more [here](https://github.com/justinbois/dataframed-plot-examples/blob/master/lebron_field_goals.ipynb). Let's first remind ourselves of the story behind the Poisson distribution. > The number of arrivals of a Poisson process in a given set time interval is Poisson distributed. To quote Justin Bois: > We could model field goal attempts in a basketball game using a Poisson distribution. When a player takes a shot is a largely stochastic process, being influenced by the myriad ebbs and flows of a basketball game. Some players shoot more than others, though, so there is a well-defined rate of shooting. Let's consider LeBron James's field goal attempts for the 2017-2018 NBA season. First things first, the data ([from here](https://www.basketball-reference.com/players/j/jamesle01/gamelog/2018)): ``` fga = [19, 16, 15, 20, 20, 11, 15, 22, 34, 17, 20, 24, 14, 14, 24, 26, 14, 17, 20, 23, 16, 11, 22, 15, 18, 22, 23, 13, 18, 15, 23, 22, 23, 18, 17, 22, 17, 15, 23, 8, 16, 25, 18, 16, 17, 23, 17, 15, 20, 21, 10, 17, 22, 20, 20, 23, 17, 18, 16, 25, 25, 24, 19, 17, 25, 20, 20, 14, 25, 26, 29, 19, 16, 19, 18, 26, 24, 21, 14, 20, 29, 16, 9] ``` To show that LeBron's attempts are approximately Poisson distributed, you're now going to plot the ECDF and compare it with the ECDF of the Poisson distribution that has the mean of the data (technically, this is the maximum likelihood estimate).
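Before the hands-on, the Binomial-to-Poisson limit described earlier can be checked numerically. This is a small sketch under assumed parameters: the rate of 6 mirrors the six-hits-per-hour example, and the trial counts are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(42)
rate = 6          # target Poisson rate, matching the six-hits-per-hour example
size = 100_000    # number of simulated draws per setting

# Binomial draws with n growing and p shrinking, holding n * p = rate fixed
for n in (60, 600, 6000):
    p = rate / n
    binom = rng.binomial(n, p, size=size)
    print(f"n={n:5d}, p={p:.4f}: mean={binom.mean():.3f}, var={binom.var():.3f}")

# For a true Poisson, the mean and the variance are both equal to the rate
poisson = rng.poisson(rate, size=size)
print(f"Poisson(6):          mean={poisson.mean():.3f}, var={poisson.var():.3f}")
```

As $n$ grows with $np$ held at 6, the Binomial variance $np(1-p)$ climbs toward the Poisson variance $np$, which is one way to see the two stories converge.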
#### Hands-on: Simulating Data Generating Stories Generate the x and y values for the ECDF of LeBron's field goal attempts. ``` # Generate x & y data for ECDF x_ecdf, y_ecdf = ecdf(fga) ``` Now we'll draw samples out of a Poisson distribution to get the theoretical ECDF, plot it with the ECDF of the data and see how they look. ``` # Number of times we simulate the model n_reps = 1000 # Plot ECDF of data plt.plot(x_ecdf, y_ecdf, '.', color='black'); # Plot ECDF of model for _ in range(n_reps): samples = np.random.poisson(np.mean(fga), size=len(fga)) x_theor, y_theor = ecdf(samples) plt.plot(x_theor, y_theor, '.', alpha=0.01, color='lightgray'); # Label your axes plt.xlabel('field goal attempts') plt.ylabel('ECDF'); ``` You can see from the ECDF that LeBron's field goal attempts per game are Poisson distributed. ### Exponential distribution We've encountered a variety of named _discrete distributions_. There are also named _continuous distributions_, such as the Exponential distribution and the Normal (or Gaussian) distribution. To see what the story of the Exponential distribution is, let's return to Poissonville, in which the number of buses that will arrive per hour is Poisson distributed. However, the waiting times between arrivals of a Poisson process are exponentially distributed. So the Exponential distribution has the following story: the waiting time between arrivals of a Poisson process is exponentially distributed. It has a single parameter, the mean waiting time. This distribution is not peaked, as we can see from its PDF. For an illustrative example, let's check out the time between all incidents involving nuclear power since 1974. It's a reasonable first approximation to expect incidents to be well-modeled by a Poisson process, which means the timing of one incident is independent of all others. If this is the case, the time between incidents should be Exponentially distributed.
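Before turning to the nuclear-incident data, the Exponential story itself can be checked with a quick simulation. This sketch assumes an arbitrary mean waiting time of 2.0 time units: accumulating exponential waits builds a Poisson process, and counting arrivals per unit-time window recovers Poisson-distributed counts.

```python
import numpy as np

rng = np.random.default_rng(0)
mean_wait = 2.0

# Build a Poisson process by accumulating exponential waiting times
waits = rng.exponential(mean_wait, size=100_000)
arrival_times = np.cumsum(waits)

# Count arrivals in unit-time windows; these counts should be Poisson
# distributed with rate 1 / mean_wait, so mean ~ variance ~ 0.5
horizon = 100_000
windows = np.floor(arrival_times[arrival_times < horizon]).astype(int)
counts = np.bincount(windows, minlength=horizon)

print(f"mean waiting time:      {waits.mean():.3f} (expected {mean_wait})")
print(f"arrivals per unit time: mean={counts.mean():.3f}, var={counts.var():.3f}")
```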
To see if this story is credible, we can plot the ECDF of the data with the CDF that we'd get from an exponential distribution with the sole parameter, the mean, given by the mean inter-incident time of the data. ``` # Load nuclear power accidents data & create array of inter-incident times df = pd.read_csv('../data/nuclear_power_accidents.csv') df.Date = pd.to_datetime(df.Date) df = df[df.Date >= pd.to_datetime('1974-01-01')] inter_times = np.diff(np.sort(df.Date)).astype(float) / 1e9 / 3600 / 24 # Compute mean and sample from exponential mean = np.mean(inter_times) samples = np.random.exponential(mean, size=10000) # Compute ECDFs for sample & model x, y = ecdf(inter_times) x_theor, y_theor = ecdf(samples) # Plot sample & model ECDFs plt.plot(x_theor, y_theor); plt.plot(x, y, marker='.', linestyle='none'); ``` We see that the data is close to being Exponentially distributed, which means that we can model the nuclear incidents as a Poisson process. ### Normal distribution The Normal distribution, also known as the Gaussian or Bell Curve, appears everywhere. There are many reasons for this. One is the following: > When doing repeated measurements, we expect them to be Normally distributed, owing to the many subprocesses that contribute to a measurement. This is because (a formulation of the Central Limit Theorem) **any quantity that emerges as the sum of a large number of subprocesses tends to be Normally distributed** provided none of the subprocesses is very broadly distributed. Now it's time to see if this holds for the measurements of the speed of light in the famous Michelson–Morley experiment: Below, I'll plot the histogram with a Gaussian curve fitted to it. Even if that looks good, though, that could be due to binning bias. So then you'll plot the ECDF of the data and the CDF of the model!
``` # Load data, plot histogram import scipy.stats as st df = pd.read_csv('../data/michelson_speed_of_light.csv') df = df.rename(columns={'velocity of light in air (km/s)': 'c'}) c = df.c.values x_s = np.linspace(299.6, 300.1, 400) * 1000 plt.plot(x_s, st.norm.pdf(x_s, c.mean(), c.std(ddof=1))) plt.hist(c, bins=9, density=True) plt.xlabel('speed of light (km/s)') plt.ylabel('PDF'); ``` #### Hands-on: Simulating Normal ``` # Get speed of light measurement + mean & standard deviation michelson_speed_of_light = df.c.values mean = np.mean(michelson_speed_of_light) std = np.std(michelson_speed_of_light, ddof=1) # Generate normal samples w/ mean, std of data samples = np.random.normal(mean, std, size=10000) # Generate data ECDF & model CDF x, y = ecdf(michelson_speed_of_light) x_theor, y_theor = ecdf(samples) # Plot data & model (E)CDFs plt.plot(x_theor, y_theor) plt.plot(x, y, marker=".") plt.xlabel('speed of light (km/s)') plt.ylabel('CDF'); ``` Some of you may ask: but is the data really normal? I urge you to check out Allen Downey's post [_Are your data normal? Hint: no._](http://allendowney.blogspot.com/2013/08/are-my-data-normal.html)
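The Central Limit Theorem formulation quoted above (sums of many subprocesses tend to be Normal) is easy to see in simulation. In this sketch each "measurement" is the sum of 100 uniform subprocesses; the choice of uniform is arbitrary, since almost any narrow distribution works.

```python
import numpy as np

rng = np.random.default_rng(1)

# Each "measurement" is the sum of 100 independent uniform subprocesses
n_sub, n_meas = 100, 50_000
sums = rng.uniform(0, 1, size=(n_meas, n_sub)).sum(axis=1)

# CLT prediction: mean n/2, variance n/12
print(f"sample mean: {sums.mean():.2f}  (CLT predicts {n_sub * 0.5})")
print(f"sample std:  {sums.std():.2f}   (CLT predicts {np.sqrt(n_sub / 12):.2f})")

# A Normal puts ~68.3% of its mass within one standard deviation of the mean
within_1sd = np.mean(np.abs(sums - sums.mean()) < sums.std())
print(f"fraction within 1 sd: {within_1sd:.3f}  (Normal predicts ~0.683)")
```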
# hgvs Documentation: Examples This notebook is being drafted to run and review the code presented in the hgvs documentation that is in the "Creating a SequenceVariant from scratch" section (https://hgvs.readthedocs.io/en/stable/examples/creating-a-variant.html#overview). ## User Troubleshooting This notebook documents the user's proposed workflow. User stories and user troubleshooting methods are included. ### User Credentials This section is for people to provide their background. Occupation: Molecular Biologist Experience in Biology: 8 years Experience in Python: ~1 year ## Step 1: Import hgvs modules ### User Story: As a novice Python developer, I review the entire page for code snippets. I collect all of the modules that need to be imported and run them in the first cell. ``` import hgvs.location import hgvs.posedit import hgvs.edit import hgvs.variant import copy ``` ## Troubleshooting: "ImportError: No module named variant" ### User Story: I am confused why one of the modules caused an error. ### Approach: Use `dir()` and `help()` on hgvs and its modules. ### Resolution: Execute example from `help(hgvs)`. Import `hgvs.variantmapper`. ### Comments: `hgvs.variant` and `hgvs.variant.SequenceVariant` are now `hgvs.sequencevariant.SequenceVariant`. ``` dir(hgvs), help(hgvs) # follow example in Description import hgvs.dataproviders.uta import hgvs.parser import hgvs.variantmapper # chosen variant, https://www.ncbi.nlm.nih.gov/snp/rs6025 rs6025 = 'NC_000001.10:g.169519049T>C' # parse variant hp = hgvs.parser.Parser() rs6025P = hp.parse_hgvs_variant(rs6025) rs6025P # SequenceVariant can be pulled apart rs6025P.ac, rs6025P.fill_ref, rs6025P.format, rs6025P.posedit, rs6025P.type, rs6025P.validate # Exploring .fill_ref, .format, .validate dir(rs6025P.fill_ref), dir(rs6025P.format), dir(rs6025P.validate) # create dataprovider variable -- what does this do?
hdp = hgvs.dataproviders.uta.connect() # create assemblymapper variable am = hgvs.assemblymapper ``` ## Troubleshooting: "AttributeError: 'module' object has no attribute 'assemblymapper'" ### Resolution: Import `hgvs.assemblymapper`. ### Comments: ``` # import module import hgvs.assemblymapper ``` End of troubleshooting for **"AttributeError: 'module' object has no attribute 'assemblymapper'"** ``` # create assemblymapper variable, determine transcripts affected am = hgvs.assemblymapper.AssemblyMapper(hdp, alt_aln_method='splign', assembly_name='GRCh37', replace_reference=True) transcripts = am.relevant_transcripts(rs6025P) sorted(transcripts) # map variant to coding sequence rs6025c = am.g_to_c(rs6025P,transcripts[0]) rs6025c # pull apart the SequenceVariant rs6025c.ac, rs6025c.posedit.edit, rs6025c.posedit.pos.start, rs6025c.type ``` End of troubleshooting for **"ImportError: No module named variant"** ## Step 2: Make an Interval to define a position of the edit ``` start = hgvs.location.BaseOffsetPosition(base=200,offset=-6,datum=hgvs.location.CDS_START) start, str(start) ``` ## Troubleshooting: "AttributeError: 'module' object has no attribute 'CDS_START'" ### Resolution: Use `hgvs.location.Datum.` prefix. ### Comments: ``` # Check dir() on hgvs.location and hgvs.posedit dir(hgvs.location) # read doc on 'Datum' and check class list help(hgvs.location.Datum), dir(hgvs.location.Datum) hgvs.location.Datum is hgvs.enums.Datum ``` End of troubleshooting for **"AttributeError: 'module' object has no attribute 'CDS_START'"** ## Step 2 cont.
``` start = hgvs.location.BaseOffsetPosition(base=200,offset=-6,datum=hgvs.location.Datum.CDS_START) start, str(start) end = hgvs.location.BaseOffsetPosition(base=22,datum=hgvs.location.Datum.CDS_END) end, str(end) iv = hgvs.location.Interval(start=start,end=end) iv, str(iv) ``` ## Step 3: Make an edit object ``` edit = hgvs.edit.NARefAlt(ref='A',alt='T') edit, str(edit) posedit = hgvs.posedit.PosEdit(pos=iv,edit=edit) posedit, str(posedit) var = hgvs.variant.SequenceVariant(ac=transcripts[0], type='g', posedit=posedit) var, str(var) # see AttributeError: 'module' object has no attribute 'variant' troubleshooting dir(hgvs), dir(hgvs.sequencevariant) # hgvs.sequencevariant is an accepted class with SequenceVariant as a class var = hgvs.sequencevariant.SequenceVariant(ac=transcripts[0], type='g', posedit=posedit) var, str(var) ``` ## Step 4: Validate the variant See hgvs.validator.Validator for validation options. ``` dir(hgvs.validator.Validator), help(hgvs.validator.Validator) hgvs.validator.Validator.validate(var) hgvs.validator.Validator.validate(var.validate) ``` ## Troubleshooting: "TypeError: unbound method validate() must be called with Validator instance as first argument" ### Resolution: Use `hgvs.sequencevariant.validate_type_ac_pair(ac= , type= )`. 
### Comments: ``` # hgvs.sequencevariant has validate_type_ac_pair val = hgvs.sequencevariant.validate_type_ac_pair(ac=var.ac, type=var.type) val ``` End of troubleshooting for **"TypeError: unbound method validate() must be called with Validator instance as first argument"** ``` var.type = 'c' val = hgvs.sequencevariant.validate_type_ac_pair(ac=var.ac, type=var.type) val ``` ## Step 5: Update variant using copy.deepcopy ``` import copy var2 = copy.deepcopy(var) var2 var2.posedit.pos.start.base = 456 str(var2) var2.posedit.edit.alt = 'CT' str(var2) var2.posedit.pos.end.uncertain = True str(var2) var2 = copy.deepcopy(var) var2.posedit.pos.end.uncertain = True str(var2) ``` ## Troubleshooting: "HGVSUnsupportedOperationError: Cannot compare coordinates of uncertain positions" ### Resolution: None at this time. ### Comments:
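Step 5 above leans on `copy.deepcopy` rather than plain assignment or a shallow copy. Here is a minimal stdlib sketch of the difference, using a generic nested dict as a stand-in for a SequenceVariant; the field names are only illustrative, not the hgvs object model.

```python
import copy

# Stand-in for a variant object holding nested mutable state
variant = {"ac": "SOME_ACCESSION", "posedit": {"pos": {"start": 200, "end": 22}}}

shallow = copy.copy(variant)    # copies only the top-level dict
deep = copy.deepcopy(variant)   # recursively copies nested objects too

deep["posedit"]["pos"]["start"] = 456
print(variant["posedit"]["pos"]["start"])   # 200 -- original untouched

shallow["posedit"]["pos"]["start"] = 999
print(variant["posedit"]["pos"]["start"])   # 999 -- shallow copy shares nested state
```

This is why editing `var2 = copy.deepcopy(var)` never disturbs `var`, while a shallow copy would let edits to nested positions leak back.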
# *Quick, Draw!* GAN In this notebook, we use Generative Adversarial Network code (adapted from [Rowel Atienza's](https://github.com/roatienza/Deep-Learning-Experiments/blob/master/Experiments/Tensorflow/GAN/dcgan_mnist.py) under [MIT License](https://github.com/roatienza/Deep-Learning-Experiments/blob/master/LICENSE)) to create sketches in the style of humans who have played the [*Quick, Draw!* game](https://quickdraw.withgoogle.com) (data available [here](https://github.com/googlecreativelab/quickdraw-dataset) under [Creative Commons Attribution 4.0 license](https://creativecommons.org/licenses/by/4.0/)). #### Load dependencies ``` # for data input and output: import numpy as np import os # for deep learning: import keras from keras.models import Model from keras.layers import Input, Dense, Conv2D, Dropout from keras.layers import BatchNormalization, Flatten from keras.layers import Activation from keras.layers import Reshape # new! from keras.layers import Conv2DTranspose, UpSampling2D # new! from keras.optimizers import RMSprop # new! # for plotting: import pandas as pd from matplotlib import pyplot as plt %matplotlib inline ``` #### Load data NumPy bitmap files are [here](https://console.cloud.google.com/storage/browser/quickdraw_dataset/full/numpy_bitmap) -- pick your own drawing category -- you don't have to pick *apples* :) ``` input_images = "../quickdraw_data/apple.npy" data = np.load(input_images) # 28x28 (sound familiar?) 
grayscale bitmap in numpy .npy format; images are centered data.shape data[4242] data = data/255 data = np.reshape(data,(data.shape[0],28,28,1)) # fourth dimension is color img_w,img_h = data.shape[1:3] data.shape data[4242] plt.imshow(data[4242,:,:,0], cmap='Greys') ``` #### Create discriminator network ``` def build_discriminator(depth=64, p=0.4): # Define inputs image = Input((img_w,img_h,1)) # Convolutional layers conv1 = Conv2D(depth*1, 5, strides=2, padding='same', activation='relu')(image) conv1 = Dropout(p)(conv1) conv2 = Conv2D(depth*2, 5, strides=2, padding='same', activation='relu')(conv1) conv2 = Dropout(p)(conv2) conv3 = Conv2D(depth*4, 5, strides=2, padding='same', activation='relu')(conv2) conv3 = Dropout(p)(conv3) conv4 = Conv2D(depth*8, 5, strides=1, padding='same', activation='relu')(conv3) conv4 = Flatten()(Dropout(p)(conv4)) # Output layer prediction = Dense(1, activation='sigmoid')(conv4) # Model definition model = Model(inputs=image, outputs=prediction) return model discriminator = build_discriminator() discriminator.summary() discriminator.compile(loss='binary_crossentropy', optimizer=RMSprop(lr=0.0008, decay=6e-8, clipvalue=1.0), metrics=['accuracy']) ``` #### Create generator network ``` z_dimensions = 32 def build_generator(latent_dim=z_dimensions, depth=64, p=0.4): # Define inputs noise = Input((latent_dim,)) # First dense layer dense1 = Dense(7*7*depth)(noise) dense1 = BatchNormalization(momentum=0.9)(dense1) # default momentum for moving average is 0.99 dense1 = Activation(activation='relu')(dense1) dense1 = Reshape((7,7,depth))(dense1) dense1 = Dropout(p)(dense1) # De-Convolutional layers conv1 = UpSampling2D()(dense1) conv1 = Conv2DTranspose(int(depth/2), kernel_size=5, padding='same', activation=None,)(conv1) conv1 = BatchNormalization(momentum=0.9)(conv1) conv1 = Activation(activation='relu')(conv1) conv2 = UpSampling2D()(conv1) conv2 = Conv2DTranspose(int(depth/4), kernel_size=5, padding='same', activation=None,)(conv2) conv2 = 
BatchNormalization(momentum=0.9)(conv2) conv2 = Activation(activation='relu')(conv2) conv3 = Conv2DTranspose(int(depth/8), kernel_size=5, padding='same', activation=None,)(conv2) conv3 = BatchNormalization(momentum=0.9)(conv3) conv3 = Activation(activation='relu')(conv3) # Output layer image = Conv2D(1, kernel_size=5, padding='same', activation='sigmoid')(conv3) # Model definition model = Model(inputs=noise, outputs=image) return model generator = build_generator() generator.summary() ``` #### Create adversarial network ``` z = Input(shape=(z_dimensions,)) img = generator(z) discriminator.trainable = False pred = discriminator(img) adversarial_model = Model(z, pred) adversarial_model.compile(loss='binary_crossentropy', optimizer=RMSprop(lr=0.0004, decay=3e-8, clipvalue=1.0), metrics=['accuracy']) ``` #### Train! ``` def train(epochs=2000, batch=128, z_dim=z_dimensions): d_metrics = [] a_metrics = [] running_d_loss = 0 running_d_acc = 0 running_a_loss = 0 running_a_acc = 0 for i in range(epochs): # sample real images: real_imgs = np.reshape( data[np.random.choice(data.shape[0], batch, replace=False)], (batch,28,28,1)) # generate fake images: fake_imgs = generator.predict( np.random.uniform(-1.0, 1.0, size=[batch, z_dim])) # concatenate images as discriminator inputs: x = np.concatenate((real_imgs,fake_imgs)) # assign y labels for discriminator: y = np.ones([2*batch,1]) y[batch:,:] = 0 # train discriminator: d_metrics.append( discriminator.train_on_batch(x,y) ) running_d_loss += d_metrics[-1][0] running_d_acc += d_metrics[-1][1] # adversarial net's noise input and "real" y: noise = np.random.uniform(-1.0, 1.0, size=[batch, z_dim]) y = np.ones([batch,1]) # train adversarial net: a_metrics.append( adversarial_model.train_on_batch(noise,y) ) running_a_loss += a_metrics[-1][0] running_a_acc += a_metrics[-1][1] # periodically print progress & fake images: if (i+1)%100 == 0: print('Epoch #{}'.format(i)) log_mesg = "%d: [D loss: %f, acc: %f]" % \ (i, running_d_loss/i, 
running_d_acc/i) log_mesg = "%s [A loss: %f, acc: %f]" % \ (log_mesg, running_a_loss/i, running_a_acc/i) print(log_mesg) noise = np.random.uniform(-1.0, 1.0, size=[16, z_dim]) gen_imgs = generator.predict(noise) plt.figure(figsize=(5,5)) for k in range(gen_imgs.shape[0]): plt.subplot(4, 4, k+1) plt.imshow(gen_imgs[k, :, :, 0], cmap='gray') plt.axis('off') plt.tight_layout() plt.show() return a_metrics, d_metrics a_metrics_complete, d_metrics_complete = train() ax = pd.DataFrame( { 'Adversarial': [metric[0] for metric in a_metrics_complete], 'Discriminator': [metric[0] for metric in d_metrics_complete], } ).plot(title='Training Loss', logy=True) ax.set_xlabel("Epochs") ax.set_ylabel("Loss") ax = pd.DataFrame( { 'Adversarial': [metric[1] for metric in a_metrics_complete], 'Discriminator': [metric[1] for metric in d_metrics_complete], } ).plot(title='Training Accuracy') ax.set_xlabel("Epochs") ax.set_ylabel("Accuracy") ```
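The label bookkeeping inside `train()` is worth isolating. This numpy sketch assembles one discriminator batch the same way the loop above does, with random arrays standing in for real images and generator output.

```python
import numpy as np

rng = np.random.default_rng(0)
batch = 128

# Stand-ins for a real-image sample and for generator.predict(...) output
real_imgs = rng.random((batch, 28, 28, 1))
fake_imgs = rng.random((batch, 28, 28, 1))

# Discriminator input: reals stacked on top of fakes
x = np.concatenate((real_imgs, fake_imgs))

# Labels: 1 for real, 0 for fake
y = np.ones([2 * batch, 1])
y[batch:, :] = 0

print(x.shape)                             # (256, 28, 28, 1)
print(y[:batch].sum(), y[batch:].sum())    # 128.0 0.0
```

The adversarial step then flips the trick: it feeds pure noise through the (frozen-discriminator) stack with all-ones labels, so the generator is rewarded for fooling the discriminator.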
<a href="https://colab.research.google.com/github/AaronGe88inTHU/dreye-thu/blob/master/DataGenerator.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ``` from google.colab import drive drive.mount('/content/drive') import numpy as np import cv2 from PIL import Image import tensorflow as tf import os, glob from matplotlib import pyplot as plt import tarfile import shutil def target_generate(file_path): cmd_str = os.path.join(file_path,"*.png") file_list = glob.glob(cmd_str) file_list = sorted(file_list, key=lambda name: int(name.split("/")[-1][:-4]))#int(name[:-4])) print("{0} images need to be processed! in {1}".format(len(file_list), cmd_str)) batch_size = 20 image_arrays = [] crop_arrays = [] batches = int(np.floor(len(file_list) / batch_size)) mod = len(file_list) % batch_size for ii in range(batches): image_batch = [] crop_batch = [] for jj in range(batch_size): im = np.array(Image.open(file_list[ii * batch_size + jj])) im = cv2.resize(im, dsize=(448, 448), interpolation=cv2.INTER_CUBIC) cp = im[167:279, 167:279] image_batch.append(im) crop_batch.append(cp) print("{} images resized!".format(batch_size)) print("{} images cropped!".format(batch_size)) image_batch = np.array(image_batch) crop_batch = np.array(crop_batch) image_arrays.extend(image_batch) crop_arrays.extend(crop_batch) #print(len(image_arrays), len(crop_arrays))#plt.imshow(np.array(images[0], dtype=np.int32)) image_batch = [] crop_batch = [] # process only the leftover images, starting where the full batches stopped for jj in range(batches * batch_size, batches * batch_size + mod): im = np.array(Image.open(file_list[jj])) im = cv2.resize(im, dsize=(448, 448), interpolation=cv2.INTER_CUBIC) cp = im[167:279, 167:279] image_batch.append(im) crop_batch.append(cp) print("{} images resized!".format(mod)) print("{} images cropped!".format(mod)) image_batch = np.array(image_batch) image_arrays.extend(image_batch ) crop_batch = np.array(crop_batch) crop_arrays.extend(crop_batch)
image_arrays = np.array(image_arrays) crop_arrays = np.array(crop_arrays) #print(image_arrays.shape, crop_arrays.shape) resize_path = os.path.join(file_path,"resize") crop_path = os.path.join(file_path,"crop") os.mkdir(resize_path) os.mkdir(crop_path) for ii in range(image_arrays.shape[0]): im = Image.fromarray(image_arrays[ii]) im.save(os.path.join(resize_path,"{}.png".format(ii))) im = Image.fromarray(crop_arrays[ii]) im.save(os.path.join(crop_path,"{}.png".format(ii))) print("Saved successfully!") #target_generate('/content/drive/My Drive/dreye/ImagePreprocess') path_name = os.path.join("/content/drive/My Drive/dreye/ImagePreprocess/", "resize") tar = tarfile.open(path_name+".tar.gz", "w:gz") tar.add(path_name, arcname="resize") tar.close() shutil.rmtree(path_name) path_name = os.path.join("/content/drive/My Drive/dreye/ImagePreprocess/", "crop") tar2 = tarfile.open(path_name+".tar.gz", "w:gz") tar2.add(path_name, arcname="crop") tar2.close() shutil.rmtree(path_name) ```
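The batch-plus-remainder arithmetic in `target_generate` is easy to get wrong: the remainder pass must start where the full batches stopped, not back at index 0, or the first images get processed twice. A minimal sketch with 83 hypothetical file names:

```python
file_list = [f"{i}.png" for i in range(83)]   # hypothetical files
batch_size = 20

batches = len(file_list) // batch_size   # 4 full batches
mod = len(file_list) % batch_size        # 3 leftover files

chunks = [file_list[i * batch_size:(i + 1) * batch_size] for i in range(batches)]
if mod:
    # Remainder slice starts at batches * batch_size, so nothing is reprocessed
    chunks.append(file_list[batches * batch_size:])

print([len(c) for c in chunks])   # [20, 20, 20, 20, 3]
```

Slicing past the end of a Python list is safe, so the final slice simply yields the leftover tail.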
<a href="https://colab.research.google.com/github/bruno-janota/DS-Unit-2-Linear-Models/blob/master/module1-regression-1/LS_DS_211.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> Lambda School Data Science *Unit 2, Sprint 1, Module 1* --- # Regression 1 - Begin with baselines for regression - Use scikit-learn to fit a linear regression - Explain the coefficients from a linear regression Brandon Rohrer wrote a good blog post, [“What questions can machine learning answer?”](https://brohrer.github.io/five_questions_data_science_answers.html) We’ll focus on two of these questions in Unit 2. These are both types of “supervised learning.” - “How Much / How Many?” (Regression) - “Is this A or B?” (Classification) This unit, you’ll build supervised learning models with “tabular data” (data in tables, like spreadsheets). Including, but not limited to: - Predict New York City real estate prices <-- **Today, we'll start this!** - Predict which water pumps in Tanzania need repairs - Choose your own labeled, tabular dataset, train a predictive model, and publish a blog post or web app with visualizations to explain your model! ### Setup Run the code cell below. You can work locally (follow the [local setup instructions](https://lambdaschool.github.io/ds/unit2/local/)) or on Colab. Libraries: - ipywidgets - pandas - plotly - scikit-learn ``` import sys # If you're on Colab: if 'google.colab' in sys.modules: DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Applied-Modeling/master/data/' # If you're working locally: else: DATA_PATH = '../data/' # Ignore this Numpy warning when using Plotly Express: # FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead. 
import warnings warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy') ``` # Begin with baselines for regression ## Overview ### Predict how much a NYC condo costs 🏠💸 Regression models output continuous numbers, so we can use regression to answer questions like "How much?" or "How many?" Often, the question is "How much will this cost? How many dollars?" For example, here's a fun YouTube video, which we'll use as our scenario for this lesson: [Amateurs & Experts Guess How Much a NYC Condo With a Private Terrace Costs](https://www.youtube.com/watch?v=JQCctBOgH9I) > Real Estate Agent Leonard Steinberg just sold a pre-war condo in New York City's Tribeca neighborhood. We challenged three people - an apartment renter, an apartment owner and a real estate expert - to try to guess how much the apartment sold for. Leonard reveals more and more details to them as they refine their guesses. The condo from the video is **1,497 square feet**, built in 1852, and is in a desirable neighborhood. According to the real estate agent, _"Tribeca is known to be one of the most expensive ZIP codes in all of the United States of America."_ How can we guess what this condo sold for? Let's look at 3 methods: 1. Heuristics 2. Descriptive Statistics 3. Predictive Model ## Follow Along ### 1. Heuristics Heuristics are "rules of thumb" that people use to make decisions and judgments. The video participants discussed their heuristics: **Participant 1**, Chinwe, is a real estate amateur. She rents her apartment in New York City. Her first guess was \$8 million, and her final guess was \$15 million. [She said](https://youtu.be/JQCctBOgH9I?t=465), _"People just go crazy for numbers like 1852. You say **'pre-war'** to anyone in New York City, they will literally sell a kidney. They will just give you their children."_ **Participant 3**, Pam, is an expert. She runs a real estate blog. Her first guess was 1.55 million, and her final guess was 2.2 million.
[She explained](https://youtu.be/JQCctBOgH9I?t=280) her first guess: _"I went with a number that I think is kind of the going rate in the location, and that's **a thousand bucks a square foot.**"_ **Participant 2**, Mubeen, is between the others in his expertise level. He owns his apartment in New York City. His first guess was 1.7 million, and his final guess was also 2.2 million. ### 2. Descriptive Statistics We can use data to try to do better than these heuristics. How much have other Tribeca condos sold for? Let's answer this question with a relevant dataset, containing most of the single residential unit, elevator apartment condos sold in Tribeca, from January through April 2019. We can get descriptive statistics for the dataset's `SALE_PRICE` column. How many condo sales are in this dataset? What was the average sale price? The median? Minimum? Maximum? ``` import pandas as pd df = pd.read_csv(DATA_PATH+'condos/tribeca.csv') pd.options.display.float_format = '{:,.0f}'.format df['SALE_PRICE'].describe() ``` On average, condos in Tribeca have sold for \$3.9 million. So that could be a reasonable first guess. In fact, here's the interesting thing: **we could use this one number as a "prediction", if we didn't have any data except for sales price...** Imagine we didn't have any other information about condos: what would you tell somebody if you had some sales prices like this but none of these other columns? If somebody asked you, "How much do you think a condo in Tribeca costs?" You could say, "Well, I've got 90 sales prices here, and I see that on average they cost \$3.9 million." So we do this all the time in the real world. We use descriptive statistics for prediction. And that's not wrong or bad, in fact **that's where you should start. This is called the _mean baseline_.** **Baseline** is an overloaded term, with multiple meanings: 1.
[**The score you'd get by guessing**](https://twitter.com/koehrsen_will/status/1088863527778111488) 2. [**Fast, first models that beat guessing**](https://blog.insightdatascience.com/always-start-with-a-stupid-model-no-exceptions-3a22314b9aaa) 3. **Complete, tuned "simpler" model** (Simpler mathematically, computationally. Or less work for you, the data scientist.) 4. **Minimum performance that "matters"** to go to production and benefit your employer and the people you serve. 5. **Human-level performance** Baseline type #1 is what we're doing now. Linear models can be great for #2, 3, 4, and [sometimes even #5 too!](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.188.5825) --- Let's go back to our mean baseline for Tribeca condos. If we just guessed that every Tribeca condo sold for \$3.9 million, how far off would we be, on average? ``` guess = df['SALE_PRICE'].mean() errors = guess - df['SALE_PRICE'] mean_absolute_error = errors.abs().mean() print(f'If we just guessed every Tribeca condo sold for ${guess:,.0f},') print(f'we would be off by ${mean_absolute_error:,.0f} on average.') ``` That sounds like a lot of error! But fortunately, we can do better than this first baseline — we can use more data. For example, the condo's size. Could sale price be **dependent** on square feet? To explore this relationship, let's make a scatterplot, using [Plotly Express](https://plot.ly/python/plotly-express/): ``` import plotly.express as px px.scatter(df, x='GROSS_SQUARE_FEET', y='SALE_PRICE') ``` ### 3. 
Predictive Model To go from a _descriptive_ [scatterplot](https://www.plotly.express/plotly_express/#plotly_express.scatter) to a _predictive_ regression, just add a _line of best fit:_ ``` px.scatter(df, x='GROSS_SQUARE_FEET', y='SALE_PRICE', trendline='ols') df.SALE_PRICE.mean() df.SALE_PRICE.std() df.SALE_PRICE.describe() import seaborn as sns sns.boxplot(df.SALE_PRICE) ``` Roll over the Plotly regression line to see its equation and predictions for sale price, dependent on gross square feet. Linear Regression helps us **interpolate.** For example, in this dataset, there's a gap between 4016 sq ft and 4663 sq ft. There were no 4300 sq ft condos sold, but what price would you predict, using this line of best fit? Linear Regression also helps us **extrapolate.** For example, in this dataset, there were no 6000 sq ft condos sold, but what price would you predict? The line of best fit tries to summarize the relationship between our x variable and y variable in a way that enables us to use the equation for that line to make predictions. **Synonyms for "y variable"** - **Dependent Variable** - Response Variable - Outcome Variable - Predicted Variable - Measured Variable - Explained Variable - **Label** - **Target** **Synonyms for "x variable"** - **Independent Variable** - Explanatory Variable - Regressor - Covariate - Correlate - **Feature** The bolded terminology will be used most often by your instructors this unit. ## Challenge In your assignment, you will practice how to begin with baselines for regression, using a new dataset! # Use scikit-learn to fit a linear regression ## Overview We can use visualization libraries to do simple linear regression ("simple" means there's only one independent variable). But during this unit, we'll usually use the scikit-learn library for predictive models, and we'll usually have multiple independent variables. 
In [_Python Data Science Handbook,_ Chapter 5.2: Introducing Scikit-Learn](https://jakevdp.github.io/PythonDataScienceHandbook/05.02-introducing-scikit-learn.html#Basics-of-the-API), Jake VanderPlas explains **how to structure your data** for scikit-learn: > The best way to think about data within Scikit-Learn is in terms of tables of data. > > ![](https://jakevdp.github.io/PythonDataScienceHandbook/figures/05.02-samples-features.png) > >The features matrix is often stored in a variable named `X`. The features matrix is assumed to be two-dimensional, with shape `[n_samples, n_features]`, and is most often contained in a NumPy array or a Pandas `DataFrame`. > >We also generally work with a label or target array, which by convention we will usually call `y`. The target array is usually one dimensional, with length `n_samples`, and is generally contained in a NumPy array or Pandas `Series`. The target array may have continuous numerical values, or discrete classes/labels. > >The target array is the quantity we want to _predict from the data:_ in statistical terms, it is the dependent variable. VanderPlas also lists a **5 step process** for scikit-learn's "Estimator API": > Every machine learning algorithm in Scikit-Learn is implemented via the Estimator API, which provides a consistent interface for a wide range of machine learning applications. > > Most commonly, the steps in using the Scikit-Learn estimator API are as follows: > > 1. Choose a class of model by importing the appropriate estimator class from Scikit-Learn. > 2. Choose model hyperparameters by instantiating this class with desired values. > 3. Arrange data into a features matrix and target vector following the discussion above. > 4. Fit the model to your data by calling the `fit()` method of the model instance. > 5. Apply the Model to new data: For supervised learning, often we predict labels for unknown data using the `predict()` method. Let's try it! 
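Before running the real thing below, here is a minimal sketch of what step 4 (`fit`) computes for a single feature: the ordinary least squares slope and intercept. It uses plain numpy and synthetic data, so the numbers (a true slope of 1000 and intercept of 50,000, echoing the thousand-bucks-a-square-foot heuristic) are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "square feet" and "price": price = 1000 * sqft + 50_000 + noise
sqft = rng.uniform(500, 3000, size=200)
price = 1000 * sqft + 50_000 + rng.normal(0, 10_000, size=200)

# Closed-form simple linear regression, the problem LinearRegression.fit solves
slope = np.cov(sqft, price, ddof=1)[0, 1] / np.var(sqft, ddof=1)
intercept = price.mean() - slope * sqft.mean()
print(f"slope ~ {slope:.1f}, intercept ~ {intercept:,.0f}")

# Step 5 analogue: predict for a new observation
print(f"predicted price for 1497 sq ft: {slope * 1497 + intercept:,.0f}")
```

The recovered slope and intercept land close to the true generating values, which is exactly the "line of best fit" from the scatterplot earlier.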
## Follow Along

Follow the 5 step process, and refer to the [Scikit-Learn LinearRegression documentation](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html).

```
# 1. Import the appropriate estimator class from Scikit-Learn
from sklearn.linear_model import LinearRegression

# 2. Instantiate this class
model = LinearRegression()

# 3. Arrange X features matrix & y target vector
features = ['GROSS_SQUARE_FEET']
target = 'SALE_PRICE'
X = df[features]
y = df[target]
print(X.shape, y.shape)

# 4. Fit the model
model.fit(X, y)

# 5. Apply the model to new data
sq_feet = 1497
X_test = [[sq_feet]]
y_pred = model.predict(X_test)
print(f'Predicted price for {sq_feet} sq ft Tribeca condo: {y_pred[0]}')
```

So, we used scikit-learn to fit a linear regression, and predicted the sales price for a 1,497 square foot Tribeca condo, like the one from the video.

Now, what did that condo actually sell for?

___The final answer is revealed in [the video at 12:28](https://youtu.be/JQCctBOgH9I?t=748)!___

```
y_test = [2800000]
```

What was the error for our prediction, versus the video participants?

Let's use [scikit-learn's mean absolute error function](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.mean_absolute_error.html).
```
chinwe_final_guess = [15000000]
mubeen_final_guess = [2200000]
pam_final_guess = [2200000]

from sklearn.metrics import mean_absolute_error

mae = mean_absolute_error(y_test, y_pred)
print(f"Our model's error: {mae}")

mae = mean_absolute_error(y_test, chinwe_final_guess)
print(f"Chinwe's error: {mae}")

mae = mean_absolute_error(y_test, mubeen_final_guess)
print(f"Mubeen's and Pam's error: {mae}")

# Make predictions on the full dataset and report the MAE of the model
preds = model.predict(X)
mae = mean_absolute_error(y, preds)
print(f"Our model's MAE: ${mae}")
```

This [diagram](https://ogrisel.github.io/scikit-learn.org/sklearn-tutorial/tutorial/text_analytics/general_concepts.html#supervised-learning-model-fit-x-y) shows what we just did! Don't worry about understanding it all now. But can you start to match some of these boxes/arrows to the corresponding lines of code from above?

<img src="https://ogrisel.github.io/scikit-learn.org/sklearn-tutorial/_images/plot_ML_flow_chart_12.png" width="75%">

Here's [another diagram](https://livebook.manning.com/book/deep-learning-with-python/chapter-1/), which shows how machine learning is a "new programming paradigm":

<img src="https://pbs.twimg.com/media/ECQDlFOWkAEJzlY.jpg" width="70%">

> A machine learning system is "trained" rather than explicitly programmed. It is presented with many "examples" relevant to a task, and it finds statistical structure in these examples which eventually allows the system to come up with rules for automating the task. —[Francois Chollet](https://livebook.manning.com/book/deep-learning-with-python/chapter-1/)

Wait, are we saying that *linear regression* could be considered a *machine learning algorithm*? Maybe it depends? What do you think? We'll discuss throughout this unit.

## Challenge

In your assignment, you will use scikit-learn for linear regression with one feature. For a stretch goal, you can do linear regression with two or more features.
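`mean_absolute_error` is nothing more than the average of the absolute residuals, so you can sanity-check it by hand. A quick sketch with toy numbers (hypothetical values, not the condo data):

```python
# Toy true values and predictions (hypothetical, not the condo dataset).
y_true = [2_800_000, 1_500_000, 900_000]
y_pred = [2_500_000, 1_600_000, 1_000_000]

# MAE = mean of |actual - predicted|, the same quantity that
# sklearn.metrics.mean_absolute_error returns for these inputs.
mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)
print(mae)  # about $166,667 off, on average
```

Because the errors are averaged in dollars, MAE is easy to explain to non-technical stakeholders: "our predictions are off by about this much, on average."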
# Explain the coefficients from a linear regression

## Overview

What pattern did the model "learn", about the relationship between square feet & price?

## Follow Along

To help answer this question, we'll look at the `coef_` and `intercept_` attributes of the `LinearRegression` object. (Again, [here's the documentation](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html).)

```
model.coef_
model.intercept_
```

We can repeatedly apply the model to new/unknown data, and explain the coefficient:

```
def predict(square_feet):
    y_pred = model.predict([[square_feet]])
    estimate = y_pred[0]
    coefficient = model.coef_[0]
    result = f'${estimate:,.0f} estimated price for {square_feet:,.0f} square foot condo in Tribeca. '
    explanation = f'In this linear regression, each additional square foot adds ${coefficient:,.0f}.'
    return result + explanation

predict(1497)

# What does the model predict for low square footage?
predict(500)

# For high square footage?
predict(10000)

# Re-run the prediction function interactively
# Ipywidgets usually works on Colab, but not always
from ipywidgets import interact
interact(predict, square_feet=(600, 5000));
```

## Challenge

In your assignment, you will define a function to make new predictions and explain the model coefficient.

# Review

You'll practice these objectives when you do your assignment:

- Begin with baselines for regression
- Use scikit-learn to fit a linear regression
- Make new predictions and explain coefficients

You'll use another New York City real estate dataset. You'll predict how much it costs to rent an apartment, instead of how much it costs to buy a condo.

You've been provided with a separate notebook for your assignment, which has all the instructions and stretch goals. Good luck and have fun!
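The explanation string above leans on the identity `prediction = intercept_ + coef_ * x` for a simple linear regression. A standalone sketch of that identity with made-up fitted parameters (the numbers below are hypothetical, not the notebook's fitted model):

```python
# Hypothetical fitted parameters, standing in for model.intercept_
# and model.coef_[0] from the notebook.
intercept = -100_000.0
coef = 2_000.0  # dollars added per additional square foot

def predict_price(square_feet):
    # A fitted simple linear regression is just this line.
    return intercept + coef * square_feet

# One extra square foot changes the estimate by exactly `coef` dollars:
print(predict_price(1498) - predict_price(1497))  # 2000.0
```

This is why the coefficient is the natural "explanation" of the model: it is the constant marginal effect of the feature on the prediction.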
# Sources

#### NYC Real Estate

- Video: [Amateurs & Experts Guess How Much a NYC Condo With a Private Terrace Costs](https://www.youtube.com/watch?v=JQCctBOgH9I)
- Data: [NYC OpenData: NYC Citywide Rolling Calendar Sales](https://data.cityofnewyork.us/dataset/NYC-Citywide-Rolling-Calendar-Sales/usep-8jbt)
- Glossary: [NYC Department of Finance: Rolling Sales Data](https://www1.nyc.gov/site/finance/taxes/property-rolling-sales-data.page)

#### Baselines

- Will Koehrsen, ["One of the most important steps in a machine learning project is establishing a common sense baseline..."](https://twitter.com/koehrsen_will/status/1088863527778111488)
- Emmanuel Ameisen, [Always start with a stupid model, no exceptions](https://blog.insightdatascience.com/always-start-with-a-stupid-model-no-exceptions-3a22314b9aaa)
- Robyn M. Dawes, [The robust beauty of improper linear models in decision making](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.188.5825)

#### Plotly Express

- [Plotly Express](https://plot.ly/python/plotly-express/) examples
- [plotly_express.scatter](https://www.plotly.express/plotly_express/#plotly_express.scatter) docs

#### Scikit-Learn

- Jake VanderPlas, [_Python Data Science Handbook,_ Chapter 5.2: Introducing Scikit-Learn](https://jakevdp.github.io/PythonDataScienceHandbook/05.02-introducing-scikit-learn.html#Basics-of-the-API)
- Olivier Grisel, [Diagram](https://ogrisel.github.io/scikit-learn.org/sklearn-tutorial/tutorial/text_analytics/general_concepts.html#supervised-learning-model-fit-x-y)
- [sklearn.linear_model.LinearRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html)
- [sklearn.metrics.mean_absolute_error](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.mean_absolute_error.html)
# Running Speech-to-Text Service Operations

```
import azure.cognitiveservices.speech as speechsdk

# Creates an instance of a speech config with specified subscription key and service region.
# Replace with your own subscription key and region identifier from here: https://aka.ms/speech/sdkregion
speech_key, service_region = "196f2f318dc744049eafb9cf89631e42", "southcentralus"
speech_config = speechsdk.SpeechConfig(subscription=speech_key, region=service_region)

# Creates an audio configuration that points to an audio file.
# Replace with your own audio filename.
audio_filename = "narration.wav"
audio_input = speechsdk.audio.AudioConfig(filename=audio_filename)

# Creates a recognizer with the given settings
speech_recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, audio_config=audio_input)

print("Recognizing first result...")

# Starts speech recognition, and returns after a single utterance is recognized. The end of a
# single utterance is determined by listening for silence at the end or until a maximum of 15
# seconds of audio is processed. The task returns the recognition text as result.
# Note: Since recognize_once() returns only a single utterance, it is suitable only for single
# shot recognition like command or query.
# For long-running multi-utterance recognition, use start_continuous_recognition() instead.
result = speech_recognizer.recognize_once()

# Checks result.
if result.reason == speechsdk.ResultReason.RecognizedSpeech:
    print("Recognized: {}".format(result.text))
elif result.reason == speechsdk.ResultReason.NoMatch:
    print("No speech could be recognized: {}".format(result.no_match_details))
elif result.reason == speechsdk.ResultReason.Canceled:
    cancellation_details = result.cancellation_details
    print("Speech Recognition canceled: {}".format(cancellation_details.reason))
    if cancellation_details.reason == speechsdk.CancellationReason.Error:
        print("Error details: {}".format(cancellation_details.error_details))
```

# Running Text-to-Speech Service Operations

## Converting Text to Synthesized Speech

```
import azure.cognitiveservices.speech as speechsdk

# Creates an instance of a speech config with specified subscription key and service region.
# Replace with your own subscription key and service region (e.g., "westus").
speech_key, service_region = "196f2f318dc744049eafb9cf89631e42", "southcentralus"
speech_config = speechsdk.SpeechConfig(subscription=speech_key, region=service_region)

# Creates a speech synthesizer using the default speaker as audio output.
speech_synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)

# Receives a text from console input.
print("Type some text that you want to speak...")
text = input()

# Synthesizes the received text to speech.
# The synthesized speech is expected to be heard on the speaker with this line executed.
result = speech_synthesizer.speak_text_async(text).get()

# Checks result.
if result.reason == speechsdk.ResultReason.SynthesizingAudioCompleted:
    print("Speech synthesized to speaker for text [{}]".format(text))
elif result.reason == speechsdk.ResultReason.Canceled:
    cancellation_details = result.cancellation_details
    print("Speech synthesis canceled: {}".format(cancellation_details.reason))
    if cancellation_details.reason == speechsdk.CancellationReason.Error:
        if cancellation_details.error_details:
            print("Error details: {}".format(cancellation_details.error_details))
        print("Did you update the subscription info?")
```

## Converting Text to an Audio File

```
import azure.cognitiveservices.speech as speechsdk

# Replace with your own subscription key and region identifier from here: https://aka.ms/speech/sdkregion
speech_key, service_region = "196f2f318dc744049eafb9cf89631e42", "southcentralus"
speech_config = speechsdk.SpeechConfig(subscription=speech_key, region=service_region)

# Creates an audio configuration that points to an audio file.
# Replace with your own audio filename.
audio_filename = "helloworld.wav"
audio_output = speechsdk.audio.AudioOutputConfig(filename=audio_filename)

# Creates a synthesizer with the given settings
speech_synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config, audio_config=audio_output)

# Synthesizes the text to speech.
# Replace with your own text.
text = "Hello world!"
result = speech_synthesizer.speak_text_async(text).get()

# Checks result.
if result.reason == speechsdk.ResultReason.SynthesizingAudioCompleted:
    print("Speech synthesized to [{}] for text [{}]".format(audio_filename, text))
elif result.reason == speechsdk.ResultReason.Canceled:
    cancellation_details = result.cancellation_details
    print("Speech synthesis canceled: {}".format(cancellation_details.reason))
    if cancellation_details.reason == speechsdk.CancellationReason.Error:
        if cancellation_details.error_details:
            print("Error details: {}".format(cancellation_details.error_details))
        print("Did you update the subscription info?")
```

# Running Speech-to-Translated-Text Service Operations

```
import azure.cognitiveservices.speech as speechsdk

speech_key, service_region = "196f2f318dc744049eafb9cf89631e42", "southcentralus"

def translate_speech_to_text():
    # Creates an instance of a speech translation config with specified subscription key and service region.
    # Replace with your own subscription key and region identifier from here: https://aka.ms/speech/sdkregion
    translation_config = speechsdk.translation.SpeechTranslationConfig(subscription=speech_key, region=service_region)

    # Sets source and target languages.
    # Replace with the languages of your choice, from list found here: https://aka.ms/speech/sttt-languages
    fromLanguage = 'en-US'
    toLanguage = 'de'  # if you change the target language, also change the key used with result.translations below
    translation_config.speech_recognition_language = fromLanguage
    translation_config.add_target_language(toLanguage)

    # Creates a translation recognizer using the default microphone as input.
    recognizer = speechsdk.translation.TranslationRecognizer(translation_config=translation_config)

    # Starts translation, and returns after a single utterance is recognized. The end of a
    # single utterance is determined by listening for silence at the end or until a maximum of 15
    # seconds of audio is processed. It returns the recognized text as well as the translation.
    # Note: Since recognize_once() returns only a single utterance, it is suitable only for single
    # shot recognition like command or query.
    # For long-running multi-utterance recognition, use start_continuous_recognition() instead.
    print("Say something...")
    result = recognizer.recognize_once()

    # Check the result
    if result.reason == speechsdk.ResultReason.TranslatedSpeech:
        print("RECOGNIZED '{}': {}".format(fromLanguage, result.text))
        print("TRANSLATED into {}: {}".format(toLanguage, result.translations['de']))
    elif result.reason == speechsdk.ResultReason.RecognizedSpeech:
        print("RECOGNIZED: {} (text could not be translated)".format(result.text))
    elif result.reason == speechsdk.ResultReason.NoMatch:
        print("NOMATCH: Speech could not be recognized: {}".format(result.no_match_details))
    elif result.reason == speechsdk.ResultReason.Canceled:
        print("CANCELED: Reason={}".format(result.cancellation_details.reason))
        if result.cancellation_details.reason == speechsdk.CancellationReason.Error:
            print("CANCELED: ErrorDetails={}".format(result.cancellation_details.error_details))

translate_speech_to_text()
```

# Running Speech-to-Translated-Text Operations for Multiple Languages

```
import azure.cognitiveservices.speech as speechsdk

speech_key, service_region = "196f2f318dc744049eafb9cf89631e42", "southcentralus"

def translate_speech_to_text():
    # Creates an instance of a speech translation config with specified subscription key and service region.
    # Replace with your own subscription key and region identifier from here: https://aka.ms/speech/sdkregion
    translation_config = speechsdk.translation.SpeechTranslationConfig(subscription=speech_key, region=service_region)

    # Sets source and target languages.
    # Replace with the languages of your choice, from list found here: https://aka.ms/speech/sttt-languages
    fromLanguage = 'en-US'
    translation_config.speech_recognition_language = fromLanguage
    translation_config.add_target_language('de')
    translation_config.add_target_language('fr')

    # Creates a translation recognizer using the default microphone as input.
    recognizer = speechsdk.translation.TranslationRecognizer(translation_config=translation_config)

    # Starts translation, and returns after a single utterance is recognized. The end of a
    # single utterance is determined by listening for silence at the end or until a maximum of 15
    # seconds of audio is processed. It returns the recognized text as well as the translation.
    # Note: Since recognize_once() returns only a single utterance, it is suitable only for single
    # shot recognition like command or query.
    # For long-running multi-utterance recognition, use start_continuous_recognition() instead.
    print("Say something...")
    result = recognizer.recognize_once()

    # Check the result
    if result.reason == speechsdk.ResultReason.TranslatedSpeech:
        print("RECOGNIZED '{}': {}".format(fromLanguage, result.text))
        print("TRANSLATED into {}: {}".format('de', result.translations['de']))
        print("TRANSLATED into {}: {}".format('fr', result.translations['fr']))
    elif result.reason == speechsdk.ResultReason.RecognizedSpeech:
        print("RECOGNIZED: {} (text could not be translated)".format(result.text))
    elif result.reason == speechsdk.ResultReason.NoMatch:
        print("NOMATCH: Speech could not be recognized: {}".format(result.no_match_details))
    elif result.reason == speechsdk.ResultReason.Canceled:
        print("CANCELED: Reason={}".format(result.cancellation_details.reason))
        if result.cancellation_details.reason == speechsdk.CancellationReason.Error:
            print("CANCELED: ErrorDetails={}".format(result.cancellation_details.error_details))

translate_speech_to_text()
```

# Running Speech-to-Translated-Speech Service Operations

```
import azure.cognitiveservices.speech as speechsdk

speech_key, service_region = "196f2f318dc744049eafb9cf89631e42", "southcentralus"

def translate_speech_to_speech():
    # Creates an instance of a speech translation config with specified subscription key and service region.
    # Replace with your own subscription key and region identifier from here: https://aka.ms/speech/sdkregion
    translation_config = speechsdk.translation.SpeechTranslationConfig(subscription=speech_key, region=service_region)

    # Sets source and target languages.
    # Replace with the languages of your choice, from list found here: https://aka.ms/speech/sttt-languages
    fromLanguage = 'en-US'
    toLanguage = 'de'
    translation_config.speech_recognition_language = fromLanguage
    translation_config.add_target_language(toLanguage)

    # Sets the synthesis output voice name.
    # Replace with the languages of your choice, from list found here: https://aka.ms/speech/tts-languages
    translation_config.voice_name = "de-DE-Hedda"

    # Creates a translation recognizer using the default microphone as input.
    recognizer = speechsdk.translation.TranslationRecognizer(translation_config=translation_config)

    # Prepare to handle the synthesized audio data.
    def synthesis_callback(evt):
        size = len(evt.result.audio)
        print('AUDIO SYNTHESIZED: {} byte(s) {}'.format(size, '(COMPLETED)' if size == 0 else ''))

    recognizer.synthesizing.connect(synthesis_callback)

    # Starts translation, and returns after a single utterance is recognized. The end of a
    # single utterance is determined by listening for silence at the end or until a maximum of 15
    # seconds of audio is processed. It returns the recognized text as well as the translation.
    # Note: Since recognize_once() returns only a single utterance, it is suitable only for single
    # shot recognition like command or query.
    # For long-running multi-utterance recognition, use start_continuous_recognition() instead.
    print("Say something...")
    result = recognizer.recognize_once()

    # Check the result
    if result.reason == speechsdk.ResultReason.TranslatedSpeech:
        print("RECOGNIZED '{}': {}".format(fromLanguage, result.text))
        print("TRANSLATED into {}: {}".format(toLanguage, result.translations['de']))
    elif result.reason == speechsdk.ResultReason.RecognizedSpeech:
        print("RECOGNIZED: {} (text could not be translated)".format(result.text))
    elif result.reason == speechsdk.ResultReason.NoMatch:
        print("NOMATCH: Speech could not be recognized: {}".format(result.no_match_details))
    elif result.reason == speechsdk.ResultReason.Canceled:
        print("CANCELED: Reason={}".format(result.cancellation_details.reason))
        if result.cancellation_details.reason == speechsdk.CancellationReason.Error:
            print("CANCELED: ErrorDetails={}".format(result.cancellation_details.error_details))

translate_speech_to_speech()
```

# Running Text Language Detection Service Operations

```
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient

key = "bdb7d45b308f4851bd1b8cae9a1d3453"
endpoint = "https://test0524.cognitiveservices.azure.com/"

text_analytics_client = TextAnalyticsClient(endpoint=endpoint, credential=AzureKeyCredential(key))

documents = [
    "This document is written in English.",
    "Este es un document escrito en Español.",
    "这是一个用中文写的文件",
    "Dies ist ein Dokument in deutsche Sprache.",
    "Detta är ett dokument skrivet på engelska."
]

result = text_analytics_client.detect_language(documents)

for idx, doc in enumerate(result):
    if not doc.is_error:
        print("Document text: {}".format(documents[idx]))
        print("Language detected: {}".format(doc.primary_language.name))
        print("ISO6391 name: {}".format(doc.primary_language.iso6391_name))
        print("Confidence score: {}\n".format(doc.primary_language.confidence_score))
    if doc.is_error:
        print(doc.id, doc.error)
```

# Running Key Phrase Extraction Service Operations

```
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient

key = "bdb7d45b308f4851bd1b8cae9a1d3453"
endpoint = "https://test0524.cognitiveservices.azure.com/"

text_analytics_client = TextAnalyticsClient(endpoint=endpoint, credential=AzureKeyCredential(key))

documents = [
    "Redmond is a city in King County, Washington, United States, located 15 miles east of Seattle.",
    "I need to take my cat to the veterinarian.",
    "I will travel to South America in the summer.",
]

result = text_analytics_client.extract_key_phrases(documents)

for doc in result:
    if not doc.is_error:
        print(doc.key_phrases)
    if doc.is_error:
        print(doc.id, doc.error)
```

# Running Entity Recognition Service Operations

## Entity Recognition

```
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient

key = "bdb7d45b308f4851bd1b8cae9a1d3453"
endpoint = "https://test0524.cognitiveservices.azure.com/"

text_analytics_client = TextAnalyticsClient(endpoint=endpoint, credential=AzureKeyCredential(key))

documents = [
    "Microsoft was founded by Bill Gates and Paul Allen.",
    "I had a wonderful trip to Seattle last week.",
    "I visited the Space Needle 2 times.",
]

result = text_analytics_client.recognize_entities(documents)
docs = [doc for doc in result if not doc.is_error]

for idx, doc in enumerate(docs):
    print("\nDocument text: {}".format(documents[idx]))
    for entity in doc.entities:
        print("Entity: \t", entity.text, "\tCategory: \t", entity.category,
              "\tConfidence Score: \t", entity.confidence_score)
```

## Entity Linking

```
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient

key = "bdb7d45b308f4851bd1b8cae9a1d3453"
endpoint = "https://test0524.cognitiveservices.azure.com/"

text_analytics_client = TextAnalyticsClient(endpoint=endpoint, credential=AzureKeyCredential(key))

documents = [
    "Microsoft moved its headquarters to Bellevue, Washington in January 1979.",
    "Steve Ballmer stepped down as CEO of Microsoft and was succeeded by Satya Nadella.",
    "Microsoft superó a Apple Inc. como la compañía más valiosa que cotiza en bolsa en el mundo.",
]

result = text_analytics_client.recognize_linked_entities(documents)
docs = [doc for doc in result if not doc.is_error]

for idx, doc in enumerate(docs):
    print("Document text: {}\n".format(documents[idx]))
    for entity in doc.entities:
        print("Entity: {}".format(entity.name))
        print("Url: {}".format(entity.url))
        print("Data Source: {}".format(entity.data_source))
        for match in entity.matches:
            print("Confidence Score: {}".format(match.confidence_score))
            print("Entity as appears in request: {}".format(match.text))
    print("------------------------------------------")
```

# Running Text Translation Service Operations

```
# -*- coding: utf-8 -*-
import os, requests, uuid, json

subscription_key = 'ab93f8c61e174973818ac06706a5a5d5'  # your key
endpoint = 'https://api.cognitive.microsofttranslator.com/'

# key_var_name = 'TRANSLATOR_TEXT_SUBSCRIPTION_KEY'
# if not key_var_name in os.environ:
#     raise Exception('Please set/export the environment variable: {}'.format(key_var_name))
# subscription_key = os.environ[key_var_name]

# endpoint_var_name = 'TRANSLATOR_TEXT_ENDPOINT'
# if not endpoint_var_name in os.environ:
#     raise Exception('Please set/export the environment variable: {}'.format(endpoint_var_name))
# endpoint = os.environ[endpoint_var_name]

path = '/translate?api-version=3.0'
# Output language setting
params = '&to=de&to=it'
constructed_url = endpoint + path + params

headers = {
    'Ocp-Apim-Subscription-Key': subscription_key,
    'Content-type': 'application/json',
    'X-ClientTraceId': str(uuid.uuid4())
}

body = [{
    'text': 'Hello World!'
}]

request = requests.post(constructed_url, headers=headers, json=body)
response = request.json()

print(json.dumps(response, sort_keys=True, indent=4, ensure_ascii=False, separators=(',', ': ')))
```

# Running LUIS Intent Recognition Service Operations

## REST API

```
import requests

try:
    key = '8286e59fe6f54ab9826222300bbdcb11'  # your Runtime key
    endpoint = 'westus.api.cognitive.microsoft.com'  # such as 'your-resource-name.api.cognitive.microsoft.com'
    appId = 'df67dcdb-c37d-46af-88e1-8b97951ca1c2'
    utterance = 'turn on all lights'

    headers = {
    }

    params = {
        'query': utterance,
        'timezoneOffset': '0',
        'verbose': 'true',
        'show-all-intents': 'true',
        'spellCheck': 'false',
        'staging': 'false',
        'subscription-key': key
    }

    r = requests.get(f'https://{endpoint}/luis/prediction/v3.0/apps/{appId}/slots/production/predict',
                     headers=headers, params=params)
    print(r.json())
except Exception as e:
    print(f'{e}')
```

## SDK

```
from azure.cognitiveservices.language.luis.runtime import LUISRuntimeClient
from msrest.authentication import CognitiveServicesCredentials
import datetime, json, os, time

# Use public app ID or replace with your own trained and published app's ID
# to query your own app
# public appID = 'df67dcdb-c37d-46af-88e1-8b97951ca1c2'
luisAppID = 'dcb2cb33-dee6-46c1-a3a6-28e266d159e0'

runtime_key = '8286e59fe6f54ab9826222300bbdcb11'
runtime_endpoint = 'https://westus.api.cognitive.microsoft.com/'

# production or staging
luisSlotName = 'production'

# Instantiate a LUIS runtime client
clientRuntime = LUISRuntimeClient(runtime_endpoint, CognitiveServicesCredentials(runtime_key))

def predict(app_id, slot_name):
    request = {"query": "hi, show me lovely baby pictures"}

    # Note: be sure to specify, using the slot_name parameter, whether your
    # application is in staging or production.
    response = clientRuntime.prediction.get_slot_prediction(app_id=app_id, slot_name=slot_name,
                                                            prediction_request=request)

    print("Top intent: {}".format(response.prediction.top_intent))
    print("Sentiment: {}".format(response.prediction.sentiment))
    print("Intents: ")
    for intent in response.prediction.intents:
        print("\t{}".format(json.dumps(intent)))
    print("Entities: {}".format(response.prediction.entities))

predict(luisAppID, luisSlotName)
```
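Every Speech SDK snippet above finishes with the same `result.reason` dispatch. A service-free sketch of that pattern using stand-in types (the `Reason` and `Result` classes below are hypothetical mocks for illustration, not part of the Azure SDK, and no Azure calls are made):

```python
from enum import Enum, auto
from dataclasses import dataclass

class Reason(Enum):
    # Mock of the ResultReason members the notebooks branch on.
    RecognizedSpeech = auto()
    NoMatch = auto()
    Canceled = auto()

@dataclass
class Result:
    # Mock result object: just enough fields for the dispatch.
    reason: Reason
    text: str = ""

def handle(result):
    # The same branching the notebooks apply after every recognize call.
    if result.reason == Reason.RecognizedSpeech:
        return "Recognized: {}".format(result.text)
    elif result.reason == Reason.NoMatch:
        return "No speech could be recognized"
    else:
        return "Canceled"

print(handle(Result(Reason.RecognizedSpeech, "hello world")))  # Recognized: hello world
```

Checking the reason before touching `result.text` or the cancellation details is what keeps these scripts from crashing when the service returns no match or an error.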
```
import numpy as np
import gym
import k3d
from ratelimiter import RateLimiter
from k3d.platonic import Cube
from time import time

rate_limiter = RateLimiter(max_calls=4, period=1)

env = gym.make('CartPole-v0')
observation = env.reset()

plot = k3d.plot(grid_auto_fit=False, camera_auto_fit=False, grid=(-1, -1, -1, 1, 1, 1))

joint_positions = np.array([observation[0], 0, 0], dtype=np.float32)
pole_positions = joint_positions + np.array([np.sin(observation[2]), 0, np.cos(observation[2])], dtype=np.float32)

cart = Cube(origin=joint_positions, size=0.1).mesh
cart.scaling = [1, 0.5, 1]
joint = k3d.points(np.mean(cart.vertices[[0, 2, 4, 6]], axis=0), point_size=0.03, color=0xff00, shader='mesh')
pole = k3d.line(vertices=np.array([joint.positions, pole_positions]), shader='mesh', color=0xff0000)
box = cart.vertices
mass = k3d.points(pole_positions, point_size=0.03, color=0xff0000, shader='mesh')

plot += pole + cart + joint + mass
plot.display()

for i_episode in range(20):
    observation = env.reset()
    for t in range(100):
        with rate_limiter:
            joint_positions = np.array([observation[0], 0, 0], dtype=np.float32)
            pole_positions = joint_positions + np.array([np.sin(observation[2]), 0, np.cos(observation[2])], dtype=np.float32)
            cart.vertices = box + joint_positions
            joint.positions = np.mean(cart.vertices[[0, 2, 4, 6]], axis=0)
            pole.vertices = [joint.positions, pole_positions]
            mass.positions = pole_positions
        action = env.action_space.sample()
        observation, reward, done, info = env.step(action)
        if done:
            break

plot.display()

for i_episode in range(20):
    observation = env.reset()
    for t in range(100):
        joint_positions = np.array([observation[0], 0, 0], dtype=np.float32)
        pole_positions = joint_positions + np.array([np.sin(observation[2]), 0, np.cos(observation[2])], dtype=np.float32)
        with rate_limiter:
            cart.vertices = box + joint_positions
            joint.positions = np.mean(cart.vertices[[0, 2, 4, 6]], axis=0)
            pole.vertices = [joint.positions, pole_positions]
            mass.positions = pole_positions
        action = env.action_space.sample()
        observation, reward, done, info = env.step(action)
        if done:
            break

max_calls, period = 3, 1
call_time = period / max_calls

for i_episode in range(20):
    observation = env.reset()
    for t in range(100):
        joint_positions = np.array([observation[0], 0, 0], dtype=np.float32)
        pole_positions = joint_positions + np.array([np.sin(observation[2]), 0, np.cos(observation[2])], dtype=np.float32)
        time_stamp2 = time()
        if t > 0:
            d = time_stamp2 - time_stamp1
            if d < call_time:
                cart.vertices = box + joint_positions
                joint.positions = np.mean(cart.vertices[[0, 2, 4, 6]], axis=0)
                pole.vertices = [joint.positions, pole_positions]
                mass.positions = pole_positions
        if t == 0:
            cart.vertices = box + joint_positions
            joint.positions = np.mean(cart.vertices[[0, 2, 4, 6]], axis=0)
            pole.vertices = [joint.positions, pole_positions]
            mass.positions = pole_positions
        time_stamp1 = time()
        action = env.action_space.sample()
        observation, reward, done, info = env.step(action)
        if done:
            break

max_calls, period = 3, 1
call_time = period / max_calls
i = 1
all_it_time = 0
cache = []
iterator = []

for i_episode in range(20):
    cache.append([])
    observation = env.reset()
    for t in range(100):
        ts1 = time()
        joint_positions = np.array([observation[0], 0, 0], dtype=np.float32)
        pole_positions = joint_positions + np.array([np.sin(observation[2]), 0, np.cos(observation[2])], dtype=np.float32)
        # [cart.vertices, joint.positions, pole.vertices, mass.positions]
        cache[i_episode].append([box + joint_positions,
                                 np.mean((box + joint_positions)[[0, 2, 4, 6]], axis=0),
                                 [np.mean((box + joint_positions)[[0, 2, 4, 6]], axis=0), pole_positions],
                                 pole_positions])
        if all_it_time > call_time * i:
            i += 1
            iterator = iter(iterator)
            element = next(iterator)
            cart.vertices = element[0]
            joint.positions = element[1]
            pole.vertices = element[2]
            mass.positions = element[3]
        action = env.action_space.sample()
        observation, reward, done, info = env.step(action)
        ts2 = time()
        it_time = ts2 - ts1
        all_it_time += it_time
        if done:
            break
    temp_list = []
    to_pull = t // max_calls
    if max_calls > t:
        to_pull = 1
    for j in range(max_calls):
        temp_list.append(cache[i_episode][to_pull * i])
    iterator = list(iterator) + temp_list

del cache

for element in iterator:
    with RateLimiter(max_calls=max_calls):
        i += 1
        iterator = iter(iterator)
        element = next(iterator)
        cart.vertices = element[0]
        joint.positions = element[1]
        pole.vertices = element[2]
        mass.positions = element[3]

plot.display()
```
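The manual timestamp bookkeeping in the third loop can be factored into a small helper. A minimal sketch of drop-frame rate limiting with `time.monotonic`, independent of k3d, gym, and the `ratelimiter` package (the `FrameLimiter` class is an illustrative assumption, not from the notebook):

```python
import time

class FrameLimiter:
    """Allow at most max_calls updates per `period` seconds.

    Unlike a blocking rate limiter, allow() never sleeps: callers simply
    skip the render when it returns False, so frames are dropped rather
    than queued, and the simulation loop is never slowed down.
    """

    def __init__(self, max_calls, period=1.0):
        self.min_interval = period / max_calls
        self.last = float("-inf")

    def allow(self):
        now = time.monotonic()
        if now - self.last >= self.min_interval:
            self.last = now
            return True
        return False

limiter = FrameLimiter(max_calls=4, period=1.0)
# In a tight loop most calls return False; only ~4 per second pass.
rendered = sum(limiter.allow() for _ in range(100_000))
print(rendered)
```

Dropping frames this way keeps the plot updates at a steady rate without distorting the environment's timing, which is the same goal the timestamp logic above pursues inline.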
**This notebook is an exercise in the [Python](https://www.kaggle.com/learn/python) course. You can reference the tutorial at [this link](https://www.kaggle.com/colinmorris/loops-and-list-comprehensions).**

---

# Try It Yourself

With all you've learned, you can start writing much more interesting programs. See if you can solve the problems below.

As always, run the setup code below before working on the questions.

```
from learntools.core import binder; binder.bind(globals())
from learntools.python.ex5 import *
print('Setup complete.')
```

# Exercises

## 1.

Have you ever felt debugging involved a bit of luck? The following program has a bug. Try to identify the bug and fix it.

```
def has_lucky_number(nums):
    """Return whether the given list of numbers is lucky. A lucky list contains
    at least one number divisible by 7.
    """
    for num in nums:
        if num % 7 == 0:
            return True
        else:
            return False
```

Try to identify the bug and fix it in the cell below:

```
def has_lucky_number(nums):
    """Return whether the given list of numbers is lucky. A lucky list contains
    at least one number divisible by 7.
    """
    for num in nums:
        if num % 7 == 0:
            return True
    return False

# Check your answer
q1.check()

#q1.hint()
#q1.solution()
```

## 2.

### a.

Look at the Python expression below. What do you think we'll get when we run it? When you've made your prediction, uncomment the code and run the cell to see if you were right.

```
[1, 2, 3, 4] > 2
```

### b.

R and Python have some libraries (like numpy and pandas) that compare each element of the list to 2 (i.e. do an 'element-wise' comparison) and give us a list of booleans like `[False, False, True, True]`.

Implement a function that reproduces this behaviour, returning a list of booleans corresponding to whether the corresponding element is greater than `thresh`.

```
def elementwise_greater_than(L, thresh):
    """Return a list with the same length as L, where the value at index i is
    True if L[i] is greater than thresh, and False otherwise.
>>> elementwise_greater_than([1, 2, 3, 4], 2) [False, False, True, True] """ return [l > thresh for l in L] # Check your answer q2.check() #q2.solution() ``` ## 3. Complete the body of the function below according to its docstring. ``` def menu_is_boring(meals): """Given a list of meals served over some period of time, return True if the same meal has ever been served two days in a row, and False otherwise. """ for i in range(len(meals)-1): if meals[i] == meals[i+1]: return True return False # Check your answer q3.check() q3.hint() q3.solution() ``` ## 4. <span title="A bit spicy" style="color: darkgreen ">🌶️</span> Next to the Blackjack table, the Python Challenge Casino has a slot machine. You can get a result from the slot machine by calling `play_slot_machine()`. The number it returns is your winnings in dollars. Usually it returns 0. But sometimes you'll get lucky and get a big payday. Try running it below: ``` play_slot_machine() ``` By the way, did we mention that each play costs $1? Don't worry, we'll send you the bill later. On average, how much money can you expect to gain (or lose) every time you play the machine? The casino keeps it a secret, but you can estimate the average value of each pull using a technique called the **Monte Carlo method**. To estimate the average outcome, we simulate the scenario many times, and return the average result. Complete the following function to calculate the average value per play of the slot machine. ``` def estimate_average_slot_payout(n_runs): """Run the slot machine n_runs times and return the average net profit per run. Example calls (note that return value is nondeterministic!): >>> estimate_average_slot_payout(1) -1 >>> estimate_average_slot_payout(1) 0.5 """ return sum([play_slot_machine() - 1 for _ in range(n_runs)])/n_runs estimate_average_slot_payout(10000000) ``` When you think you know the expected value per spin, run the code cell below to view the solution and get credit for answering the question. 
``` # Check your answer (Run this code cell to receive credit!) q4.solution() ``` # Keep Going Many programmers report that dictionaries are their favorite data structure. You'll get to **[learn about them](https://www.kaggle.com/colinmorris/strings-and-dictionaries)** (as well as strings) in the next lesson. --- *Have questions or comments? Visit the [Learn Discussion forum](https://www.kaggle.com/learn-forum/161283) to chat with other Learners.*
---
```
import numpy as np
import pandas as pd

from sklearn.svm import SVC
from sklearn.decomposition import PCA
from sklearn.preprocessing import Imputer
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import train_test_split

DATA = 'dataset/loan_one_hot_encoded.csv'
drop_cols = ['loan_created',
             'application_id',
             # 'firm_type_Proprietorship',
             'average_business_inflow'
             ]

df = pd.read_csv(DATA)
Y = df['loan_created']
og_X = df.drop(drop_cols, axis=1)
```

Things to-do:

- [ ] The data is missing values; use Imputer to fill mean/median of the column
- [ ] Create another column to denote whether the data was imputed or not; I've read that it seems to have better results
- [ ] Set class_weights in SVM
- [ ] Tune hyperparameters
- [ ] Specific kernel?

What to do about skewed data:

- See as an anomaly detection problem?
- class weights (for SVM)
- Remove training data (less data anyway..:( )

```
imp = Imputer()
imputed_X = imp.fit_transform(og_X)
# X = imputed_X

scl = StandardScaler()
X = scl.fit_transform(imputed_X)

# pd.value_counts(og_X['firm_type_Proprietorship'])
X.shape

reverse_Y = Y.apply(lambda x: 0 if x == 1 else 1)
np.unique(Y, return_counts=True)
```

### ensembles

```
from sklearn.ensemble import BaggingClassifier
from sklearn.ensemble import AdaBoostClassifier

clf = SVC(**{'C': 10, 'class_weight': 'balanced', 'degree': 3, 'gamma': 'auto', 'kernel': 'sigmoid', 'probability': True})
ens_clf = BaggingClassifier(clf)
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.33, random_state=42)
ens_clf.fit(X_train, y_train)
ens_clf.predict(X_test)

clf = SVC(**{'C': 10, 'class_weight': 'balanced', 'degree': 3, 'gamma': 'auto', 'kernel': 'sigmoid', 'probability': True})
ens_clf = AdaBoostClassifier(clf)
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.33, random_state=42)
ens_clf.fit(X_train, y_train)
ens_clf.predict(X_test)
```

### anomaly detection

```
from sklearn.svm import OneClassSVM
from sklearn.metrics import accuracy_score
from sklearn.metrics import make_scorer

# k_fold is used here but was originally only defined in a later cell;
# define it up front so this cell runs on its own.
k_fold = 6

ad_clf = OneClassSVM(kernel="rbf")
scores = cross_val_score(ad_clf, X, [_ if _ == 1 else -1 for _ in Y], cv=k_fold, scoring=make_scorer(accuracy_score))
print(scores)
print(np.average(scores))

X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.33, random_state=42)
ad_clf = OneClassSVM(nu=0.7)
ad_clf.fit(X_train, y_train)
y_predict = ad_clf.predict(X_test)
print(accuracy_score([x if x == 1 else -1 for x in y_test], y_predict))
print(y_predict)

from sklearn.model_selection import GridSearchCV

param_grid = [
    {'nu': np.arange(.1, 1.0, 0.1), 'gamma': ['auto'], 'kernel': ['rbf']},
]
gs_cv = GridSearchCV(OneClassSVM(), param_grid=param_grid, scoring=make_scorer(accuracy_score), cv=5, refit=True)
gs_cv.fit(X, [_ if _ == 1 else -1 for _ in Y])

import pandas as pd
pd.DataFrame(gs_cv.cv_results_)
gs_cv.best_score_
gs_cv.best_params_
```

### normal svm and kernel selection

```
k_fold = 6

kernel = 'poly'
print('Kernel: ', kernel)
clf = SVC(kernel=kernel, class_weight='balanced')
np.average(cross_val_score(clf, X, Y, cv=k_fold))

kernel = 'rbf'
print('Kernel: ', kernel)
clf = SVC(kernel=kernel, class_weight='balanced')
np.average(cross_val_score(clf, X, Y, cv=k_fold))

kernel = 'sigmoid'
print('Kernel: ', kernel)
clf = SVC(kernel=kernel, class_weight='balanced')
np.average(cross_val_score(clf, X, Y, cv=k_fold))

# kernel = 'precomputed'
# print('Kernel: ', kernel)
# clf = SVC(kernel=kernel, class_weight='balanced')
# cross_val_score(clf, X, Y, cv=k_fold)

# n_samples / (n_classes * np.bincount(y))
X.shape[0] / (2*np.bincount(Y))

X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.33, random_state=42)
clf = SVC(kernel='poly', class_weight='balanced', probability=True)
clf.fit(X_train, y_train)
print(clf.predict(X_test))
clf.predict_proba(X_test)[:,1]

X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.33, random_state=42)
clf = SVC(kernel='poly', class_weight='balanced')
clf.fit(X_train, y_train)
clf.predict(X_test)
# clf.score(X_test, y_test)

p = PCA(n_components=160)
p.fit(X)
percents = np.round((p.explained_variance_ratio_), 3)
_sum = 0
# print(np.sum(percents))
j = 0
for i in range(21):
    s = np.sum(percents[j: j+5])*100
    _sum += s
    print(j, s, _sum)
    j += 5

pca = PCA(n_components=60)
pca.fit(X)
print(np.round((pca.explained_variance_ratio_), 3))
# pca.components_

# ['PC-1','PC-2','PC-3','PC-4','PC-5','PC-6']
coef = pca.transform(np.eye(X.shape[1]))
print(np.linalg.norm(coef, axis=0))
_p = pd.DataFrame(coef, columns=range(1, 61), index=og_X.columns)
abs(_p).idxmax()
# pd.value_counts(abs(_p).idxmax(axis=1))

# clf = SVC(kernel='poly', class_weight='balanced')
# X_train, X_test, y_train, y_test = train_test_split(pca.transform(X), Y, test_size=0.33, random_state=42)
# clf.fit(X_train, y_train)
# clf.predict(X_test), clf.score(X_test, y_test)
# # cross_val_score(clf, pca.transform(X), Y, cv=k_fold)

pca
```
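The to-do list at the top of this notebook calls for imputing missing values and adding a column that flags which rows were imputed. In current scikit-learn, where the old `Imputer` class used above was replaced, `SimpleImputer` with `add_indicator=True` covers both items in one step. A minimal sketch on toy data, not on the loan dataset:

```python
import numpy as np
from sklearn.impute import SimpleImputer

# Toy matrix with one missing value in each column.
X_toy = np.array([[1.0, np.nan],
                  [3.0, 4.0],
                  [np.nan, 8.0]])

# add_indicator=True appends one binary column per feature that had missing
# values, marking which rows were imputed in that feature.
imp = SimpleImputer(strategy="mean", add_indicator=True)
X_imp = imp.fit_transform(X_toy)

print(X_imp.shape)  # 2 original columns + 2 indicator columns
print(X_imp)
```

The indicator columns can then feed the classifier alongside the imputed features, which is the "imputed-or-not" signal the to-do list mentions.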
---
<img align="left" src="https://lever-client-logos.s3.amazonaws.com/864372b1-534c-480e-acd5-9711f850815c-1524247202159.png" width=200>
<br></br>
<br></br>

## *Data Science Unit 1 Sprint 3 Assignment 1*

# Apply the t-test to real data

Your assignment is to determine which issues have "statistically significant" differences between political parties in this [1980s congressional voting data](https://archive.ics.uci.edu/ml/datasets/Congressional+Voting+Records). The data consists of 435 instances (one for each congressperson), a class (democrat or republican), and 16 binary attributes (yes or no for voting for or against certain issues). Be aware - there are missing values!

Your goals:

1. Load and clean the data (or determine the best method to drop observations when running tests)
2. Using hypothesis testing, find an issue that democrats support more than republicans with p < 0.01
3. Using hypothesis testing, find an issue that republicans support more than democrats with p < 0.01
4. Using hypothesis testing, find an issue where the difference between republicans and democrats has p > 0.1 (i.e. there may not be much of a difference)

Note that this data will involve *2 sample* t-tests, because you're comparing averages across two groups (republicans and democrats) rather than a single group against a null hypothesis.

Stretch goals:

1. Refactor your code into functions so it's easy to rerun with arbitrary variables
2. Apply hypothesis testing to your personal project data (for the purposes of this notebook you can type a summary of the hypothesis you formed and tested)

```
import numpy as np
import pandas as pd
import seaborn as sns
from matplotlib import style
from scipy.stats import ttest_ind, ttest_ind_from_stats, ttest_rel

df = pd.read_csv('house-votes-84.data', header=None, na_values='?')
df.head()
df.shape

df = df[df != '?']
df.isna().sum()
df.dropna(inplace=True)
df.isna().sum()
df.head()
df.shape

df.columns.tolist()
df = df.rename(columns={
    0: 'Party',
    1: 'handicapped-infants',
    2: 'water-project-cost-sharing',
    3: 'adoption-of-the-budget-resolution',
    4: 'physician-fee-freeze',
    5: 'el-salvador-aid',
    6: 'religious-groups-in-schools',
    7: 'anti-satellite-test-ban',
    8: 'aid-to-nicaraguan-contras',
    9: 'mx-missile',
    10: 'immigration',
    11: 'synfuels-corporation-cutback',
    12: 'education-spending',
    13: 'superfund-right-to-sue',
    14: 'crime',
    15: 'duty-free-exports',
    16: 'export-administration-act-south-africa',
})
df.head()

df = df.replace(['y','n'], [1,0])
df.head()

democrats = df[df['Party']=='democrat']
republicans = df[df['Party']=='republican']
republicans['handicapped-infants'].describe()

stat, pvalue = ttest_ind(democrats['handicapped-infants'], republicans['handicapped-infants'])
print('{}, {}'.format(stat, pvalue))
#2.0722024876891192e-09

def ttest(title, sample1, sample2, alpha):
    stat, pvalue = ttest_ind(sample1, sample2)
    title = title.replace('-',' ').title()
    result = {'title':title, 'stat':stat, 'pvalue':pvalue, 'alpha':alpha}
    return result

columns = df.columns.tolist()
columns = columns[1:]
for col in columns:
    result = ttest(col, democrats[col], republicans[col], 0.01)
    if result['pvalue'] < result['alpha']:
        if republicans[col].mean() > democrats[col].mean():
            print('The Republicans support the {} issue more than the Democrats'.format(result['title']))
        else:
            print('The Democrats support the {} issue more than the Republicans'.format(result['title']))
    else:
        print('The difference between parties on the {} issue is not statistically significant.'.format(result['title']))
```
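`ttest_ind` as used above assumes the two samples have equal variances. Passing `equal_var=False` runs Welch's t-test instead, which drops that assumption and is a common robustness check when group sizes differ. A small sketch on synthetic yes/no votes, not on the congressional dataset:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)

# Two synthetic voting samples, coded 1 = yes, 0 = no.
group_a = rng.binomial(1, 0.8, size=200)  # supports the issue ~80% of the time
group_b = rng.binomial(1, 0.4, size=150)  # supports the issue ~40% of the time

stat, p_student = ttest_ind(group_a, group_b)                   # Student's t (equal variances)
stat_w, p_welch = ttest_ind(group_a, group_b, equal_var=False)  # Welch's t (unequal variances)

print(p_student, p_welch)
```

With a gap this large both variants agree decisively; they diverge mainly when the groups differ substantially in size and variance, which is exactly the situation in the voting data (democrats outnumber republicans).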
---
### Note

* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.

```
# Dependencies and Setup
import pandas as pd

# File to Load
purchase_file = "Resources/purchase_data.csv"

# Read Purchasing File and store into Pandas data frame
purchase_data = pd.read_csv(purchase_file)
purchase_data
```

## Player Count

* Display the total number of players

```
total_players = purchase_data['SN'].nunique()
total_players_df = pd.DataFrame({"total_players":[total_players]})
total_players_df
```

## Purchasing Analysis (Total)

* Run basic calculations to obtain number of unique items, average price, etc.
* Create a summary data frame to hold the results
* Optional: give the displayed data cleaner formatting
* Display the summary data frame

```
no_uniq_itms = purchase_data['Item Name'].nunique()
avg_price = purchase_data['Price'].mean()
total_purchase = purchase_data['Item ID'].count()
total_revenue = purchase_data['Price'].sum()

analysis_pur = [{'no_uniq_itms': no_uniq_itms, 'avg_price': avg_price, 'total_purchase':total_purchase, 'total_revenue':total_revenue}]
pur_anlys_df = pd.DataFrame(analysis_pur)
pur_anlys_df
```

## Gender Demographics

* Percentage and Count of Male Players
* Percentage and Count of Female Players
* Percentage and Count of Other / Non-Disclosed

```
gender_count = purchase_data.groupby('Gender')['SN'].nunique()
#pct_gender_count = gender_count/576
pct_gen_players = gender_count/total_players*100

# Create DataFrame
gen_demo_df = pd.DataFrame({'pct_gen_players':pct_gen_players, 'gender_count': gender_count})
gen_demo_df.index.name = None

# Print the output.
gen_demo_df
```

## Purchasing Analysis (Gender)

* Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. by gender
* Create a summary data frame to hold the results
* Optional: give the displayed data cleaner formatting
* Display the summary data frame

```
gen_purchase_count = purchase_data.groupby('Gender')['Purchase ID'].nunique()
gen_purchase_avg = purchase_data.groupby('Gender')['Price'].mean()
gen_purchase_total = purchase_data.groupby('Gender')['Price'].sum()
gen_avg_per_person = gen_purchase_total/gender_count

# Create DataFrame
gen_pur_analysis_df = pd.DataFrame({'gen_purchase_count':gen_purchase_count, 'gen_purchase_avg': gen_purchase_avg, 'gen_purchase_total':gen_purchase_total, 'gen_avg_per_person':gen_avg_per_person })
gen_pur_analysis_df.index.name = None
gen_pur_analysis_df
```

## Age Demographics

* Establish bins for ages
* Categorize the existing players using the age bins. Hint: use pd.cut()
* Calculate the numbers and percentages by age group
* Create a summary data frame to hold the results
* Optional: round the percentage column to two decimal points
* Display Age Demographics Table

```
age_bins = [0, 9, 14, 19, 24, 29, 34, 39, 100]
age_group = ['<10', '10-14', '15-19', '20-24', '25-29', '30-34', '35-39', '40+']
purchase_data["age_groups"] = pd.cut(purchase_data["Age"], age_bins, labels=age_group)
purchase_data

age_grp_total = purchase_data.groupby('age_groups')['SN'].nunique()
pct_age_grp = (age_grp_total/total_players) *100

age_demo_summary_df = pd.DataFrame({'age_grp_total':age_grp_total, 'pct_age_grp':pct_age_grp })
age_demo_summary_df.index.name=None
age_demo_summary_df
```

## Purchasing Analysis (Age)

* Bin the purchase_data data frame by age
* Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. in the table below
* Create a summary data frame to hold the results
* Optional: give the displayed data cleaner formatting
* Display the summary data frame

```
age_pur_total = purchase_data.groupby('age_groups')['Purchase ID'].nunique()
age_grp_pp_sum = purchase_data.groupby('age_groups')['Price'].sum()
age_grp_avg_pp = age_grp_pp_sum/age_pur_total
age_grp_avg_indiv_pur = age_grp_pp_sum/age_grp_total

analysis_pur_age_df = pd.DataFrame({'age_pur_total': age_pur_total, 'age_grp_pp_sum': age_grp_pp_sum, 'age_grp_avg_pp':age_grp_avg_pp, 'age_grp_avg_indiv_pur':age_grp_avg_indiv_pur})
analysis_pur_age_df.index.name = None
analysis_pur_age_df
```

## Top Spenders

* Run basic calculations to obtain the results in the table below
* Create a summary data frame to hold the results
* Sort the total purchase value column in descending order
* Optional: give the displayed data cleaner formatting
* Display a preview of the summary data frame

```
spenders_count = purchase_data.groupby('SN')
purchase_count = spenders_count['Purchase ID'].count()
avg_pur_price = spenders_count['Price'].mean()
total_purchase_value = spenders_count['Price'].sum()

top_spenders_df = pd.DataFrame({'purchase_count':purchase_count, 'avg_pur_price':avg_pur_price, 'total_purchase_value':total_purchase_value})
top_spenders_df_formatted = top_spenders_df.sort_values(['total_purchase_value'], ascending=False).head()
top_spenders_df_formatted
```

## Most Popular Items

* Retrieve the Item ID, Item Name, and Item Price columns
* Group by Item ID and Item Name. Perform calculations to obtain purchase count, item price, and total purchase value
* Create a summary data frame to hold the results
* Sort the purchase count column in descending order
* Optional: give the displayed data cleaner formatting
* Display a preview of the summary data frame

```
item_count = purchase_data.groupby(['Item ID','Item Name'])
item_purchase_count = item_count['Purchase ID'].count()
item_total_purchase_value = item_count['Price'].sum()
item_price = item_total_purchase_value/item_purchase_count

most_popular_df = pd.DataFrame({'item_purchase_count':item_purchase_count, 'item_price':item_price, 'item_total_purchase_value':item_total_purchase_value})
most_popular_df_formatted = most_popular_df.sort_values(['item_purchase_count'], ascending=False).head()
most_popular_df_formatted
```

## Most Profitable Items

* Sort the above table by total purchase value in descending order
* Optional: give the displayed data cleaner formatting
* Display a preview of the data frame

```
most_profitable_df = most_popular_df_formatted.sort_values(['item_total_purchase_value'], ascending = False)
most_profitable_df
```
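Several of the instructions above suggest giving the displayed data cleaner formatting, which the summaries skip. One simple approach is mapping a format string over the currency columns once all numeric work is done. A sketch on a toy frame (the column names are illustrative, not taken from the dataset):

```python
import pandas as pd

summary = pd.DataFrame({
    "purchase_count": [3, 5],
    "total_purchase_value": [12.5, 7.0],
})

# Format the dollar column as currency strings. This converts the column
# to strings, so do it on a copy, purely for display, after sorting and
# any other numeric operations.
formatted = summary.copy()
formatted["total_purchase_value"] = formatted["total_purchase_value"].map("${:,.2f}".format)
print(formatted)
```

Applying the same `.map("${:,.2f}".format)` to the price and revenue columns of each summary frame above would satisfy the "cleaner formatting" items.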
---
``` import matplotlib.pyplot as plt import matplotlib.image as mpimg import pickle import numpy as np import cv2 from moviepy.editor import VideoFileClip import math import glob class Left_Right: last_L_points = [] last_R_points = [] def __init__(self, last_L_points, last_R_points): self.last_L_points = last_L_points self.last_R_points = last_R_points calib_image = mpimg.imread(r'C:\Users\pramo\Documents\Project4\camera_cal\calibration1.jpg') plt.imshow(calib_image) objp = np.zeros((6*9,3), np.float32) objp[:,:2] = np.mgrid[0:9,0:6].T.reshape(-1,2) # Arrays to store object points and image points from all the images. objpoints = [] # 3d points in real world space imgpoints = [] # 2d points in image plane. # Make a list of calibration images images = glob.glob(r'C:\Users\pramo\Documents\Project4\camera_cal\calibration*.jpg') show_images = [] # Step through the list and search for chessboard corners for fname in images: img = cv2.imread(fname) gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY) # Find the chessboard corners ret, corners = cv2.findChessboardCorners(gray, (9,6),None) # If found, add object points, image points if ret == True: objpoints.append(objp) imgpoints.append(corners) # Draw and display the corners img = cv2.drawChessboardCorners(img, (9,6), corners, ret) show_images.append (img) ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, gray.shape[::-1], None, None) dist_pickle = {} dist_pickle["mtx"] = mtx dist_pickle["dist"] = dist pickle.dump( dist_pickle, open( "wide_dist_pickle.p", "wb" ) ) fig=plt.figure(figsize=(20, 20)) columns = 2 rows = 10 for i in range(len(show_images)): j= i+1 img = show_images[i].squeeze() fig.add_subplot(rows, columns, j) plt.imshow(img, cmap="gray") plt.show() dist_pickle = pickle.load( open( "wide_dist_pickle.p", "rb" ) ) mtx = dist_pickle["mtx"] dist = dist_pickle["dist"] def cal_undistort(img, mtx, dist): return cv2.undistort(img, mtx, dist, None, mtx) def gray_image(img): thresh = (200, 220) gray = 
cv2.cvtColor(img, cv2.COLOR_RGB2GRAY) binary = np.zeros_like(gray) binary[(gray > thresh[0]) & (gray <= thresh[1])] = 1 return binary def abs_sobel_img(img, orient='x', thresh_min=0, thresh_max=255): gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY) if orient == 'x': abs_sobel = np.absolute(cv2.Sobel(gray , cv2.CV_64F, 1, 0)) if orient == 'y': abs_sobel = np.absolute(cv2.Sobel(gray , cv2.CV_64F, 0, 1)) scaled_sobel = np.uint8(255*abs_sobel/np.max(abs_sobel)) abs_sobel_output = np.zeros_like(scaled_sobel) abs_sobel_output[(scaled_sobel >= thresh_min) & (scaled_sobel <= thresh_max)] = 1 return abs_sobel_output def hls_select(img, thresh_min=0, thresh_max=255): hls = cv2.cvtColor(img, cv2.COLOR_RGB2HLS) s_channel = hls[:,:,2] binary_output = np.zeros_like(s_channel) binary_output[(s_channel > thresh_min) & (s_channel <= thresh_max)] = 1 return binary_output #hls_binary = hls_select(image, thresh=(90, 255)) def wrap_transform(img, inverse ='TRUE'): img_size = (img.shape[1], img.shape[0]) src = np.float32( [[(img_size[0] / 2) - 55, img_size[1] / 2 + 100], [((img_size[0] / 6) - 10), img_size[1]], [(img_size[0] * 5 / 6) + 60, img_size[1]], [(img_size[0] / 2 + 55), img_size[1] / 2 + 100]]) dst = np.float32( [[(img_size[0] / 4), 0], [(img_size[0] / 4), img_size[1]], [(img_size[0] * 3 / 4), img_size[1]], [(img_size[0] * 3 / 4), 0]]) if inverse == 'FALSE': M = cv2.getPerspectiveTransform(src, dst) if inverse == 'TRUE': M = cv2.getPerspectiveTransform(dst, src) return cv2.warpPerspective(img, M, img_size, flags=cv2.INTER_LINEAR) def combined_image(img): undisort_image = cal_undistort(img, mtx, dist) W_image = wrap_transform(undisort_image, inverse ='FALSE') grayimage = gray_image(W_image ) sobelx = abs_sobel_img(W_image,'x', 20, 100) s_binary = hls_select(W_image, 150, 255) color_binary = np.dstack(( np.zeros_like(sobelx), sobelx, s_binary)) * 255 combined_binary = np.zeros_like(sobelx) combined_binary[(s_binary == 1) | (sobelx == 1) | (grayimage == 1)] = 1 return undisort_image, 
sobelx, s_binary, combined_binary, color_binary, W_image img = cv2.imread(r'C:\Users\pramo\Documents\Project4\camera_cal\calibration2.jpg') undisort_image = cal_undistort(img, mtx, dist) f, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 10)) f.tight_layout() ax1.imshow(img) ax1.set_title('Original Image', fontsize=10) ax2.imshow(undisort_image, cmap="gray") ax2.set_title('undisort_image', fontsize=10) plt.subplots_adjust(left=0., right=1, top=0.9, bottom=0.) img = cv2.imread(r'C:\Users\pramo\Documents\Project4\test_images\test6.jpg') undisort_image, sobelx, s_binary, combined_binary, color_binary, W_image = combined_image(img) f, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 10)) f.tight_layout() ax1.imshow(img) ax1.set_title('Original Image', fontsize=10) ax2.imshow(undisort_image, cmap="gray") ax2.set_title('undisort_image', fontsize=10) plt.subplots_adjust(left=0., right=1, top=0.9, bottom=0.) f, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 10)) f.tight_layout() ax1.imshow(sobelx) ax1.set_title('sobelx', fontsize=10) ax2.imshow(s_binary, cmap="gray") ax2.set_title('s_binary', fontsize=10) plt.subplots_adjust(left=0., right=1, top=0.9, bottom=0.) f, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 10)) f.tight_layout() ax1.imshow(color_binary) ax1.set_title('color_binary', fontsize=10) ax2.imshow(combined_binary, cmap="gray") ax2.set_title('combined_binary', fontsize=10) plt.subplots_adjust(left=0., right=1, top=0.9, bottom=0.) f, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 10)) f.tight_layout() ax1.imshow(W_image) ax1.set_title('W_image', fontsize=10) ax2.imshow(img) ax2.set_title('Original Image', fontsize=10) plt.subplots_adjust(left=0., right=1, top=0.9, bottom=0.) 
def hist(img): return np.sum(img[img.shape[0]//2:,:], axis=0) def find_lane_pixels(binary_warped, image_show = True): # Take a histogram of the bottom half of the image histogram = hist(binary_warped) # Create an output image to draw on and visualize the result out_img = np.dstack((binary_warped, binary_warped, binary_warped)) # Find the peak of the left and right halves of the histogram # These will be the starting point for the left and right lines midpoint = np.int(histogram.shape[0]//2) leftx_base = np.argmax(histogram[:midpoint]) rightx_base = np.argmax(histogram[midpoint:]) + midpoint # HYPERPARAMETERS # Choose the number of sliding windows nwindows = 8 # Set the width of the windows +/- margin margin = 150 # Set minimum number of pixels found to recenter window minpix = 50 # Set height of windows - based on nwindows above and image shape window_height = np.int(binary_warped.shape[0]//nwindows) # Identify the x and y positions of all nonzero pixels in the image nonzero = binary_warped.nonzero() nonzeroy = np.array(nonzero[0]) nonzerox = np.array(nonzero[1]) # Current positions to be updated later for each window in nwindows leftx_current = leftx_base rightx_current = rightx_base # Create empty lists to receive left and right lane pixel indices left_lane_inds = [] right_lane_inds = [] # Step through the windows one by one for window in range(nwindows): # Identify window boundaries in x and y (and right and left) win_y_low = binary_warped.shape[0] - (window+1)*window_height win_y_high = binary_warped.shape[0] - window*window_height win_xleft_low = leftx_current - margin win_xleft_high = leftx_current + margin win_xright_low = rightx_current - margin win_xright_high = rightx_current + margin # Draw the windows on the visualization image cv2.rectangle(out_img,(win_xleft_low,win_y_low), (win_xleft_high,win_y_high),(0,255,0), 2) cv2.rectangle(out_img,(win_xright_low,win_y_low), (win_xright_high,win_y_high),(0,255,0), 2) # Identify the nonzero pixels in x and y 
within the window # good_left_inds = ((nonzeroy >= win_y_low) & (nonzeroy < win_y_high) & (nonzerox >= win_xleft_low) & (nonzerox < win_xleft_high)).nonzero()[0] good_right_inds = ((nonzeroy >= win_y_low) & (nonzeroy < win_y_high) & (nonzerox >= win_xright_low) & (nonzerox < win_xright_high)).nonzero()[0] # Append these indices to the lists left_lane_inds.append(good_left_inds) right_lane_inds.append(good_right_inds) # If you found > minpix pixels, recenter next window on their mean position if len(good_left_inds) > minpix: leftx_current = np.int(np.mean(nonzerox[good_left_inds])) if len(good_right_inds) > minpix: rightx_current = np.int(np.mean(nonzerox[good_right_inds])) # Concatenate the arrays of indices (previously was a list of lists of pixels) try: left_lane_inds = np.concatenate(left_lane_inds) right_lane_inds = np.concatenate(right_lane_inds) except ValueError: # Avoids an error if the above is not implemented fully pass # Extract left and right line pixel positions leftx = nonzerox[left_lane_inds] lefty = nonzeroy[left_lane_inds] rightx = nonzerox[right_lane_inds] righty = nonzeroy[right_lane_inds] left_fit = np.polyfit(lefty, leftx, 2) right_fit = np.polyfit(righty, rightx, 2) if image_show == True: #Generate x and y values for plotting ploty = np.linspace(0, binary_warped.shape[0]-1, binary_warped.shape[0] ) try: left_fitx = left_fit[0]*ploty**2 + left_fit[1]*ploty + left_fit[2] right_fitx = right_fit[0]*ploty**2 + right_fit[1]*ploty + right_fit[2] except TypeError: #Avoids an error if `left` and `right_fit` are still none or incorrect print('The function failed to fit a line!') left_fitx = 1*ploty**2 + 1*ploty right_fitx = 1*ploty**2 + 1*ploty ## Visualization ## # Colors in the left and right lane regions out_img[lefty, leftx] = [255, 0, 0] out_img[righty, rightx] = [0, 0, 255] if image_show == True: return out_img, left_fit, right_fit else: return left_fit, right_fit images = glob.glob(r'C:\Users\pramo\Documents\Project4\test_images\test*.jpg') 
show_images = [] # Step through the list and search for chessboard corners for fname in images: img = cv2.imread(fname) undisort_image, sobelx, s_binary, combined_binary, color_binary, W_image = combined_image(img) outImage, left_fit, right_fit = find_lane_pixels(combined_binary, image_show = True) show_images.append (outImage) fig=plt.figure(figsize=(20, 20)) columns = 2 rows = 4 for i in range(len(show_images)): j= i+1 img = show_images[i].squeeze() fig.add_subplot(rows, columns, j) plt.imshow(img) plt.show() def fit_poly(img_shape, left_fitn, right_fitn): # Generate x and y values for plotting ploty = np.linspace(0, img_shape[0]-1, img_shape[0]) ### TO-DO: Calc both polynomials using ploty, left_fit and right_fit ### left_fitx = left_fitn[0]*ploty**2 + left_fitn[1]*ploty + left_fitn[2] right_fitx = right_fitn[0]*ploty**2 + right_fitn[1]*ploty + right_fitn[2] return left_fitx, right_fitx, ploty def fit_polynomial(binary_warped, left_fit, right_fit, image_show = True): # Find our lane pixels first margin = 10 nonzero = binary_warped.nonzero() nonzeroy = np.array(nonzero[0]) nonzerox = np.array(nonzero[1]) left_lane_inds = ((nonzerox > (left_fit[0]*(nonzeroy**2) + left_fit[1]*nonzeroy + left_fit[2] - margin)) & (nonzerox < (left_fit[0]*(nonzeroy**2) + left_fit[1]*nonzeroy + left_fit[2] + margin))) right_lane_inds = ((nonzerox > (right_fit[0]*(nonzeroy**2) + right_fit[1]*nonzeroy + right_fit[2] - margin)) & (nonzerox < (right_fit[0]*(nonzeroy**2) + right_fit[1]*nonzeroy + right_fit[2] + margin))) # Again, extract left and right line pixel positions leftx = nonzerox[left_lane_inds] lefty = nonzeroy[left_lane_inds] rightx = nonzerox[right_lane_inds] righty = nonzeroy[right_lane_inds] # Color in left and right line pixels left_fitn = np.polyfit(lefty, leftx, 2) right_fitn = np.polyfit(righty, rightx, 2) if image_show == True: left_fitx, right_fitx, ploty = fit_poly(binary_warped.shape, left_fitn, right_fitn) out_img = np.dstack((binary_warped, binary_warped, 
binary_warped))*255
    window_img = np.zeros_like(out_img)
    out_img[nonzeroy[left_lane_inds], nonzerox[left_lane_inds]] = [255, 0, 0]
    out_img[nonzeroy[right_lane_inds], nonzerox[right_lane_inds]] = [0, 0, 255]

    # Generate a polygon to illustrate the search window area
    # And recast the x and y points into usable format for cv2.fillPoly()
    left_line_window1 = np.array([np.transpose(np.vstack([left_fitx-margin, ploty]))])
    left_line_window2 = np.array([np.flipud(np.transpose(np.vstack([left_fitx+margin, ploty])))])
    left_line_pts = np.hstack((left_line_window1, left_line_window2))
    right_line_window1 = np.array([np.transpose(np.vstack([right_fitx-margin, ploty]))])
    right_line_window2 = np.array([np.flipud(np.transpose(np.vstack([right_fitx+margin, ploty])))])
    right_line_pts = np.hstack((right_line_window1, right_line_window2))

    # Draw the lane onto the warped blank image
    cv2.fillPoly(window_img, np.int_([left_line_pts]), (0, 255, 0))
    cv2.fillPoly(window_img, np.int_([right_line_pts]), (0, 255, 0))
    out_img = cv2.addWeighted(out_img, 1, window_img, 0.3, 0)
    ## End visualization steps ##

    if image_show == True:
        return out_img
    else:
        return left_fitn, right_fitn


img = cv2.imread(r'C:\Users\pramo\Documents\Project4\test_images\test8.jpg')
undisort_image, sobelx, s_binary, combined_binary, color_binary, W_image = combined_image(img)
outImage = fit_polynomial(combined_binary, left_fit, right_fit, image_show=True)
plt.imshow(outImage)


def center(X_pointL, X_pointR):
    mid_pointx = (X_pointL + X_pointR)/2
    image_mid_pointx = 640
    dist = distance(mid_pointx, image_mid_pointx)
    dist = dist*(3.7/700)
    return dist, mid_pointx


def distance(pointL, pointR):
    return math.sqrt((pointL - pointR)**2)


def measure_curvature_pixels(img_shape, left_fit, right_fit):
    ym_per_pix = 30/720   # meters per pixel in y dimension
    xm_per_pix = 3.7/700  # meters per pixel in x dimension

    # Start by generating our fake example data
    # Make sure to feed in your real data instead in your project!
    leftx, rightx, ploty = fit_poly(img_shape, left_fit, right_fit)
    leftx = leftx[::-1]    # Reverse to match top-to-bottom in y
    rightx = rightx[::-1]  # Reverse to match top-to-bottom in y
    first_element_L = leftx[-720]
    first_element_R = rightx[-720]
    center_dist, mid_pointx = center(first_element_L, first_element_R)

    # Fit a second order polynomial to pixel positions in each fake lane line
    # Fit new polynomials to x,y in world space
    left_fit_cr = np.polyfit(ploty*ym_per_pix, leftx*xm_per_pix, 2)
    right_fit_cr = np.polyfit(ploty*ym_per_pix, rightx*xm_per_pix, 2)

    # Define y-value where we want radius of curvature
    # We'll choose the maximum y-value, corresponding to the bottom of the image
    y_eval = np.max(ploty)

    # Calculation of R_curve (radius of curvature)
    left_curverad = ((1 + (2*left_fit_cr[0]*y_eval*ym_per_pix + left_fit_cr[1])**2)**1.5) / np.absolute(2*left_fit_cr[0])
    right_curverad = ((1 + (2*right_fit_cr[0]*y_eval*ym_per_pix + right_fit_cr[1])**2)**1.5) / np.absolute(2*right_fit_cr[0])
    return left_curverad, right_curverad, center_dist, mid_pointx


def Sanity_Check(img_shape, left_fit, right_fit):
    xm_per_pix = 3.7/700  # meters per pixel in x dimension
    ploty = np.linspace(0, 719, num=720)
    left_fitx = left_fit[0]*ploty**2 + left_fit[1]*ploty + left_fit[2]
    right_fitx = right_fit[0]*ploty**2 + right_fit[1]*ploty + right_fit[2]
    left_fitx, right_fitx, ploty = fit_poly(img_shape, left_fit, right_fit)
    left_fitx = left_fitx[::-1]    # Reverse to match top-to-bottom in y
    right_fitx = right_fitx[::-1]  # Reverse to match top-to-bottom in y
    last_element_L = left_fitx[-1]
    last_element_R = right_fitx[-1]
    #print(last_element_L)
    mid_element_L = left_fitx[-360]
    mid_element_R = right_fitx[-360]
    first_element_L = left_fitx[-720]
    first_element_R = right_fitx[-720]
    b_dist = (distance(last_element_L, last_element_R)*xm_per_pix)
    m_dist = (distance(mid_element_L, mid_element_R)*xm_per_pix)
    t_dist = (distance(first_element_L, first_element_R)*xm_per_pix)
    return b_dist, m_dist, t_dist


def draw_poly(u_imag, binary_warped, left_fit, right_fit):
    warp_zero = np.zeros_like(binary_warped).astype(np.uint8)
    color_warp = np.dstack((warp_zero, warp_zero, warp_zero))

    # Recast the x and y points into usable format for cv2.fillPoly()
    left_fitx, right_fitx, ploty = fit_poly(binary_warped.shape, left_fit, right_fit)
    pts_left = np.array([np.transpose(np.vstack([left_fitx, ploty]))])
    pts_right = np.array([np.flipud(np.transpose(np.vstack([right_fitx, ploty])))])
    pts = np.hstack((pts_left, pts_right))
    pts_left = np.array([pts_left], np.int32)
    pts_right = np.array([pts_right], np.int32)

    # Draw the lane onto the warped blank image
    cv2.fillPoly(color_warp, np.int_([pts]), (0, 255, 0))
    cv2.polylines(color_warp, pts_left, 0, (255, 0, 0), 40)
    cv2.polylines(color_warp, pts_right, 0, (255, 0, 0), 40)

    # Warp the blank back to original image space using inverse perspective matrix (Minv)
    un_warped = wrap_transform(color_warp, inverse='TRUE')
    # Combine the result with the original image
    out_img = cv2.addWeighted(u_imag, 1, un_warped, 0.3, 0)
    return out_img


image_show = False

def process_image(image):
    left_fit = []
    right_fit = []
    undisort_image, sobelx, s_binary, combined_binary, color_binary, W_image = combined_image(image)
    if len(Left_Right.last_L_points) == 0 or len(Left_Right.last_R_points) == 0:
        left_fit, right_fit = find_lane_pixels(combined_binary, image_show=False)
    else:
        left_fit = Left_Right.last_L_points
        right_fit = Left_Right.last_R_points
    left_fit, right_fit = fit_polynomial(combined_binary, left_fit, right_fit, image_show=False)
    b_dist, m_dist, t_dist = Sanity_Check(combined_binary.shape, left_fit, right_fit)
    mean = (b_dist + m_dist + t_dist)/3
    #print(t_dist)
    if (3.8 > mean > 3.1) and (3.5 > t_dist > 3.1):
        Left_Right.last_L_points = left_fit
        Left_Right.last_R_points = right_fit
    else:
        left_fit = Left_Right.last_L_points
        right_fit = Left_Right.last_R_points
    L_curvature, R_Curvature, center_dist, mid_pointx = measure_curvature_pixels(combined_binary.shape, left_fit, right_fit)
    curvature = (L_curvature + R_Curvature)/2
    result = draw_poly(undisort_image, combined_binary, left_fit, right_fit)
    TEXT = 'Center Curvature = %f(m)' % curvature
    font = cv2.FONT_HERSHEY_SIMPLEX
    cv2.putText(result, TEXT, (50, 50), font, 1, (0, 255, 0), 2)
    if (mid_pointx > 640):
        TEXT = 'Away from center = %f(m - To Right)' % center_dist
    else:
        TEXT = 'Away from center = %f(m - To Left)' % center_dist
    cv2.putText(result, TEXT, (50, 100), font, 1, (0, 255, 0), 2)
    return result


img = cv2.imread(r'C:\Users\pramo\Documents\Project4\test_images\test8.jpg')
outImage = process_image(img)
plt.imshow(outImage)

output = 'project_video_out.mp4'
clip = VideoFileClip('project_video.mp4')
yellow_clip = clip.fl_image(process_image)
yellow_clip.write_videofile(output, audio=False)
```
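The `measure_curvature_pixels` function above applies the radius-of-curvature formula for a quadratic fit, R = (1 + (2Ay + B)^2)^(3/2) / |2A|. As a quick sanity check — a self-contained sketch, not part of the project pipeline; `R0` and the sampling range are arbitrary choices — fitting a quadratic to an arc of a circle of known radius should recover that radius:

```python
import numpy as np

# Verify the radius-of-curvature formula on a curve whose curvature is
# known exactly: an arc of the circle (x - R0)^2 + y^2 = R0^2.
R0 = 500.0
y = np.linspace(-50.0, 50.0, 201)
x = R0 - np.sqrt(R0**2 - y**2)   # the arc near x = 0

A, B, _ = np.polyfit(y, x, 2)    # fit x = A*y^2 + B*y + C
y_eval = 0.0
radius = (1 + (2*A*y_eval + B)**2)**1.5 / np.abs(2*A)
print(radius)                    # should come out close to R0
```

Because the arc is shallow (|y| is at most a tenth of R0), the quadratic approximation is tight and the recovered radius agrees with R0 to within a fraction of a percent.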
# 3 More Namespace Operations

### 3.1 `locals()` and `globals()`

Name binding operations covered so far:

- *name* `=` (assignment)
- `del` *name* (unbinds the name)
- `def` *name* function definition (including lambdas)
- `def name(`*names*`):` (function execution)
- *name*`.`*attribute_name* `=`, `__setattr__`, `__delattr__`
- `global`, `nonlocal` (changes scope rules)
- `except Exception as` *name*`:`

```
locals()
len(locals())
```

In the REPL these are the same:

```
locals() == globals()
```

```
x = 0
x
```

The following code is not recommended.

```
locals()['x']
locals()['x'] = 1
locals()['x']
x
```

If you're tempted to use it, try this code, which due to "fast locals" doesn't do what you might expect:

```
def f():
    locals()['x'] = 5
    print(x)

f()
```

### 3.2 The `import` Statement

```
def _dir(obj='__secret', _CLUTTER=dir()):
    """
    A version of dir that excludes clutter and private names.
    """
    if obj == '__secret':
        names = globals().keys()
    else:
        names = dir(obj)
    return [n for n in names if n not in _CLUTTER and not n.startswith('_')]
```

```
_dir()
```

```
import csv
_dir()
csv
_dir(csv)
```

```
csv.reader
csv.writer
csv.spam
csv.spam = 'Python is dangerous'
csv.spam
csv.reader = csv.writer
csv.reader
from csv import reader as csv_reader
_dir()
csv.reader is csv_reader
csv
csv.reader
```

```
del csv
import csv as csv_module
_dir()
csv_module.reader is csv_reader
csv_module.reader
```

```
math
math + 3
del math
print(math)
```

Will the next statement give a `NameError` like the previous statement? Why not?

```
import math
math
del math
```

What if we don't know the name of the module until run-time?

```
import importlib
importlib.import_module('math')
math.pi
math_module = importlib.import_module('math')
math.pi
math_module.pi
module_name = 'math'
import module_name
import 'math'
import math
```

### 3.3 Exercises: The `import` Statement

Explore reloading a module. This is rarely needed and usually only when exploring.
Several statements below will throw errors - try to figure out which ones before you run them.

```
import csv
import importlib
importlib.reload?
del csv
importlib.reload(csv)
importlib.reload('csv')
import csv
importlib.reload('csv')
importlib.reload(csv)
```

### 3.4 Augmented Assignment Statements

Bind two names to the `str` object `'abc'`, then from it create `'abcd'` and rebind (reassign) one of the names:

```
string_1 = string_2 = 'abc'
string_1 is string_2
string_2 = string_2 + 'd'
string_1 is string_2, string_1, string_2
```

This reassigns the second name so it is bound to a new object. This works similarly if we start with two names for one `list` object and then reassign one of the names.

```
list_1 = list_2 = ['a', 'b', 'c']
list_1 is list_2
list_2 = list_2 + ['d']
list_1 is list_2, list_1, list_2
```

If for the `str` objects we instead use an *augmented assignment statement*, specifically *in-place add* `+=`, we get the same behaviour as earlier.

```
string_1 = string_2 = 'abc'
string_2 += 'd'
string_1 is string_2, string_1, string_2
```

However, for the `list` objects the behaviour changes.

```
list_1 = list_2 = ['a', 'b', 'c']
list_2 += ['d']
list_1 is list_2, list_1, list_2
```

The `+=` in `foo += 1` is not just syntactic sugar for `foo = foo + 1`. The `+=` and other augmented assignment statements have their own bytecodes and methods. Notice `BINARY_ADD` vs. `INPLACE_ADD`. The run-time types of the objects to which `name_1` and `name_2` are bound are irrelevant to the bytecode that gets produced.

```
import codeop, dis
dis.dis(codeop.compile_command("name_1 = name_1 + name_2"))
dis.dis(codeop.compile_command("name_1 += name_2"))
```

```
list_2 = ['a', 'b', 'c']
list_2
```

Notice that `__iadd__` returns a value

```
list_2.__iadd__(['d'])
```

and it also changes the list

```
list_2
string_2.__iadd__('4')
```

So what happens when `INPLACE_ADD` operates on the `str` object?
If `INPLACE_ADD` doesn't find `__iadd__` it instead calls `__add__` and reassigns `string_2`, i.e. it falls back to `__add__`.

https://docs.python.org/3/reference/datamodel.html#object.__iadd__:

> These methods are called to implement the augmented arithmetic
> assignments (+=, etc.). These methods should attempt to do the
> operation in-place (modifying self) and return the result (which
> could be, but does not have to be, self). If a specific method is
> not defined, the augmented assignment falls back to the normal
> methods.

Here's similar behaviour with a tuple:

```
tuple_1 = (7,)
tuple_1
tuple_1[0].__iadd__(1)
tuple_1[0] += 1
tuple_1[0] = tuple_1[0] + 1
tuple_1
```

Here's surprising behaviour with a tuple:

```
tuple_2 = ([12, 13],)
tuple_2
tuple_2[0] += [14]
```

What value do we expect `tuple_2` to have?

```
tuple_2
```

Let's simulate the steps to see why this behaviour makes sense.

```
list_1 = [12, 13]
tuple_2 = (list_1,)
tuple_2
temp = list_1.__iadd__([14])
temp
temp == list_1
temp is list_1
tuple_2
tuple_2[0] = temp
```

For later study:

```
dis.dis(codeop.compile_command("tuple_2 = ([12, 13],); tuple_2[0] += [14]"))
dis.dis(codeop.compile_command("tuple_2 = ([12, 13],); temp = tuple_2[0].__iadd__([14]); tuple_2[0] = temp"))
```

For a similar explanation see https://docs.python.org/3/faq/programming.html#faq-augmented-assignment-tuple-error

### 3.5 Function Arguments are Passed by Name Binding

Can functions modify the arguments passed to them?

When a caller passes an argument to a function, the function starts execution with a local name, the parameter from its signature, bound to the argument object passed in.
```
def function_1(string_2):
    print('A -->', string_2)
    string_2 += ' blue'
    print('B -->', string_2)

string_1 = 'red'
string_1
function_1(string_1)
string_1
```

To see more clearly why `string_1` is still a name bound to `'red'`, consider this version which is functionally equivalent but has two changes highlighted in the comments:

```
def function_2(string_2):
    print('A -->', string_2)
    string_2 = string_2 + ' blue'  # Changed from +=
    print('B -->', string_2)

function_2('red')  # Changed from string_1 to 'red'
'red'
```

In both cases the name `string_2` at the beginning of `function_1` and `function_2` was a name that was bound to the `str` object `'red'`, and in both the function-local name `string_2` was re-bound to the new `str` object `'red blue'`.

Let's try this with a `list`.

```
def function_3(list_2):
    print('A -->', list_2)
    list_2 += ['blue']  # += with lists is shorthand for list.extend()
    print('B -->', list_2)

list_1 = ['red']
list_1
function_3(list_1)
list_1
```

In both cases parameter names are bound to arguments, and whether or not the function can or does change the object passed in depends on the object, not how it's passed to the function.
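The last point ties back to section 3.4: the very same `+=` inside a function is invisible to the caller for an `int`, but visible for an object whose type defines an in-place `__iadd__`. A small sketch (the `Tally` class and `bump` function are our own, not from the notebook):

```python
# The same `counter += 5` statement either rebinds a local name (int)
# or mutates the shared object (Tally), depending only on the argument's type.

class Tally:
    def __init__(self):
        self.total = 0
    def __iadd__(self, n):   # mutates self in place and returns it
        self.total += n
        return self

def bump(counter):
    counter += 5             # rebinds for int, mutates for Tally
    return counter

plain = 10
bump(plain)
print(plain)         # still 10: int has no mutating __iadd__

shared = Tally()
bump(shared)
print(shared.total)  # 5: the caller sees the in-place change
```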
# Getting started

## Installing Python

It is recommended that you install the full Anaconda Python 3.8 distribution, as it sets up your Python environment together with a bunch of often-used packages that you'll need during this course. A guide on installing Anaconda can be found here: https://docs.anaconda.com/anaconda/install/. NB: You don't have to install the optional stuff, such as the PyCharm editor.

For more instructions, take a look at: https://github.com/uvacreate/2021-coding-the-humanities/blob/master/setup.md.

If you completed all the steps and you have Python and Jupyter notebooks installed, open this file again as a notebook and continue with the content below. Good luck and have fun! 🎉

# Hello World

This notebook contains some code to allow you to check if everything runs as intended.

[Jupyter notebooks](https://jupyter.org) contain cells of Python code, or text written in [markdown](https://www.markdownguide.org/getting-started/). This cell for instance contains text written in markdown syntax. You can edit it by double clicking on it. You can create new cells using the "+" (top right bar), and you can run cells to 'execute' the markdown syntax they contain and see what happens.

The other type of cells contain Python code and need to be executed. You can either do this by clicking on the cell and then on the play button at the top of the window, or by pressing `shift + ENTER`. Try this with the next cell, and you'll see the result of this first line of Python.

**For a more extended revision of these materials, see http://www.karsdorp.io/python-course (Chapter 1).**

```
# It is customary for your first program to print Hello World! This is how you do it in Python.
print("Hello World!")

# You can comment your code using '#'. What you write afterwards won't be interpreted as code.
# This comes in handy if you want to comment on smaller bits of your code. Or if you want to
# add a TODO for yourself to remind you that some code needs to be added or revised.
```

The code you write is executed from a certain *working directory* (we will see more when doing input/output). You can access your working directory by using a *package* (a bundle of Python code which does something for you) that is part of the so-called Python standard library: `os` (a package to interact with the operating system).

```
import os  # we first import the package

os.getcwd()  # we then can use some of its functionalities. In this case, we get the current working directory (cwd)
```

## Python versions

![You can also do images in markdown!](https://www.python.org/static/img/python-logo@2x.png)

It is important that you at least run a version of Python that is being supported with security updates. Currently (Spring 2021), this means Python 3.6 or higher. You can see all current versions and their support dates on the [Python website](https://www.python.org/downloads/).

For this course it is recommended to have Python 3.8 installed, since every Python version adds, but sometimes also changes, functionality. If you recently installed Python through [Anaconda](https://www.anaconda.com/products/individual#), you're most likely running version 3.8!

Let's check the Python version you are using by importing the `sys` package. Try running the next cell and see its output.

```
import sys

print(sys.executable)  # the path where the Python executable is located
print(sys.version)     # its version
print(sys.version_info)
```

You now printed the version of Python you have installed. You can also check the version of a package via its property `__version__`. A common package for working with tabular data is `pandas` (more on this package later). You can import the package and make it referenceable by another name (a shorthand) by doing:

```
import pandas as pd  # now 'pd' is the shorthand for the 'pandas' package
```

NB: Is this raising an error? Look further down for a (possible) explanation!

Now the `pandas` package can be called by typing `pd`.
The version number of packages is usually stored in a _magic attribute_ or a _dunder_ (= double underscore) called `__version__`.

```
pd.__version__
```

The code above printed something without using the `print()` statement. Let's do the same, but this time by using a `print()` statement.

```
print(pd.__version__)
```

Can you spot the difference? Why do you think this is? What kind of datatype do you think the version number is? And what kind of datatype can be printed on your screen?

We'll go over these differences and the involved datatypes during the first lecture and seminar.

If you want to know more about a (built-in) function of Python, you can check its manual online. The information on the `print()` function can be found in the manual for [built-in functions](https://docs.python.org/3.8/library/functions.html#print).

More on datatypes later on.

### Exercise

Try printing your own name using the `print()` function.

```
# TODO: print your own name

# TODO: print your own name and your age on one line
```

If all of the above cells were executed without any errors, you're clear to go! However, if you did get an error, you should start debugging. Most of the time, the errors returned by Python are quite meaningful. Perhaps you got this message when trying to import the `pandas` package:

```python
---------------------------------------------------------------------------
ModuleNotFoundError                       Traceback (most recent call last)
<ipython-input-26-981caee58ba7> in <module>
----> 1 import pandas as pd

ModuleNotFoundError: No module named 'pandas'
```

If you go over this error message, you can see:

1. The type of error, in this example `ModuleNotFoundError`, with some extra explanation
2. The location in your code where the error occurred or was _raised_, indicated with the ----> arrow

In this case, you do not have this (external) package installed in your Python installation. Have you installed the full Anaconda package?
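For completeness, here is a sketch of what that error type looks like when caught in code rather than read from a traceback (the module name below is deliberately nonexistent):

```python
# ModuleNotFoundError is a subclass of ImportError; the missing module's
# name is available on the exception's .name attribute.
try:
    import not_a_real_package_zzz  # deliberately missing
except ModuleNotFoundError as err:
    print(type(err).__name__, "-", err.name)
```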
You can resolve this error by installing the package from Python's package index ([PyPI](https://pypi.org/)), which is like a store for Python packages you can use in your code. To install the `pandas` package (if missing), run in a cell:

```python
pip install pandas
```

Or to update the `pandas` package you already have installed:

```python
pip install pandas -U
```

Try this in the cell below!

```
# Try either installing or updating (if there is an update) your pandas package
# your code here
```

If you face other errors, then Google (or DuckDuckGo etc.) is your friend. You'll see tons of questions on Python-related problems on websites such as Stack Overflow. It's tempting to simply copy-paste a coding pattern from there into your own code. But if you do, make sure you fully understand what is going on. Also, in assignments in this course, we ask you to:

1. Specify a URL or source of the website/book you got your copied code from
2. Explain in a _short_ text or through comments by line what the copied code is doing

This will be repeated during the lectures. However, if you're still stuck, you can open a discussion in our [Canvas course](https://canvas.uva.nl/courses/22381/discussion_topics). You're also very much invited to engage in threads on the discussion board of others and help them out. Debugging, solving, and explaining these coding puzzles for sure makes you a better programmer!

# Basic stuff

The code below does some basic things using Python. Please check if you know what it does and, if not, whether you can still figure it out. Just traverse through the rest of this notebook by executing each cell if this is all new to you, and try to understand what happens.

![](https://upload.wikimedia.org/wikipedia/commons/thumb/6/6b/Don%27t_Panic.svg/200px-Don%27t_Panic.svg.png)

The [first notebook](https://github.com/uvacreate/2021-coding-the-humanities/blob/master/notebooks/1_Basics.ipynb) that we're discussing in class is paced more slowly.
You can already take a look at it if you want to work ahead. We'll be repeating the concepts below, and more. If you think you already master these 'Python basics' and the material from the first notebook, then get in contact with us for some more challenging exercises!

## Variables and operations

```
a = 2
b = a

# Or, assign two variables at the same time
c, d = 10, 20
c

b += c

# Just typing a variable name in the Python interpreter (= terminal/shell/cell) also returns/prints its value
a

# Now, what's the value of b?
b

# Why the double equals sign? How is this different from the above a = b ?
a == b

# Because the ≠ sign is hard to find on your keyboard
a != b

s = "Hello World!"
print(s)
s[-1]
s[:5]
s[6:]
s[6:-1]
s

words = ["A", "list", "of", "strings"]
words

letters = list(s)  # Names in green are reserved by Python: avoid using them as variable names
letters
```

If you have bound a value to a built-in function of Python by accident, you can undo this by restarting your 'kernel' in Jupyter Notebook. Click `Kernel` and then `Restart` in the bar at the top of the screen. You'll make Python lose its memory of previously declared variables. This also means that you must re-run all cells again if you need the executions and their outcomes.

```
# Sets are unordered collections of unique elements
unique_letters = set(letters)
unique_letters

# Variables have a certain data type.
# Python is very flexible with allowing you to assign variables to data as you like.
# If you need a certain data type, you need to check it explicitly.
type(s)

print("If you forgot the value of variable 'a':", a)
type(a)
type(2.3)
type("Hello")
type(letters)
type(unique_letters)
```

#### Exercise

1. Create variables of each type: integer, float, text, list, and set.
2. Try using mathematical operators such as `+ - * / **` on the numerical datatypes (integer and float)
3. Print their value as a string

```
# Your code here
```

Hint: You can insert more cells by going to `Insert` and then `Insert Cell Above/Below` in this Jupyter Notebook.
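One possible solution sketch for the exercise above (all variable names are our own choices):

```python
# 1. One variable of each requested type
an_int = 7
a_float = 2.5
a_text = "seven and a half"
a_list = [an_int, a_float]
a_set = {1, 2, 3}

# 2. Mathematical operators on the numerical types
total = an_int + a_float   # int + float promotes to float: 9.5
power = an_int ** 2        # exponentiation: 49

# 3. Printing the values as strings
print("total is " + str(total))  # numbers must be converted before string concatenation
print(f"power is {power}")       # or use an f-string
```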
```
import numpy as np
import cv2
import matplotlib.pyplot as plt
import random
import os

img = cv2.imread("./map_bw.png")
Map = np.array(~(img[:,:,0]==0)).astype(int)
Navigable_terrain = np.array(Map.nonzero()).T
Sidx = random.sample(range(0,Navigable_terrain.shape[0]),1)
Eidx = Sidx
while (Sidx==Eidx):
    Eidx = random.sample(range(0,Navigable_terrain.shape[0]),1)
Start = Navigable_terrain[Sidx,:][0]
Goal = Navigable_terrain[Eidx,:][0]

Map = np.array(~(img[:,:,0]==0)).astype(int)
obstacles = np.where(Map == 0)

#img_size = 120
#img = np.ones([img_size,img_size,3],np.int)
#img = img * 255;
#obst_size = 4;  #DO NOT CHANGE
#obstacle_end_points = np.array(
#    [[int(img.shape[0]/obst_size),int(img.shape[1]/obst_size)]
#    ,[int(img.shape[0] - img.shape[0]/obst_size),int(img.shape[1]/obst_size)]
#    ,[int(img.shape[0] - img.shape[0]/obst_size),int(img.shape[1] - img.shape[1]/obst_size)]
#    ,[int(img.shape[0]/obst_size),int(img.shape[1] - img.shape[1]/obst_size)]
#    ])
#cv2.polylines(img,pts=[obstacle_end_points], isClosed=False, color=(0,0,0), thickness = int(img_size/20))
#Map = np.array(img[:,:,2]).astype(int)
#obstacles = np.where(Map == 0)
#Start = np.array([int(img_size/2),int(img_size/2)])
#Goal = np.array([int(img_size/2),int(img_size - img_size/6)])
#print(Start)
#print(Goal)

Start = np.array([83,73])
Goal = np.array([156,111])
img[Start[0],Start[1],:] = 0
img[Start[0],Start[1],0] = 255
img[Goal[0],Goal[1],:] = 0
img[Goal[0],Goal[1],1] = 255
plt.imshow(img)
plt.show()

Traversal_array = np.array([[-1, -1]
                           ,[0, -1]
                           ,[1, -1]
                           ,[-1, 0]
                           ,[1, 0]
                           ,[-1, 1]
                           ,[0, 1]
                           ,[1, 1]])

def GetNavigableNeighbors(pos,Traversal_array,obstacles):
    neighbors = np.ones_like(Traversal_array) * pos
    neighbors = neighbors + Traversal_array
    NavigableNeighbors = []
    for neighbor in neighbors.tolist():
        if not(neighbor in np.append([obstacles[0]],[obstacles[1]],axis=0).T.tolist()):
            NavigableNeighbors.append(neighbor)
    return np.array(NavigableNeighbors)

def euclidean_dist(pt1,pt2):
    return np.sqrt(np.square(pt1[0]-pt2[0]) + np.square(pt1[1]-pt2[1]))

class Node():
    def __init__(self,pos,goal,parent=None):
        self.pos = np.array(pos)
        if parent != None:
            self.gcost = euclidean_dist(pos,parent.pos) + parent.gcost
        else:
            self.gcost = 0
        self.parent = parent
        self.hcost = euclidean_dist(pos,goal)

    def GetFCost(self):
        return self.gcost + self.hcost

    def Print(self):
        print("Position: ",self.pos)
        print("Parent: ",self.parent)
        print("Gcost: ",self.gcost)
        print("Hcost: ",self.hcost)
        print("Fcost: ",self.gcost+self.hcost)

StartNode = Node(Start,Goal)
parent_id = -1
Exploring_nodes = np.array([[parent_id,StartNode.pos[0],StartNode.pos[1],StartNode.GetFCost(),StartNode.hcost,0]])
ExecutingNode = StartNode
ExploredNodes = {'0':ExecutingNode}
#type(ExecutingNode.pos)

while ExecutingNode.pos.astype(int).tolist() != Goal.astype(int).tolist():
    parent_id += 1
    for neighbor in GetNavigableNeighbors(ExecutingNode.pos,Traversal_array,obstacles).tolist():
        if not(neighbor in np.array(Exploring_nodes[:,1:3]).tolist()):
            CurrentNode = Node(neighbor,Goal,ExecutingNode)
            Exploring_nodes = np.append(Exploring_nodes
                                        ,[[parent_id, CurrentNode.pos[0], CurrentNode.pos[1], CurrentNode.GetFCost(), CurrentNode.hcost,0]]
                                        ,axis = 0)
            dict_index = Exploring_nodes.shape[0] - 1
            ExploredNodes.update({str(dict_index) : CurrentNode})
            #CurrentNode.Print()
    idx = Exploring_nodes[:,1:3].tolist().index(ExecutingNode.pos.tolist())
    Exploring_nodes[idx,5] = 1
    non_visited = np.where(Exploring_nodes[:,5] != 1)[0]
    lowest_fcost = non_visited[np.where(Exploring_nodes[non_visited,3] == np.min(Exploring_nodes[non_visited,3]))[0]]
    #print(ExecutingNode.pos)
    if len(lowest_fcost) == 1:
        ExecutingNode = ExploredNodes[str(lowest_fcost[0])]
        #print("Fcost: ",lowest_fcost)
    else:
        lowest_hcost = non_visited[np.where(Exploring_nodes[non_visited,4] == np.min(Exploring_nodes[non_visited,4]))[0]]
        #print("Hcost: ",lowest_hcost)
        ExecutingNode = ExploredNodes[str(lowest_hcost[0])]
    #img[int(ExecutingNode.pos[0]),int(ExecutingNode.pos[1]),:] = 0
    #img[int(ExecutingNode.pos[0]),int(ExecutingNode.pos[1]),2] = 255

plt.imshow(img)
plt.show()
print(Goal)

def GetPath(Start,FinalExecutingNode):
    Path = []
    current_node = FinalExecutingNode
    while Start.astype(int).tolist() != current_node.pos.astype(int).tolist():
        Path.append(current_node.pos.astype(int).tolist())
        current_node = current_node.parent
        img[int(current_node.pos[0]),int(current_node.pos[1]),:] = 0
        img[int(current_node.pos[0]),int(current_node.pos[1]),2] = 255
    return Path

#print(GetPath(Start,ExecutingNode))
#img = cv2.imread("./map_bw.png")
Path = GetPath(Start,ExecutingNode)
plt.imshow(img)
plt.show()

parent_node = ExecutingNode.parent
parent_node.Print()
```
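For comparison with the NumPy bookkeeping above, the same A* idea can be written compactly with a heap-based frontier and plain dictionaries — a self-contained sketch on a toy grid, independent of the map file used in the notebook:

```python
import heapq

def astar(grid, start, goal):
    """A* on a 2D grid of 0 (free) / 1 (blocked), with 4-connected moves
    and a Manhattan-distance heuristic."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_heap = [(h(start), start)]      # frontier ordered by f = g + h
    gcost = {start: 0}
    parent = {start: None}
    while open_heap:
        _, pos = heapq.heappop(open_heap)
        if pos == goal:                  # reconstruct the path via parents
            path = []
            while pos is not None:
                path.append(pos)
                pos = parent[pos]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (pos[0] + dr, pos[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                g = gcost[pos] + 1
                if g < gcost.get(nxt, float("inf")):  # found a cheaper route
                    gcost[nxt] = g
                    parent[nxt] = pos
                    heapq.heappush(open_heap, (g + h(nxt), nxt))
    return None  # goal unreachable

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))
print(path)  # detours around the blocked middle row
```

Storing g-costs and parents in dictionaries keyed by position avoids the linear `list.index` lookups the notebook performs on its `Exploring_nodes` array.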
### Basic Programming 13

### 1. Write a program that calculates and prints the value according to the given formula:
### Q = Square root of [(2 C D)/H]
### Following are the fixed values of C and H:
### C is 50. H is 30.
### D is the variable whose values should be input to your program in a comma-separated sequence.
### Example
### Let us assume the following comma separated input sequence is given to the program:
### 100,150,180
### The output of the program should be:
### 18,22,24

```
import math

numbers = input("Provide D as a comma separated sequence: ")
numbers = numbers.split(',')
result_list = []
for D in numbers:
    Q = round(math.sqrt(2 * 50 * int(D) / 30))
    result_list.append(str(Q))
print(','.join(result_list))
```

### 2. Write a program which takes 2 digits, X,Y as input and generates a 2-dimensional array.
### The element value in the i-th row and j-th column of the array should be i*j.
### Note: i=0,1,...,X-1; j=0,1,...,Y-1.
### Example
### Suppose the following inputs are given to the program:
### 3,5
### Then, the output of the program should be:
### [[0, 0, 0, 0, 0], [0, 1, 2, 3, 4], [0, 2, 4, 6, 8]]

```
x = int(input('Enter the value of X: '))
y = int(input('Enter the value of Y: '))
l1 = []
for i in range(x):
    l2 = []
    for j in range(y):
        l2.append(i*j)
    l1.append(l2)
l1
```

### 3. Write a program that accepts a comma separated sequence of words as input and prints the words in a comma-separated sequence after sorting them alphabetically.
### Suppose the following input is supplied to the program:
### without,hello,bag,world
### Then, the output should be:
### bag,hello,without,world

```
items = [x for x in input('Enter comma separated words: ').split(',')]
items.sort()
print(','.join(items))
```

### 4. Write a program that accepts a sequence of whitespace separated words as input and prints the words after removing all duplicate words and sorting them alphanumerically.
### Suppose the following input is supplied to the program:
### hello world and practice makes perfect and hello world again
### Then, the output should be:
### again and hello makes perfect practice world

```
s = input('Enter the sequence of whitespace separated words: ').split(' ')
print(' '.join(sorted(set(s))))
```

### 5. Write a program that accepts a sentence and calculates the number of letters and digits.
### Suppose the following input is supplied to the program:
### hello world! 123
### Then, the output should be:
### LETTERS 10
### DIGITS 3

```
s = input("Input a string : ")
digits = letters = 0
for c in s:
    if c.isdigit():
        digits += 1
    elif c.isalpha():
        letters += 1
    else:
        pass
print("Letters", letters)
print("Digits", digits)
```

### 6. A website requires the users to input username and password to register. Write a program to check the validity of password input by users.
### Following are the criteria for checking the password:
#### 1. At least 1 letter between [a-z]
#### 2. At least 1 number between [0-9]
#### 3. At least 1 letter between [A-Z]
#### 4. At least 1 character from [$#@]
#### 5. Minimum length of transaction password: 6
#### 6. Maximum length of transaction password: 12
### Your program should accept a sequence of comma separated passwords and will check them according to the above criteria. Passwords that match the criteria are to be printed, each separated by a comma.
### Example
### If the following passwords are given as input to the program:
### ABd1234@1,a F1#,2w3E*,2We3345
### Then, the output of the program should be:
### ABd1234@1

```
import re

pswd = input("Type the passwords in comma separated form: ").split(",")
valid = []
for i in pswd:
    if len(i) < 6 or len(i) > 12:
        continue
    elif not re.search("[a-z]", i):
        continue
    elif not re.search("[A-Z]", i):
        continue
    elif not re.search("[0-9]", i):
        continue
    elif not re.search("[$#@]", i):  # the stated criteria require $, # or @
        continue
    else:
        valid.append(i)
if valid:
    print(",".join(valid))
else:
    print('Invalid password')
```
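The validity checks in exercise 6 can also be collected into a single predicate, which makes them easy to test — a sketch following the exercise's stated criteria (`is_valid` is our own name):

```python
import re

def is_valid(pw):
    """Check the exercise's criteria for one password."""
    return (6 <= len(pw) <= 12
            and re.search(r"[a-z]", pw) is not None
            and re.search(r"[A-Z]", pw) is not None
            and re.search(r"[0-9]", pw) is not None
            and re.search(r"[$#@]", pw) is not None)

passwords = "ABd1234@1,a F1#,2w3E*,2We3345".split(",")
print(",".join(p for p in passwords if is_valid(p)))  # ABd1234@1
```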
```
!pip install efficientnet

# import the libraries needed
import pandas as pd
import numpy as np
import os
import cv2
from tqdm import tqdm_notebook as tqdm
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from keras_preprocessing.image import ImageDataGenerator
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Dense
import efficientnet.tfkeras as efn
import warnings
warnings.filterwarnings("ignore")

current_path = r'C:\Users\nguyent2\Desktop\Kaggle-Four-Shapes-Classification-Challenge\Kaggle Dataset\shapes'
circle_paths = os.listdir(os.path.join(current_path, 'circle'))
square_paths = os.listdir(os.path.join(current_path, 'square'))
star_paths = os.listdir(os.path.join(current_path, 'star'))
triangle_paths = os.listdir(os.path.join(current_path, 'triangle'))
print(f'We got {len(circle_paths)} circles, {len(square_paths)} squares, {len(star_paths)} stars, and {len(triangle_paths)} triangles')

circles = pd.DataFrame()
squares = pd.DataFrame()
stars = pd.DataFrame()
triangles = pd.DataFrame()

for n, i in enumerate(tqdm(range(len(circle_paths)))):
    circle_path = os.path.join(current_path, 'circle', circle_paths[i])
    circles.loc[n, 'path'] = circle_path
    circles.loc[n, 'circle'] = 1
    circles.loc[n, 'square'] = 2
    circles.loc[n, 'star'] = 3
    circles.loc[n, 'triangle'] = 0

for n, i in enumerate(tqdm(range(len(square_paths)))):
    square_path = os.path.join(current_path, 'square', square_paths[i])
    squares.loc[n, 'path'] = square_path
    squares.loc[n, 'circle'] = 0
    squares.loc[n, 'square'] = 1
    squares.loc[n, 'star'] = 2
    squares.loc[n, 'triangle'] = 3

for n, i in enumerate(tqdm(range(len(star_paths)))):
    star_path = os.path.join(current_path, 'star', star_paths[i])
    stars.loc[n, 'path'] = star_path
    stars.loc[n, 'circle'] = 3
    stars.loc[n, 'square'] = 0
    stars.loc[n, 'star'] = 1
    stars.loc[n, 'triangle'] = 2

for n, i in enumerate(tqdm(range(len(triangle_paths)))):
    triangle_path = os.path.join(current_path, 'triangle', triangle_paths[i])
    triangles.loc[n, 'path'] = triangle_path
    triangles.loc[n, 'circle'] = 2
    triangles.loc[n, 'square'] = 3
    triangles.loc[n, 'star'] = 0
    triangles.loc[n, 'triangle'] = 1

data = pd.concat([circles, squares, stars, triangles], axis=0).sample(frac=1.0, random_state=42).reset_index(drop=True)

plt.figure(figsize=(16, 16))
for i in range(36):
    plt.subplot(6, 6, i+1)
    img = cv2.imread(data.path[i])
    plt.imshow(img)
    plt.title(data.iloc[i, 1:].sort_values().index[1])
    plt.axis('off')

train, test = train_test_split(data, test_size=.3, random_state=42)
train.shape, test.shape

example = train.sample(n=1).reset_index(drop=True)
example_data_gen = ImageDataGenerator(
    rescale=1./255,
    horizontal_flip=True,
    vertical_flip=True,
)
example_gen = example_data_gen.flow_from_dataframe(example,
                                                   target_size=(200, 200),
                                                   x_col="path",
                                                   y_col=['circle', 'square', 'star', 'triangle'],
                                                   class_mode='raw',
                                                   shuffle=False,
                                                   batch_size=32)

plt.figure(figsize=(20, 20))
for i in range(0, 9):
    plt.subplot(3, 3, i+1)
    for X_batch, _ in example_gen:
        image = X_batch[0]
        plt.imshow(image)
        plt.axis('off')
        break

test_data_gen = ImageDataGenerator(rescale=1./255)
train_data_gen = ImageDataGenerator(
    rescale=1./255,
    horizontal_flip=True,
    vertical_flip=True,
)
train_generator = train_data_gen.flow_from_dataframe(train,
                                                     target_size=(200, 200),
                                                     x_col="path",
                                                     y_col=['circle', 'square', 'star', 'triangle'],
                                                     class_mode='raw',
                                                     shuffle=False,
                                                     batch_size=32)
test_generator = test_data_gen.flow_from_dataframe(test,
                                                   target_size=(200, 200),
                                                   x_col="path",
                                                   y_col=['circle', 'square', 'star', 'triangle'],
                                                   class_mode='raw',
                                                   shuffle=False,
                                                   batch_size=1)

def get_model():
    base_model = efn.EfficientNetB0(weights='imagenet', include_top=False, pooling='avg', input_shape=(200, 200, 3))
    x = base_model.output
    predictions = Dense(4, activation='softmax')(x)
    model = Model(inputs=base_model.input, outputs=predictions)
    model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
    return model

model = get_model()
model.fit_generator(train_generator, epochs=1, steps_per_epoch=train_generator.n/32, ) model.evaluate(test_generator) pred_test = np.argmax(model.predict(test_generator, verbose=1), axis=1) plt.figure(figsize=(24,24)) for i in range(100): plt.subplot(10,10,i+1) img = cv2.imread(test.reset_index(drop=True).path[i]) plt.imshow(img) plt.title(test.reset_index(drop=True).iloc[0,1:].index[pred_test[i]]) plt.axis('off') ```
```
import os
import math
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn import datasets
from sklearn import metrics
from sklearn.preprocessing import LabelBinarizer
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix, roc_auc_score, f1_score, precision_score, recall_score

os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'  # note: `import os` was missing in the original cell

# !gdown --id '1S9iwczSf6KL5jMSmU20SXKCSD3BUx4o_' --output level-6.csv  # GMPR_genus
!gdown --id '1q0yp1iM66BKvqee46bOuSZYwl_SJCTp0' --output level-6.csv  # GMPR_species

train = pd.read_csv("level-6.csv")
train.head()
train.info()

# encode the Diagnosis labels (e.g. 'Cancer' / 'Normal') as integers
from sklearn.preprocessing import LabelEncoder
labelencoder = LabelEncoder()
train["Diagnosis"] = labelencoder.fit_transform(train["Diagnosis"])
# test["Diagnosis"] = labelencoder.fit_transform(test["Diagnosis"])

train

not_select = ["index", "Diagnosis"]
df_final_select = train.drop(not_select, axis=1)
```

# Random Forest Classifier

```
# Use RandomForestClassifier to predict Cancer
x = df_final_select
y = train["Diagnosis"]

X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=0)

rfc = RandomForestClassifier(n_estimators=1000)
rfc.fit(X_train, y_train)
y_predict = rfc.predict(X_test)

score_rfc = rfc.score(X_test, y_test)
score_rfc_train = rfc.score(X_train, y_train)
print("train_accuracy = ", score_rfc_train * 100, " %")
print("val_accuracy = ", score_rfc * 100, " %")

mat = confusion_matrix(y_test, y_predict)
sns.heatmap(mat.T, square=True, annot=True, fmt='d', cbar=False)
plt.xlabel('true label')
plt.ylabel('predicted label')

score_recall = recall_score(y_test, y_predict, average=None)
f1score = f1_score(y_test, y_predict, average="macro")
precisionscore = precision_score(y_test, y_predict, average=None)
auc_roc = roc_auc_score(y_test, y_predict)
print("precision = ", precisionscore)
print("recall = ", score_recall)
print("auc_roc = ", auc_roc)
print("f1_score = ", f1score)

with open('RF_result.csv', 'w') as f:
    f.write('Precision_Normal,Precision_Cancer,Recall_Normal,Recall_Cancer,Auc_Score,F1_Score')
    f.write('\n')
    f.write(str(precisionscore[0]) + ',' + str(precisionscore[1]) + ',' +
            str(score_recall[0]) + ',' + str(score_recall[1]) + ',' +
            str(auc_roc) + ',' + str(f1score))
```
```
# disable overly verbose tensorflow logging
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'  # or any {'0', '1', '2'}

import tensorflow as tf
import numpy as np

from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.models import Sequential, Model
from tensorflow.keras.layers import Input, Dense, GlobalAveragePooling2D, Dropout, Flatten, Conv2D, MaxPool2D, Reshape
from tensorflow.keras.optimizers import SGD
from tensorflow.keras.callbacks import ModelCheckpoint, TensorBoard
from tensorflow.keras.applications.mobilenet_v2 import MobileNetV2, preprocess_input

# unused for now, to be used for ROC analysis
from sklearn.metrics import roc_curve, auc

allow_growth = True

# the size of the images in the PCAM dataset
IMAGE_SIZE = 96

datagen = ImageDataGenerator(preprocessing_function=preprocess_input)
```

# Initialize the MobileNetV2 model for fine-tuning on the dataset

```
input_shape = (IMAGE_SIZE, IMAGE_SIZE, 3)
input = Input(input_shape)

# get the pretrained model, cut out the top layer
pretrained = MobileNetV2(input_shape=input_shape, include_top=False, weights='imagenet')
pretrained.summary()

# if the pretrained model is to be used as a feature extractor, and not for
# fine-tuning, the weights of the model can be frozen in the following way
# for layer in pretrained.layers:
#     layer.trainable = False

output = pretrained(input)
output = GlobalAveragePooling2D()(output)
output = Dropout(0.5)(output)
output = Dense(1, activation='sigmoid')(output)

model = Model(input, output)

# note the lower lr compared to the cnn example
model.compile(SGD(lr=0.001, momentum=0.95), loss='binary_crossentropy', metrics=['accuracy'])

# print a summary of the model on screen
model.summary()
```

# Get the data generators

```
def get_pcam_generators(base_dir, train_batch_size=32, val_batch_size=32):

    # dataset parameters
    train_path = os.path.join(base_dir, 'train+val', 'train')
    valid_path = os.path.join(base_dir, 'train+val', 'valid')

    # instantiate data generators
    datagen = ImageDataGenerator(preprocessing_function=preprocess_input)

    train_gen = datagen.flow_from_directory(train_path,
                                            target_size=(IMAGE_SIZE, IMAGE_SIZE),
                                            batch_size=train_batch_size,
                                            class_mode='binary')

    val_gen = datagen.flow_from_directory(valid_path,
                                          target_size=(IMAGE_SIZE, IMAGE_SIZE),
                                          batch_size=val_batch_size,
                                          class_mode='binary')

    return train_gen, val_gen


# get the data generators
train_gen, val_gen = get_pcam_generators(r'C:\Users\20173884\Documents\8P361')
```

# Model

```
# save the model and weights
model_name = 'transfer_4.2_ImageNet_model'
model_filepath = model_name + '.json'
weights_filepath = model_name + '_weights.hdf5'

model_json = model.to_json()  # serialize model to JSON
with open(model_filepath, 'w') as json_file:
    json_file.write(model_json)

# define the model checkpoint and Tensorboard callbacks
checkpoint = ModelCheckpoint(weights_filepath, monitor='val_loss', verbose=1, save_best_only=True, mode='min')
tensorboard = TensorBoard(os.path.join('logs', model_name))
callbacks_list = [checkpoint, tensorboard]

# train the model; note that we define "mini-epochs" by dividing the number
# of steps per epoch by 20, so 10 epochs cover half of the data
train_steps = train_gen.n // train_gen.batch_size // 20
val_steps = val_gen.n // val_gen.batch_size // 20

history = model.fit_generator(train_gen,
                              steps_per_epoch=train_steps,
                              validation_data=val_gen,
                              validation_steps=val_steps,
                              epochs=10,
                              callbacks=callbacks_list)
```

### View loss graph

```bash
activate 8p361
cd 'path/where/logs/are'
tensorboard --logdir logs
```
```
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '3'

from tensor2tensor.data_generators import problem
from tensor2tensor.data_generators import text_problems
from tensor2tensor.data_generators import translate
from tensor2tensor.layers import common_attention
from tensor2tensor.layers import modalities
from tensor2tensor.utils import registry
from tensor2tensor import problems
import tensorflow as tf
import logging
import sentencepiece as spm
import transformer_tag

vocab = 'sp10m.cased.t5.model'
sp = spm.SentencePieceProcessor()
sp.Load(vocab)


class Encoder:
    def __init__(self, sp):
        self.sp = sp
        self.vocab_size = sp.GetPieceSize() + 100

    def encode(self, s):
        return self.sp.EncodeAsIds(s)

    def decode(self, ids, strip_extraneous=False):
        return self.sp.DecodeIds(list(ids))


d = [
    {'class': 0, 'Description': 'PAD', 'salah': '', 'betul': ''},
    {'class': 1, 'Description': 'kesambungan subwords', 'salah': '', 'betul': ''},
    {'class': 2, 'Description': 'tiada kesalahan', 'salah': '', 'betul': ''},
    {'class': 3, 'Description': 'kesalahan frasa nama, Perkara yang diterangkan mesti mendahului "penerang"',
     'salah': 'Cili sos', 'betul': 'sos cili'},
    {'class': 4, 'Description': 'kesalahan kata jamak', 'salah': 'mereka-mereka', 'betul': 'mereka'},
    {'class': 5, 'Description': 'kesalahan kata penguat', 'salah': 'sangat tinggi sekali', 'betul': 'sangat tinggi'},
    {'class': 6, 'Description': 'kata adjektif dan imbuhan "ter" tanpa penguat.',
     'salah': 'Sani mendapat markah yang tertinggi sekali.', 'betul': 'Sani mendapat markah yang tertinggi.'},
    {'class': 7, 'Description': 'kesalahan kata hubung',
     'salah': 'Sally sedang membaca bila saya tiba di rumahnya.',
     'betul': 'Sally sedang membaca apabila saya tiba di rumahnya.'},
    {'class': 8, 'Description': 'kesalahan kata bilangan',
     'salah': 'Beribu peniaga tidak membayar cukai pendapatan.',
     'betul': 'Beribu-ribu peniaga tidak membayar cukai pendapatan'},
    {'class': 9, 'Description': 'kesalahan kata sendi',
     'salah': 'Umar telah berpindah daripada sekolah ini bulan lalu.',
     'betul': 'Umar telah berpindah dari sekolah ini bulan lalu.'},
    {'class': 10, 'Description': 'kesalahan penjodoh bilangan',
     'salah': 'Setiap orang pelajar', 'betul': 'Setiap pelajar.'},
    {'class': 11, 'Description': 'kesalahan kata ganti diri',
     'salah': 'Pencuri itu telah ditangkap. Beliau dibawa ke balai polis.',
     'betul': 'Pencuri itu telah ditangkap. Dia dibawa ke balai polis.'},
    {'class': 12, 'Description': 'kesalahan ayat pasif',
     'salah': 'Cerpen itu telah dikarang oleh saya.', 'betul': 'Cerpen itu telah saya karang.'},
    {'class': 13, 'Description': 'kesalahan kata tanya',
     'salah': 'Kamu berasal dari manakah ?', 'betul': 'Kamu berasal dari mana ?'},
    {'class': 14, 'Description': 'kesalahan tanda baca',
     'salah': 'Kamu berasal dari manakah .', 'betul': 'Kamu berasal dari mana ?'},
    {'class': 15, 'Description': 'kesalahan kata kerja tak transitif',
     'salah': 'Dia kata kepada saya', 'betul': 'Dia berkata kepada saya'},
    {'class': 16, 'Description': 'kesalahan kata kerja transitif',
     'salah': 'Dia suka baca buku', 'betul': 'Dia suka membaca buku'},
    {'class': 17, 'Description': 'penggunaan kata yang tidak tepat',
     'salah': 'Tembuk Besar negeri Cina dibina oleh Shih Huang Ti.',
     'betul': 'Tembok Besar negeri Cina dibina oleh Shih Huang Ti'},
]


class Tatabahasa:
    def __init__(self, d):
        self.d = d
        self.kesalahan = {i['Description']: no for no, i in enumerate(self.d)}
        self.reverse_kesalahan = {v: k for k, v in self.kesalahan.items()}
        self.vocab_size = len(self.d)

    def encode(self, s):
        return [self.kesalahan[i] for i in s]

    def decode(self, ids, strip_extraneous=False):
        return [self.reverse_kesalahan[i] for i in ids]


@registry.register_problem
class Grammar(text_problems.Text2TextProblem):
    """grammatical error correction."""

    def feature_encoders(self, data_dir):
        encoder = Encoder(sp)
        t = Tatabahasa(d)
        return {'inputs': encoder, 'targets': encoder, 'targets_error_tag': t}

    def hparams(self, defaults, model_hparams):
        super(Grammar, self).hparams(defaults, model_hparams)
        if 'use_error_tags' not in model_hparams:
            model_hparams.add_hparam('use_error_tags', True)
        if 'middle_prediction' not in model_hparams:
            model_hparams.add_hparam('middle_prediction', False)
        if 'middle_prediction_layer_factor' not in model_hparams:
            model_hparams.add_hparam('middle_prediction_layer_factor', 2)
        if 'ffn_in_prediction_cascade' not in model_hparams:
            model_hparams.add_hparam('ffn_in_prediction_cascade', 1)
        if 'error_tag_embed_size' not in model_hparams:
            model_hparams.add_hparam('error_tag_embed_size', 12)
        if model_hparams.use_error_tags:
            defaults.modality['targets_error_tag'] = modalities.ModalityType.SYMBOL
            error_tag_vocab_size = self._encoders['targets_error_tag'].vocab_size
            defaults.vocab_size['targets_error_tag'] = error_tag_vocab_size

    def example_reading_spec(self):
        data_fields, _ = super(Grammar, self).example_reading_spec()
        data_fields['targets_error_tag'] = tf.VarLenFeature(tf.int64)
        return data_fields, None

    @property
    def approx_vocab_size(self):
        return 32100

    @property
    def is_generate_per_split(self):
        return False

    @property
    def dataset_splits(self):
        return [
            {'split': problem.DatasetSplit.TRAIN, 'shards': 200},
            {'split': problem.DatasetSplit.EVAL, 'shards': 1},
        ]


DATA_DIR = os.path.expanduser('t2t-tatabahasa/data')
TMP_DIR = os.path.expanduser('t2t-tatabahasa/tmp')
TRAIN_DIR = os.path.expanduser('t2t-tatabahasa/train-base')

PROBLEM = 'grammar'
t2t_problem = problems.problem(PROBLEM)

MODEL = 'transformer_tag'
HPARAMS = 'transformer_base'

from tensor2tensor.utils.trainer_lib import create_run_config, create_experiment
from tensor2tensor.utils.trainer_lib import create_hparams
from tensor2tensor.utils import trainer_lib

X = tf.placeholder(tf.int32, [None, None], name='x_placeholder')
Y = tf.placeholder(tf.int32, [None, None], name='y_placeholder')
targets_error_tag = tf.placeholder(tf.int32, [None, None], 'error_placeholder')

X_seq_len = tf.count_nonzero(X, 1, dtype=tf.int32)
maxlen_decode = tf.reduce_max(X_seq_len)

x = tf.expand_dims(tf.expand_dims(X, -1), -1)
y = tf.expand_dims(tf.expand_dims(Y, -1), -1)
targets_error_tag_ = tf.expand_dims(tf.expand_dims(targets_error_tag, -1), -1)

features = {
    'inputs': x,
    'targets': y,
    'target_space_id': tf.constant(1, dtype=tf.int32),
    'targets_error_tag': targets_error_tag,
}

Modes = tf.estimator.ModeKeys
hparams = trainer_lib.create_hparams(HPARAMS, data_dir=DATA_DIR, problem_name=PROBLEM)
hparams.filter_size = 3072
hparams.hidden_size = 768
hparams.num_heads = 12
hparams.num_hidden_layers = 8
hparams.vocab_divisor = 128
hparams.dropout = 0.1
hparams.max_length = 256

# LM
hparams.label_smoothing = 0.0
hparams.shared_embedding_and_softmax_weights = False
hparams.eval_drop_long_sequences = True
hparams.multiproblem_mixing_schedule = 'pretrain'

# tpu
hparams.symbol_modality_num_shards = 1
hparams.attention_dropout_broadcast_dims = '0,1'
hparams.relu_dropout_broadcast_dims = '1'
hparams.layer_prepostprocess_dropout_broadcast_dims = '1'

model = registry.model(MODEL)(hparams, Modes.PREDICT)

# logits = model(features)
# sess = tf.InteractiveSession()
# sess.run(tf.global_variables_initializer())
# l = sess.run(logits, feed_dict={X: [[10, 10, 10, 10, 10, 1], [10, 10, 10, 10, 10, 1]],
#                                 Y: [[10, 10, 10, 10, 10, 1], [10, 10, 10, 10, 10, 1]],
#                                 targets_error_tag: [[10, 10, 10, 10, 10, 1],
#                                                     [10, 10, 10, 10, 10, 1]]})

features = {
    'inputs': x,
    'target_space_id': tf.constant(1, dtype=tf.int32),
}
with tf.variable_scope(tf.get_variable_scope(), reuse=False):
    fast_result = model._greedy_infer(features, maxlen_decode)

result_seq = tf.identity(fast_result['outputs'], name='greedy')
result_tag = tf.identity(fast_result['outputs_tag'], name='tag_greedy')

from tensor2tensor.layers import common_layers


def accuracy_per_sequence(predictions, targets, weights_fn=common_layers.weights_nonzero):
    padded_predictions, padded_labels = common_layers.pad_with_zeros(predictions, targets)
    weights = weights_fn(padded_labels)
    padded_labels = tf.to_int32(padded_labels)
    padded_predictions = tf.to_int32(padded_predictions)
    not_correct = tf.to_float(tf.not_equal(padded_predictions, padded_labels)) * weights
    axis = list(range(1, len(padded_predictions.get_shape())))
    correct_seq = 1.0 - tf.minimum(1.0, tf.reduce_sum(not_correct, axis=axis))
    return tf.reduce_mean(correct_seq)


def padded_accuracy(predictions, targets, weights_fn=common_layers.weights_nonzero):
    padded_predictions, padded_labels = common_layers.pad_with_zeros(predictions, targets)
    weights = weights_fn(padded_labels)
    padded_labels = tf.to_int32(padded_labels)
    padded_predictions = tf.to_int32(padded_predictions)
    n = tf.to_float(tf.equal(padded_predictions, padded_labels)) * weights
    d = tf.reduce_sum(weights)
    return tf.reduce_sum(n) / d


acc_seq = padded_accuracy(result_seq, Y)
acc_tag = padded_accuracy(result_tag, targets_error_tag)

ckpt_path = tf.train.latest_checkpoint(os.path.join(TRAIN_DIR))
ckpt_path

sess = tf.InteractiveSession()
sess.run(tf.global_variables_initializer())
var_lists = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES)
saver = tf.train.Saver(var_list=var_lists)
saver.restore(sess, ckpt_path)

import pickle

with open('../pure-text/dataset-tatabahasa.pkl', 'rb') as fopen:
    data = pickle.load(fopen)

encoder = Encoder(sp)


def get_xy(row, encoder):
    x, y, tag = [], [], []
    for i in range(len(row[0])):
        t = encoder.encode(row[0][i][0])
        y.extend(t)
        t = encoder.encode(row[1][i][0])
        x.extend(t)
        tag.extend([row[1][i][1]] * len(t))
    # EOS
    x.append(1)
    y.append(1)
    tag.append(0)
    return x, y, tag


import numpy as np

x, y, tag = get_xy(data[10], encoder)
e = encoder.encode('Pilih mana jurusan yang sesuai dengan kebolehan anda dalam peperiksaan Sijil Pelajaran Malaysia semasa memohon kemasukan ke institusi pengajian tinggi.') + [1]
r = sess.run(fast_result, feed_dict={X: [e]})
r['outputs_tag']
encoder.decode(r['outputs'][0].tolist())
encoder.decode(x)
encoder.decode(y)

hparams.problem.example_reading_spec()[0]


def parse(serialized_example):
    data_fields = hparams.problem.example_reading_spec()[0]
    features = tf.parse_single_example(serialized_example, features=data_fields)
    for k in features.keys():
        features[k] = features[k].values
    return features


dataset = tf.data.TFRecordDataset('t2t-tatabahasa/data/grammar-dev-00000-of-00001')
dataset = dataset.map(parse, num_parallel_calls=32)
dataset = dataset.padded_batch(
    32,
    padded_shapes={
        'inputs': tf.TensorShape([None]),
        'targets': tf.TensorShape([None]),
        'targets_error_tag': tf.TensorShape([None]),
    },
    padding_values={
        'inputs': tf.constant(0, dtype=tf.int64),
        'targets': tf.constant(0, dtype=tf.int64),
        'targets_error_tag': tf.constant(0, dtype=tf.int64),
    },
)
dataset = dataset.make_one_shot_iterator().get_next()
dataset

# evaluate on the dev set; `d_batch` (renamed from `d`, which shadowed the
# error-class list above) holds one padded batch per iteration
seqs, tags = [], []
index = 0
while True:
    try:
        d_batch = sess.run(dataset)
        s, t = sess.run([acc_seq, acc_tag],
                        feed_dict={X: d_batch['inputs'],
                                   Y: d_batch['targets'],
                                   targets_error_tag: d_batch['targets_error_tag']})
        seqs.append(s)
        tags.append(t)
        print(f'done {index}')
        index += 1
    except tf.errors.OutOfRangeError:
        break

np.mean(seqs), np.mean(tags)

saver = tf.train.Saver(tf.trainable_variables())
saver.save(sess, 'transformertag-base/model.ckpt')

strings = ','.join([
    n.name
    for n in tf.get_default_graph().as_graph_def().node
    if ('Variable' in n.op or 'Placeholder' in n.name or 'greedy' in n.name
        or 'tag_greedy' in n.name or 'x_placeholder' in n.name or 'self/Softmax' in n.name)
    and 'adam' not in n.name
    and 'beta' not in n.name
    and 'global_step' not in n.name
    and 'modality' not in n.name
    and 'Assign' not in n.name
])
strings.split(',')


def freeze_graph(model_dir, output_node_names):
    if not tf.gfile.Exists(model_dir):
        raise AssertionError(
            "Export directory doesn't exist. Please specify an export "
            'directory: %s' % model_dir
        )
    checkpoint = tf.train.get_checkpoint_state(model_dir)
    input_checkpoint = checkpoint.model_checkpoint_path
    absolute_model_dir = '/'.join(input_checkpoint.split('/')[:-1])
    output_graph = absolute_model_dir + '/frozen_model.pb'
    clear_devices = True
    with tf.Session(graph=tf.Graph()) as sess:
        saver = tf.train.import_meta_graph(input_checkpoint + '.meta', clear_devices=clear_devices)
        saver.restore(sess, input_checkpoint)
        output_graph_def = tf.graph_util.convert_variables_to_constants(
            sess,
            tf.get_default_graph().as_graph_def(),
            output_node_names.split(','),
        )
        with tf.gfile.GFile(output_graph, 'wb') as f:
            f.write(output_graph_def.SerializeToString())
        print('%d ops in the final graph.' % len(output_graph_def.node))


freeze_graph('transformertag-base', strings)


def load_graph(frozen_graph_filename):
    with tf.gfile.GFile(frozen_graph_filename, 'rb') as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
    with tf.Graph().as_default() as graph:
        tf.import_graph_def(graph_def)
    return graph


g = load_graph('transformertag-base/frozen_model.pb')
x = g.get_tensor_by_name('import/x_placeholder:0')
greedy = g.get_tensor_by_name('import/greedy:0')
tag_greedy = g.get_tensor_by_name('import/tag_greedy:0')
test_sess = tf.InteractiveSession(graph=g)
test_sess.run([greedy, tag_greedy], feed_dict={x: d_batch['inputs']})

import tensorflow as tf
from tensorflow.tools.graph_transforms import TransformGraph
from glob import glob
tf.set_random_seed(0)

import tensorflow_text
import tf_sentencepiece

transforms = ['add_default_attributes',
              'remove_nodes(op=Identity, op=CheckNumerics, op=Dropout)',
              'fold_constants(ignore_errors=true)',
              'fold_batch_norms',
              'fold_old_batch_norms',
              'quantize_weights(fallback_min=-10, fallback_max=10)',
              'strip_unused_nodes',
              'sort_by_execution_order']

pb = 'transformertag-base/frozen_model.pb'

input_graph_def = tf.GraphDef()
with tf.gfile.FastGFile(pb, 'rb') as f:
    input_graph_def.ParseFromString(f.read())

transformed_graph_def = TransformGraph(input_graph_def,
                                       ['x_placeholder'],
                                       ['greedy', 'tag_greedy'],
                                       transforms)

with tf.gfile.GFile(f'{pb}.quantized', 'wb') as f:
    f.write(transformed_graph_def.SerializeToString())

g = load_graph('transformertag-base/frozen_model.pb.quantized')
x = g.get_tensor_by_name('import/x_placeholder:0')
greedy = g.get_tensor_by_name('import/greedy:0')
tag_greedy = g.get_tensor_by_name('import/tag_greedy:0')
test_sess = tf.InteractiveSession(graph=g)
test_sess.run([greedy, tag_greedy], feed_dict={x: d_batch['inputs']})
```
# Objective: Classify Amazon food reviews using Random Forest Classifier.

We'll do the following exercises in this notebook

* Load the data stored in the format
    1. BoW
    2. Tfidf
    3. Avg. W2V
    4. Tfidf weighted W2V
* Divide the data into cross validation sets and find the optimal parameters n_estimators and max_depth using GridSearchCV
* Observe the Cross Validation score for **each** combination of *n_estimators* and *max_depth* in Cross Validation
* Plot the confusion matrix and calculate Precision, Recall, FPR, TNR, FNR.

**Brief Summary of Random Forest Classifier**

```
from IPython.display import Image
Image(r'C:\Users\ucanr\Dropbox\AAIC\assignments mandatory\9. RF and GBDT\RF_summary.jpg')

# To suppress the warnings as they make the notebook less presentable.
import sys
import warnings
if not sys.warnoptions:
    warnings.simplefilter("ignore")
```

Import the necessary libraries.

```
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sbn
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import recall_score, precision_score, confusion_matrix, accuracy_score
from sklearn.model_selection import TimeSeriesSplit, GridSearchCV, cross_val_score, RandomizedSearchCV
from sklearn.ensemble import RandomForestClassifier
import pickle
import numpy as np
from sklearn.svm import SVC
```

This Jupyter notebook extension notifies you when a cell finishes its execution!

```
%load_ext jupyternotify
```

## Important parameters of Random Forest

* *n_estimators* : This parameter specifies how many base learners to use. Generally, the more estimators, the better the results, due to the aggregation.
* *max_depth* : This determines how deep the decision trees will be built. For RF, training deep trees is preferred since the overfitting is nullified in the aggregation phase.

Load the target variable y of the train and test sets. Note that the entire dataset is being used: all 350k reviews. The dataset is divided into train and test with ratio 80:20 respectively.
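To build intuition for these two parameters before the full grid search on the review vectors, here is a minimal sketch on a small synthetic dataset (the dataset and parameter values here are illustrative only, not the ones used in this notebook):

```python
# Illustrative sketch: how n_estimators and max_depth affect a Random Forest.
# make_classification generates a synthetic binary problem; deep, numerous
# trees fit the training data almost perfectly while test accuracy saturates.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

for n in (10, 100):
    for depth in (3, 31):
        clf = RandomForestClassifier(n_estimators=n, max_depth=depth, random_state=0)
        clf.fit(X_tr, y_tr)
        print(f'n_estimators={n:3d} max_depth={depth:2d} '
              f'train={clf.score(X_tr, y_tr):.3f} test={clf.score(X_te, y_te):.3f}')
```

Comparing the printed train and test scores for the shallow/deep and small/large forests previews the pattern the heatmaps below visualize at scale.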
```
# f = open(r'D:\data_science\datasets\amazon2\y_train_full80_20.pkl', 'rb')
f = open('/home/ucanreachtvk/data/y_train_full80_20.pkl', 'rb')
y_train = pickle.load(f)
f.close()
print('The datatype of y_train is : {}'.format(type(y_train)))
print('The shape of y_train is : {}'.format(y_train.shape))

# f = open(r'D:\data_science\datasets\amazon2\y_test_full80_20.pkl', 'rb')
f = open('/home/ucanreachtvk/data/y_test_full80_20.pkl', 'rb')
y_test = pickle.load(f)
f.close()
print('The datatype of y_test is : {}'.format(type(y_test)))
print('The shape of y_test is : {}'.format(y_test.shape))
```

## Bag of Words

I had saved the trained BoW model and the transformed data on disk. Let's load it.

```
# f = open(r'D:\data_science\datasets\amazon2\X_train_transformed_bow_full_nparray.pkl', 'rb')
f = open('/home/ucanreachtvk/data/X_train_transformed_bow_full_nparray.pkl', 'rb')
X_train_transformed_bow = pickle.load(f)
f.close()
print('The datatype of X_train_transformed_bow is : {}'.format(type(X_train_transformed_bow)))
print('The shape of X_train_transformed_bow is : {}'.format(X_train_transformed_bow.shape))
```

There are 64221 features in the BoW representation. Load the test data too.

```
# f = open(r'D:\data_science\datasets\amazon2\X_test_transformed_bow_full_nparray.pkl', 'rb')
f = open('/home/ucanreachtvk/data/X_test_transformed_bow_full_nparray.pkl', 'rb')
X_test_transformed_bow = pickle.load(f)
f.close()
print('The datatype of X_test_transformed_bow is : {}'.format(type(X_test_transformed_bow)))
print('The shape of X_test_transformed_bow is : {}'.format(X_test_transformed_bow.shape))
```

Count the number of non-zero elements in the matrix.

```
X_train_transformed_bow.count_nonzero()
```

## Feature scaling

Since the base learner of RF is a decision tree, it doesn't really need the data to be standardized. But for the sake of consistency in the workflow, let's do it.
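The scaling-invariance claim is easy to demonstrate: a tree splits on thresholds like `x <= t`, and a strictly monotonic rescaling of a feature maps every threshold to an equivalent one, so the fitted partitions, and hence the predictions, are unchanged. A minimal sketch on synthetic data (illustrative only, not the review vectors used below):

```python
# Sketch: tree-based models are invariant to feature scaling.
# The same tree (same random_state) trained on raw and standardized copies
# of a synthetic dataset yields identical predictions on the training data.
from sklearn.datasets import make_classification
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=1)
X_std = StandardScaler().fit_transform(X)

raw = DecisionTreeClassifier(random_state=1).fit(X, y).predict(X)
std = DecisionTreeClassifier(random_state=1).fit(X_std, y).predict(X_std)
print((raw == std).all())  # same predictions with or without scaling
```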
```
scaler = StandardScaler(with_mean=False)
X_train_transformed_bow_std = scaler.fit_transform(X_train_transformed_bow)
X_test_transformed_bow_std = scaler.transform(X_test_transformed_bow)

X_train_transformed_bow_std.shape
X_test_transformed_bow_std.shape
y_train.shape
```

## Some Functions

Let's define some functions that we'll call repeatedly in this notebook.

1. **n_depth_score** : Returns a dataframe containing the n_estimators, max_depth and accuracy score tried by GridSearch
2. **give_me_ratios** : To compute ratios such as Precision, Recall, TNR, FPR, FNR.
3. **plot_confusion_matrix** : As the name says.
4. **GridSearch** : Create time-based cross validation splits using TimeSeriesSplit() and create a gridsearch object for a Random Forest classifier.
5. **heatmap** : For each pair of (n_estimators, max_depth) values in GridSearch, it will plot the score for train and test data during cross validation.

```
def n_depth_score(cv_results_):
    D = {'n': [], 'depth': [], 'score': []}
    for n in [10, 30, 50, 100, 150, 175]:
        for depth in [3, 5, 9, 13, 17, 23, 31]:
            d = {'n_estimators': n, 'max_depth': depth}
            flag = True
            try:
                ind = cv_results_['params'].index(d)
            except:
                flag = False
            D['n'].append(n)
            D['depth'].append(depth)
            if flag == False:
                D['score'].append(-1)
            else:
                D['score'].append(cv_results_['mean_train_score'][ind])
    return pd.DataFrame.from_dict(D)


def give_me_ratios(X_train, y_train, X_test, y_test, vector_type, table, clf, best_n, best_depth):
    cm_train = confusion_matrix(y_train, clf.predict(X_train))
    tn, fp, fn, tp = cm_train.ravel()
    recall_train = round(tp / (tp + fn), 2)
    precision_train = round(tp / (tp + fp), 2)
    tnr_train = round(tn / (tn + fp), 2)
    fpr_train = round(fp / (fp + tn), 2)
    fnr_train = round(fn / (fn + tp), 2)
    accuracy_train = round((tp + tn) / (tp + tn + fp + fn), 2)

    cm_test = confusion_matrix(y_test, clf.predict(X_test))
    tn, fp, fn, tp = cm_test.ravel()
    recall_test = round(tp / (tp + fn), 2)
    precision_test = round(tp / (tp + fp), 2)
    tnr_test = round(tn / (tn + fp), 2)
    fpr_test = round(fp / (fp + tn), 2)
    fnr_test = round(fn / (fn + tp), 2)
    accuracy_test = round((tp + tn) / (tp + tn + fp + fn), 2)

    table.field_names = ['Vector Type', 'Data Set', 'Best n_estimators', 'Best max_depth',
                         'Precision', 'Recall', 'TNR', 'FPR', 'FNR', 'Accuracy']
    table.add_row([vector_type, 'Train', best_n, best_depth, precision_train, recall_train,
                   tnr_train, fpr_train, fnr_train, accuracy_train])
    table.add_row([vector_type, 'Test', best_n, best_depth, precision_test, recall_test,
                   tnr_test, fpr_test, fnr_test, accuracy_test])
    print(table)
    return cm_train, cm_test


def plot_confusion_matrix(cm_train, cm_test, title):
    import pandas as pd
    plt.style.use('fivethirtyeight')
    plt.figure(figsize=(15, 6)).suptitle(title, fontsize=15)

    plt.subplot(1, 2, 1)
    df_cm = pd.DataFrame(cm_train, range(2), range(2))
    sbn.heatmap(df_cm, annot=True, annot_kws={"size": 16}, cbar=False, fmt='d',
                xticklabels=['Negative', 'Positive'], yticklabels=['Negative', 'Positive'],
                cmap="YlGnBu")
    plt.xticks(fontsize=13)
    plt.yticks(fontsize=13)
    plt.xlabel('Predicted Class', fontsize=15)
    plt.ylabel('Actual Class', fontsize=15)
    plt.title('Train Data', fontsize=14)

    plt.subplot(1, 2, 2)
    df_cm = pd.DataFrame(cm_test, range(2), range(2))
    sbn.heatmap(df_cm, annot=True, annot_kws={"size": 16}, cbar=False, fmt='d',
                xticklabels=['Negative', 'Positive'], yticklabels=['Negative', 'Positive'],
                cmap="YlGnBu")
    plt.xticks(fontsize=13)
    plt.yticks(fontsize=13)
    plt.xlabel('Predicted Class', fontsize=15)
    plt.title('Test Data', fontsize=14)
    plt.tight_layout()


def GridSearch(X_train):
    tscv = TimeSeriesSplit(n_splits=7)
    my_cv = tscv.split(X_train)
    rfc = RandomForestClassifier(class_weight='balanced')
    hyp_par = {'max_depth': [3, 5, 9, 13, 17, 23, 31],
               'n_estimators': [10, 30, 50, 100, 150, 175]}
    clf = GridSearchCV(estimator=rfc, cv=my_cv, param_grid=hyp_par, n_jobs=6,
                       return_train_score=True)
    return clf


def heatmap(df, vector_type, style):
    plt.figure(figsize=(15, 6))
    plt.style.use(style)
    plt.subplot(1, 1, 1)
    sbn.heatmap(data=df.pivot('n', 'depth', 'score'), annot=True, linewidth=0.5, cmap="YlGnBu")
    plt.title('{} | Training/CV Accuracy'.format(vector_type), fontsize=15)
    plt.xlabel('Max Depth', fontsize=14)
    plt.ylabel('# estimators', fontsize=14)
    plt.xticks(fontsize=13)
    plt.yticks(fontsize=13)
    plt.tight_layout()
    plt.show()
```

**BoW | GridSearchCV**

Get the classifier by calling the GridSearch function.

```
clf = GridSearch(X_train_transformed_bow_std)
```

Train the model

```
%%notify
%%time
clf.fit(X_train_transformed_bow_std, y_train)
```

**Heatmap of scores for each pair of (n_estimators, max_depth) found during GridSearch.**

```
heatmap(n_depth_score(clf.cv_results_), vector_type='BoW', style='bmh')
```

Import prettytable to summarize the results in a table

```
from prettytable import PrettyTable
table = PrettyTable()
```

**Ratios | BoW**

```
cm_bow_train, cm_bow_test = give_me_ratios(X_train_transformed_bow_std, y_train,
                                           X_test_transformed_bow_std, y_test,
                                           'Bag of Words', table, clf,
                                           clf.best_params_['n_estimators'],
                                           clf.best_params_['max_depth'])
```

**Confusion matrix | BoW**

```
plot_confusion_matrix(cm_bow_train, cm_bow_test, title="BoW | Grid Search")
```

## Tfidf

In this section, we'll apply Random Forests on reviews represented in the Tfidf format. Load the transformed train and test sets.
```
# f = open(r'D:\data_science\datasets\amazon2\X_train_transformed_tfidf_full_nparray.pkl', 'rb')
f = open('/home/ucanreachtvk/data/X_train_transformed_tfidf_full_nparray.pkl', 'rb')
X_train_transformed_tfidf = pickle.load(f)
f.close()
print('The datatype of X_train_transformed_tfidf is : {}'.format(type(X_train_transformed_tfidf)))
print('The shape of X_train_transformed_tfidf is : {}'.format(X_train_transformed_tfidf.shape))

# f = open(r'D:\data_science\datasets\amazon2\X_test_transformed_tfidf_full_nparray.pkl', 'rb')
f = open('/home/ucanreachtvk/data/X_test_transformed_tfidf_full_nparray.pkl', 'rb')
X_test_transformed_tfidf = pickle.load(f)
f.close()
print('The datatype of X_test_transformed_tfidf is : {}'.format(type(X_test_transformed_tfidf)))
print('The shape of X_test_transformed_tfidf is : {}'.format(X_test_transformed_tfidf.shape))
```

Standardize the data

```
scaler = StandardScaler(with_mean=False)
X_train_transformed_tfidf_std = scaler.fit_transform(X_train_transformed_tfidf)
X_test_transformed_tfidf_std = scaler.transform(X_test_transformed_tfidf)
```

**GridSearch | Tfidf**

```
clf = GridSearch(X_train_transformed_tfidf_std)
```

Train the model

```
%%notify
%%time
clf.fit(X_train_transformed_tfidf_std, y_train)
```

**Score Heatmap | Tfidf**

```
heatmap(n_depth_score(clf.cv_results_), vector_type='Tfidf', style='ggplot')
```

**Ratios | Tfidf**

```
%%notify
cm_tfidf_train, cm_tfidf_test = give_me_ratios(X_train_transformed_tfidf_std, y_train,
                                               X_test_transformed_tfidf_std, y_test,
                                               'Tfidf', table, clf,
                                               clf.best_params_['n_estimators'],
                                               clf.best_params_['max_depth'])
```

**Confusion matrix | Tfidf**

```
plot_confusion_matrix(cm_tfidf_train, cm_tfidf_test, title="Tfidf | Grid Search")
```

## Avg W2V

In this section, we'll apply Random Forests on data represented in the avg. W2V format. Load the train and test data stored on disk.
``` # f = open(r'D:\data_science\datasets\amazon2\X_train_transformed_avgW2V_full80_20_nparray.pkl', 'rb') f = open('/home/ucanreachtvk/data/X_train_transformed_avgW2V_full80_20_nparray.pkl', 'rb') X_train_transformed_avgW2V = pickle.load(f) f.close() print('The datatype of X_train_transformed_avgW2V is : {}'.format(type(X_train_transformed_avgW2V))) print('The shape of X_train_transformed_avgW2V is : {}'.format(X_train_transformed_avgW2V.shape)) # f = open(r'D:\data_science\datasets\amazon2\X_test_transformed_avgW2V_full80_20_nparray.pkl', 'rb') f = open('/home/ucanreachtvk/data/X_test_transformed_avgW2V_full80_20_nparray.pkl', 'rb') X_test_transformed_avgW2V = pickle.load(f) f.close() print('The datatype of X_test_transformed_avgW2V is : {}'.format(type(X_test_transformed_avgW2V))) print('The shape of X_test_transformed_avgW2V is : {}'.format(X_test_transformed_avgW2V.shape)) ``` Standardize the data ``` scaler = StandardScaler(with_mean = True) X_train_transformed_avgW2V_std = scaler.fit_transform(X_train_transformed_avgW2V) X_test_transformed_avgW2V_std = scaler.transform(X_test_transformed_avgW2V) ``` **GridSearch | Avg. W2V** ``` clf = GridSearch(X_train_transformed_avgW2V_std) %%notify %%time clf.fit(X_train_transformed_avgW2V_std, y_train) ``` **Score Heatmap | Avg. W2V** ``` heatmap(n_depth_score(clf.cv_results_), vector_type='Avg. W2V', style = 'ggplot') ``` **Ratios | Avg. W2V** ``` %%notify cm_w2v_train, cm_w2v_test = give_me_ratios(X_train_transformed_avgW2V_std, y_train, X_test_transformed_avgW2V_std, y_test, 'Avg. W2V', table, clf, clf.best_params_['n_estimators'],clf.best_params_['max_depth']) ``` **Confusion Matrix | Avg. W2V** ``` plot_confusion_matrix(cm_w2v_train, cm_w2v_test, title="Avg. W2V | Grid Search") ``` ## Tfidf weighted W2V In this last section, we apply Random Forests on vectors represented in the form of Tfidf weighted W2V.
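Tfidf weighted W2V replaces the plain average with a Tfidf-weighted one: each word vector is scaled by that word's Tfidf weight and the sum is normalized by the total weight. A sketch under the same toy assumptions — all values are illustrative, the real ones coming from a trained Word2Vec model and a fitted TfidfVectorizer:

```python
import numpy as np

# Illustrative word vectors and Tfidf weights
embeddings = {"great": np.array([1.0, 0.0]), "food": np.array([0.0, 1.0])}
tfidf_weight = {"great": 3.0, "food": 1.0}

def tfidf_w2v(words, emb, weights, dim=2):
    # Weighted mean: sum_i(w_i * v_i) / sum_i(w_i)
    num, den = np.zeros(dim), 0.0
    for w in words:
        if w in emb and w in weights:
            num += weights[w] * emb[w]
            den += weights[w]
    return num / den if den else num

vec = tfidf_w2v(["great", "food"], embeddings, tfidf_weight)
print(vec)  # -> [0.75 0.25]
```

Rarer (higher-Tfidf) words therefore pull the review vector more strongly toward their own embedding than frequent ones do.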
``` # f = open(r'D:\data_science\datasets\amazon2\X_train_transformed_TfidfWeightedW2V_full80_20_nparray.pkl', 'rb') f = open('/home/ucanreachtvk/data/X_train_transformed_TfidfWeightedW2V_full80_20_nparray.pkl', 'rb') X_train_transformed_TfidfW2V = pickle.load(f) f.close() print('The datatype of X_train_transformed_TfidfW2V is : {}'.format(type(X_train_transformed_TfidfW2V))) print('The shape of X_train_transformed_TfidfW2V is : {}'.format(X_train_transformed_TfidfW2V.shape)) # f = open(r'D:\data_science\datasets\amazon2\X_test_transformed_TfidfWeightedW2V_full80_20_nparray.pkl', 'rb') f = open('/home/ucanreachtvk/data/X_test_transformed_TfidfWeightedW2V_full80_20_nparray.pkl', 'rb') X_test_transformed_TfidfW2V = pickle.load(f) f.close() print('The datatype of X_test_transformed_TfidfW2V is : {}'.format(type(X_test_transformed_TfidfW2V))) print('The shape of X_test_transformed_TfidfW2V is : {}'.format(X_test_transformed_TfidfW2V.shape)) ``` Standardize data ``` scaler = StandardScaler(with_mean = True) X_train_transformed_TfidfW2V_std = scaler.fit_transform(X_train_transformed_TfidfW2V) X_test_transformed_TfidfW2V_std = scaler.transform(X_test_transformed_TfidfW2V) ``` **GridSearch | Tfidf Weighted W2V** ``` clf = GridSearch(X_train_transformed_TfidfW2V_std) %%notify %%time clf.fit(X_train_transformed_TfidfW2V_std, y_train) ``` **Score Heatmap | Tfidf wt. W2V** ``` heatmap(n_depth_score(clf.cv_results_), vector_type='Tfidf wt. W2V', style = 'ggplot') ``` **Ratios | Tfidf wt. W2V** ``` cm_tfidfw2v_train, cm_tfidfw2v_test = give_me_ratios(X_train_transformed_TfidfW2V_std, y_train, X_test_transformed_TfidfW2V_std, y_test, 'Tfidf wt. W2V', table, clf, clf.best_params_['n_estimators'],clf.best_params_['max_depth']) ``` **Confusion Matrix | Tfidf wt. W2V** ``` plot_confusion_matrix(cm_tfidfw2v_train, cm_tfidfw2v_test, title="Tfidf wt. W2V | Grid Search") ``` ### Conclusion: * We applied Random Forests on Amazon food reviews for various vector representations.
* Found that, as expected, the higher the number of estimators and max_depth, the better the performance of the model. * Plotted the confusion matrices for the train and test data and also calculated several important ratios based on them, such as Precision, Recall, FNR, etc.
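The ratios summarized above follow directly from the confusion-matrix counts; a small sketch with made-up counts (the real counts come from the `give_me_ratios` helper, and these numbers are purely illustrative):

```python
# Hypothetical confusion-matrix counts for a binary classifier
tp, fp, tn, fn = 80, 10, 95, 15

precision = tp / (tp + fp)  # of predicted positives, fraction truly positive
recall    = tp / (tp + fn)  # of actual positives, fraction recovered (TPR)
fnr       = fn / (fn + tp)  # miss rate; always equals 1 - recall
fpr       = fp / (fp + tn)  # false-alarm rate among actual negatives

print(round(precision, 3), round(recall, 3), round(fnr, 3), round(fpr, 3))
```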
# Preprocess "ROC Stories" for Story Completion ``` %load_ext autoreload %autoreload 2 %matplotlib inline import os import glob import pandas as pd DATAPATH = '/path/to/ROCStories' ROCstory_spring2016 = pd.read_csv(os.path.join(DATAPATH, "ROCStories__spring2016 - ROCStories_spring2016.csv")) ROCstory_winter2017 = pd.read_csv(os.path.join(DATAPATH, "ROCStories_winter2017 - ROCStories_winter2017.csv")) ROCstory_train = pd.concat([ROCstory_spring2016, ROCstory_winter2017]) len(ROCstory_train["storyid"].unique()) stories = ROCstory_train.loc[:, "sentence1":"sentence5"].values ``` ## Train, Dev, Test ``` from sklearn.model_selection import train_test_split train_and_dev, test_stories = train_test_split(stories, test_size=0.1) train_stories, dev_stories = train_test_split(train_and_dev, test_size=1/9) len(train_stories), len(dev_stories), len(test_stories) ``` ### dev ``` import numpy as np np.random.seed(1234) dev_missing_indexes = np.random.randint(low=0, high=5, size=len(dev_stories)) dev_stories_with_missing = [] for st, mi in zip(dev_stories, dev_missing_indexes): missing_sentence = st[mi] remain_sentences = np.delete(st, mi) dev_stories_with_missing.append([remain_sentences[0], remain_sentences[1], remain_sentences[2], remain_sentences[3], mi, missing_sentence]) dev_df = pd.DataFrame(dev_stories_with_missing, columns=['stories_with_missing_sentence1', 'stories_with_missing_sentence2', 'stories_with_missing_sentence3', 'stories_with_missing_sentence4', 'missing_id', 'missing_sentence']) dev_df.to_csv("./data/rocstories_completion_dev.csv", index=False) ``` ### test ``` test_missing_indexes = np.random.randint(low=0, high=5, size=len(test_stories)) test_stories_with_missing = [] for st, mi in zip(test_stories, test_missing_indexes): missing_sentence = st[mi] remain_sentences = np.delete(st, mi) test_stories_with_missing.append([remain_sentences[0], remain_sentences[1], remain_sentences[2], remain_sentences[3], mi, missing_sentence]) test_df = 
pd.DataFrame(test_stories_with_missing, columns=['stories_with_missing_sentence1', 'stories_with_missing_sentence2', 'stories_with_missing_sentence3', 'stories_with_missing_sentence4', 'missing_id', 'missing_sentence']) test_df.to_csv("./data/rocstories_completion_test.csv", index=False) ``` ### train ``` train_df = pd.DataFrame(train_stories, columns=['sentence1', 'sentence2', 'sentence3', 'sentence4', 'sentence5']) train_df.to_csv("./data/rocstories_completion_train.csv", index=False) ``` ## load saved data ``` train_df2 = pd.read_csv("./data/rocstories_completion_train.csv") # train_df2.head() dev_df2 = pd.read_csv("./data/rocstories_completion_dev.csv") # dev_df2.head() test_df2 = pd.read_csv("./data/rocstories_completion_test.csv") # test_df2.head() dev_df2.missing_id.value_counts() test_df2.missing_id.value_counts() ``` ### mini size dataset ``` train_mini, train_else = train_test_split(train_df, test_size=0.9) len(train_mini) train_mini.to_csv("./data/rocstories_completion_train_mini.csv", index=False) dev_mini, dev_else = train_test_split(dev_df, test_size=0.9) len(dev_mini) dev_mini.to_csv("./data/rocstories_completion_dev_mini.csv", index=False) ```
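The two-step split used above (first `test_size=0.1`, then `test_size=1/9` of the remainder) is what produces an 80/10/10 train/dev/test partition, since 0.9 × 1/9 = 0.1 of the total. A quick check of that arithmetic on a toy array (not the real stories; `random_state` added here only for reproducibility):

```python
import numpy as np
from sklearn.model_selection import train_test_split

data = np.arange(1000)

# 10% for test, then 1/9 of the remaining 90% for dev -> 80/10/10 overall
train_and_dev, test = train_test_split(data, test_size=0.1, random_state=0)
train, dev = train_test_split(train_and_dev, test_size=1/9, random_state=0)

print(len(train), len(dev), len(test))  # -> 800 100 100
```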
``` from IPython.core.display import display, HTML display(HTML("<style>.container { width:85% !important; }</style>")) import os import time import numpy as np import pandas as pd from os import listdir from io import BytesIO import requests import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers,models,utils from tensorflow.keras.layers import Dense,Flatten from tensorflow.keras.preprocessing.image import ImageDataGenerator from tensorflow.keras.callbacks import EarlyStopping from scipy import stats from sklearn import preprocessing from sklearn.metrics import confusion_matrix from sklearn.metrics import roc_curve, auc import PIL from PIL import Image import seaborn as sns from matplotlib.pyplot import imshow import matplotlib.pyplot as plt if tf.test.gpu_device_name(): print('Default GPU Device: {}'.format(tf.test.gpu_device_name())) else: print("Please install GPU version of TF") DATA_DIR = 'data/caps_and_shoes_squared/' IMAGE_SIZE = (28,28) FEATURE_SIZE = IMAGE_SIZE[0]*IMAGE_SIZE[1] def convert_img_to_data(image): data = np.asarray(image) gs_image = image.convert(mode='L') gs_data = np.asarray(gs_image) gs_image.thumbnail(IMAGE_SIZE, Image.ANTIALIAS) gs_resized = gs_image.resize(IMAGE_SIZE,Image.ANTIALIAS) gs_resized_data = np.asarray(gs_resized) reshaped_gs_data = gs_resized_data.reshape(IMAGE_SIZE[0]*IMAGE_SIZE[1]) return reshaped_gs_data def convert_images_from_dir(dir_path): image_data = [] for filename in listdir(dir_path): image = Image.open(dir_path +os.sep + filename) reshaped_gs_data = convert_img_to_data(image) image_data.append(reshaped_gs_data) return image_data def load_from_dir(dir_path, labels): label_data = [] image_data = [] for label in labels: data_from_dir = convert_images_from_dir(dir_path + label) labels_for_data = [label for i in range(len(data_from_dir))] image_data += data_from_dir label_data += labels_for_data print('Found %d images belonging to %d classes' % (len(image_data), len(labels))) return 
(np.array(image_data),np.array(label_data)) def load_img_data(data_dir): train_dir = DATA_DIR + 'train/' validation_dir = DATA_DIR + 'val/' test_dir = DATA_DIR + 'test/' if (os.path.isdir(train_dir) and os.path.isdir(validation_dir) and os.path.isdir(test_dir)) : labels = [subdirname.name for subdirname in os.scandir(train_dir) if subdirname.is_dir()] train_data = load_from_dir(train_dir,labels) validation_data = load_from_dir(validation_dir,labels) test_data = load_from_dir(test_dir,labels) return train_data, validation_data, test_data train_data, validation_data, test_data = load_img_data(DATA_DIR) X_train, y_train = train_data X_val, y_val = validation_data X_test, y_test = test_data X_train = X_train.astype('float32') / 255 X_val = X_val.astype('float32') / 255 X_test = X_test.astype('float32') / 255 le = preprocessing.LabelEncoder() le.fit(y_train) y_train = le.transform(y_train) y_val = le.transform(y_val) y_test = le.transform(y_test) y_train = utils.to_categorical(y_train) y_val = utils.to_categorical(y_val) y_test = utils.to_categorical(y_test) def define_multilayer_model_architecture_64_32_16(): model = models.Sequential() model.add(Dense(64, activation='relu', input_shape=(FEATURE_SIZE,))) model.add(Dense(32, activation='relu')) model.add(Dense(16, activation='relu')) model.add(Dense(2, activation='softmax')) model.compile(optimizer='sgd', loss='binary_crossentropy', metrics=['accuracy']) return model model = define_multilayer_model_architecture_64_32_16() %time history = model.fit(X_train, y_train, validation_data = (X_val,y_val), epochs=500, batch_size=32, shuffle=True, verbose = 1) plt.figure(num=None, figsize=(16, 6)) plt.plot(history.history['accuracy'], label='train') plt.plot(history.history['val_accuracy'], label='validation') plt.legend() plt.xlim(0, 500) plt.show() ITER = 10 training_time_list = [] test_accuracy_list = [] for iter_count in range(ITER): model = define_multilayer_model_architecture_64_32_16() start_time = time.time() 
model.fit(X_train, y_train, validation_data = (X_val,y_val), epochs=250, batch_size=32, verbose=0, shuffle=True) training_time = time.time() - start_time training_time_list.append(training_time) test_loss, test_accuracy = model.evaluate(X_test, y_test, batch_size=32, verbose=0) test_accuracy_list.append(test_accuracy) print('Accuracies over 10 runs : %s' % test_accuracy_list) print('Avg training time : %.3f s' % np.mean(training_time_list)) print('Avg test accuracy : %.4f +- %.2f' % (np.mean(test_accuracy_list), np.std(test_accuracy_list))) print('Total parameters : %d' % model.count_params()) def define_multilayer_model_architecture_32_8(): model = models.Sequential() model.add(Dense(32, activation='relu', input_shape=(FEATURE_SIZE,))) model.add(Dense(8, activation='relu')) model.add(Dense(2, activation='softmax')) model.compile(optimizer='sgd', loss='binary_crossentropy', metrics=['accuracy']) return model model = define_multilayer_model_architecture_32_8() %time history = model.fit(X_train, y_train, validation_data = (X_val,y_val), epochs=500, batch_size=32, shuffle=True, verbose = 1) plt.figure(num=None, figsize=(16, 6)) plt.plot(history.history['accuracy'], label='train') plt.plot(history.history['val_accuracy'], label='validation') plt.legend() plt.xlim(0, 500) plt.show() ITER = 10 training_time_list = [] test_accuracy_list = [] for iter_count in range(ITER): model = define_multilayer_model_architecture_32_8() start_time = time.time() model.fit(X_train, y_train, validation_data = (X_val,y_val), epochs=250, batch_size=32, verbose=0, shuffle=True) training_time = time.time() - start_time training_time_list.append(training_time) test_loss, test_accuracy = model.evaluate(X_test, y_test, batch_size=32, verbose=0) test_accuracy_list.append(test_accuracy) print('Accuracies over 10 runs : %s' % test_accuracy_list) print('Avg training time : %.3f s' % np.mean(training_time_list)) print('Avg test accuracy : %.4f +- %.2f' % (np.mean(test_accuracy_list), 
np.std(test_accuracy_list))) print('Total parameters : %d' % model.count_params()) model = define_multilayer_model_architecture_64_32_16() %time history = model.fit(X_train, y_train, validation_split = 0.2, epochs=225, batch_size=32, shuffle=True, verbose = 0) plt.figure(num=None, figsize=(16, 6)) plt.plot(history.history['accuracy'], label='train') plt.plot(history.history['val_accuracy'], label='validation') plt.legend() plt.xlim(0, 500) plt.show() model.fit(X_train, y_train, validation_data = (X_val,y_val), epochs=50, batch_size=32, shuffle=True, verbose = 2) test_loss, test_accuracy = model.evaluate(X_test, y_test, batch_size=32) print('Test loss: %.4f accuracy: %.4f' % (test_loss, test_accuracy)) ITER = 10 training_time_list = [] test_accuracy_list = [] for iter_count in range(ITER): model = define_multilayer_model_architecture_64_32_16() start_time = time.time() model.fit(X_train, y_train, validation_split = 0.2, epochs=200, batch_size=32, shuffle=True, verbose = 0) model.fit(X_train, y_train, validation_data = (X_val,y_val), epochs=100, batch_size=32, verbose=0, shuffle=True) training_time = time.time() - start_time training_time_list.append(training_time) test_loss, test_accuracy = model.evaluate(X_test, y_test, batch_size=32, verbose=0) test_accuracy_list.append(test_accuracy) print('iter # %d : %.3f'%(iter_count+1,test_accuracy)) print('Accuracies over 10 runs : %s' % test_accuracy_list) print('Avg training time : %.3f s' % np.mean(training_time_list)) print('Avg test accuracy : %.4f +- %.2f' % (np.mean(test_accuracy_list), np.std(test_accuracy_list))) print('Total parameters : %d' % model.count_params()) ```
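The label preprocessing used above (`LabelEncoder` followed by `to_categorical`) boils down to mapping class names to integer codes and one-hot encoding them, while the pixel scaling just divides by 255. A framework-free sketch with numpy — the toy labels are illustrative, not the actual dataset classes:

```python
import numpy as np

labels = np.array(["cap", "shoe", "shoe", "cap"])

# LabelEncoder: sorted unique classes -> integer code per sample
classes, codes = np.unique(labels, return_inverse=True)

# to_categorical: one-hot matrix with one column per class
one_hot = np.eye(len(classes))[codes]

# Pixel scaling, as applied to X_train / X_val / X_test above
pixels = np.array([0, 128, 255], dtype="float32") / 255

print(classes)       # -> ['cap' 'shoe']
print(one_hot[0])    # -> [1. 0.]
print(pixels.max())  # -> 1.0
```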
# Introduction to the jupyter ecosystem & notebooks ## Before we get started ... <br> - most of what you’ll see within this lecture was prepared by Ross Markello, Michael Notter and Peer Herholz and further adapted for this course by Peer Herholz - based on Tal Yarkoni's ["Introduction to Python" lecture at Neurohackademy 2019](https://neurohackademy.org/course/introduction-to-python-2/) - based on [IPython notebooks from J. R. Johansson](http://github.com/jrjohansson/scientific-python-lectures) - based on http://www.stavros.io/tutorials/python/ & http://www.swaroopch.com/notes/python - based on https://github.com/oesteban/biss2016 & https://github.com/jvns/pandas-cookbook ## Objectives 📍 * learn basic and efficient usage of the `jupyter ecosystem` & `notebooks` * what is `Jupyter` & how to utilize `jupyter notebooks` ## To Jupyter & beyond <img align="center" src="https://raw.githubusercontent.com/PeerHerholz/ML-DL_workshop_SynAGE/master/lecture/static/jupyter_ecosystem.png" alt="logo" title="jupyter" width="500" height="200" /> - a community of people - an ecosystem of open tools and standards for interactive computing - language-agnostic and modular - empower people to use other open tools ## To Jupyter & beyond <img align="center" src="https://raw.githubusercontent.com/PeerHerholz/ML-DL_workshop_SynAGE/master/lecture/static/jupyter_example.png" alt="logo" title="jupyter" width="900" height="400" /> ## Before we get started 2... We're going to be working in [Jupyter notebooks]() for most of this presentation! To load yours, do the following: 1. Open a terminal/shell & navigate to the folder where you stored the course material (`cd`) 2. Type `jupyter notebook` 3. If you're not automatically directed to a webpage copy the URL (`https://....`) printed in the `terminal` and paste it in your `browser` ## Files Tab The `files tab` provides an interactive view of the portion of the `filesystem` which is accessible by the `user`. 
This is typically rooted by the directory in which the notebook server was started. The top of the `files list` displays `clickable` breadcrumbs of the `current directory`. It is possible to navigate the `filesystem` by clicking on these `breadcrumbs` or on the `directories` displayed in the `notebook list`. A new `notebook` can be created by clicking on the `New dropdown button` at the top of the list, and selecting the desired `language kernel`. `Notebooks` can also be `uploaded` to the `current directory` by dragging a `notebook` file onto the list or by clicking the `Upload button` at the top of the list. ### The Notebook When a `notebook` is opened, a new `browser tab` will be created which presents the `notebook user interface (UI)`. This `UI` allows for `interactively editing` and `running` the `notebook document`. A new `notebook` can be created from the `dashboard` by clicking on the `Files tab`, followed by the `New dropdown button`, and then selecting the `language` of choice for the `notebook`. An `interactive tour` of the `notebook UI` can be started by selecting `Help` -> `User Interface Tour` from the `notebook menu bar`. ### Header At the top of the `notebook document` is a `header` which contains the `notebook title`, a `menubar`, and `toolbar`. This `header` remains `fixed` at the top of the screen, even as the `body` of the `notebook` is `scrolled`. The `title` can be edited `in-place` (which renames the `notebook file`), and the `menubar` and `toolbar` contain a variety of actions which control `notebook navigation` and `document structure`. <img align="center" src="https://raw.githubusercontent.com/PeerHerholz/ML-DL_workshop_SynAGE/master/lecture/static/notebook_header_4_0.png" alt="logo" title="jupyter" width="600" height="100" /> ### Body The `body` of a `notebook` is composed of `cells`. Each `cell` contains either `markdown`, `code input`, `code output`, or `raw text`. 
`Cells` can be included in any order and edited at-will, allowing for a large amount of flexibility for constructing a narrative. - `Markdown cells` - These are used to build a `nicely formatted narrative` around the `code` in the document. The majority of this lesson is composed of `markdown cells`. - to get a `markdown cell` you can either select the `cell` and use `esc` + `m` or via `Cell -> cell type -> markdown` <img align="center" src="https://raw.githubusercontent.com/PeerHerholz/ML-DL_workshop_SynAGE/master/lecture/static/notebook_body_4_0.png" alt="logo" title="jupyter" width="700" height="200" /> - `Code cells` - These are used to define the `computational code` in the `document`. They come in `two forms`: - the `input cell` where the `user` types the `code` to be `executed`, - and the `output cell` which is the `representation` of the `executed code`. Depending on the `code`, this `representation` may be a `simple scalar value`, or something more complex like a `plot` or an `interactive widget`. - to get a `code cell` you can either select the `cell` and use `esc` + `y` or via `Cell -> cell type -> code` <img align="center" src="https://raw.githubusercontent.com/PeerHerholz/ML-DL_workshop_SynAGE/master/lecture/static/notebook_body_4_0.png" alt="logo" title="jupyter" width="700" height="200" /> - `Raw cells` - These are used when `text` needs to be included in `raw form`, without `execution` or `transformation`. <img align="center" src="https://raw.githubusercontent.com/PeerHerholz/ML-DL_workshop_SynAGE/master/lecture/static/notebook_body_4_0.png" alt="logo" title="jupyter" width="700" height="200" /> ### Modality The `notebook user interface` is `modal`. This means that the `keyboard` behaves `differently` depending upon the `current mode` of the `notebook`. A `notebook` has `two modes`: `edit` and `command`. `Edit mode` is indicated by a `green cell border` and a `prompt` showing in the `editor area`. 
When a `cell` is in `edit mode`, you can type into the `cell`, like a `normal text editor`. <img align="center" src="https://raw.githubusercontent.com/PeerHerholz/ML-DL_workshop_SynAGE/master/lecture/static/edit_mode.png" alt="logo" title="jupyter" width="700" height="100" /> `Command mode` is indicated by a `grey cell border`. When in `command mode`, the structure of the `notebook` can be modified as a whole, but the `text` in `individual cells` cannot be changed. Most importantly, the `keyboard` is `mapped` to a set of `shortcuts` for efficiently performing `notebook and cell actions`. For example, pressing `c` when in `command` mode, will `copy` the `current cell`; no modifier is needed. <img align="center" src="https://raw.githubusercontent.com/PeerHerholz/ML-DL_workshop_SynAGE/master/lecture/static/command_mode.png" alt="logo" title="jupyter" width="700" height="100" /> ### Mouse navigation The `first concept` to understand in `mouse-based navigation` is that `cells` can be `selected by clicking on them`. The `currently selected cell` is indicated with a `grey` or `green border depending` on whether the `notebook` is in `edit or command mode`. Clicking inside a `cell`'s `editor area` will enter `edit mode`. Clicking on the `prompt` or the `output area` of a `cell` will enter `command mode`. The `second concept` to understand in `mouse-based navigation` is that `cell actions` usually apply to the `currently selected cell`. For example, to `run` the `code in a cell`, select it and then click the `Run button` in the `toolbar` or the `Cell` -> `Run` menu item. Similarly, to `copy` a `cell`, select it and then click the `copy selected cells button` in the `toolbar` or the `Edit` -> `Copy` menu item. With this simple pattern, it should be possible to perform nearly every `action` with the `mouse`. `Markdown cells` have one other `state` which can be `modified` with the `mouse`. These `cells` can either be `rendered` or `unrendered`. 
When they are `rendered`, a nice `formatted representation` of the `cell`'s `contents` will be presented. When they are `unrendered`, the `raw text source` of the `cell` will be presented. To `render` the `selected cell` with the `mouse`, click the `button` in the `toolbar` or the `Cell` -> `Run` menu item. To `unrender` the `selected cell`, `double click` on the `cell`. ### Keyboard Navigation The `modal user interface` of the `IPython Notebook` has been optimized for efficient `keyboard` usage. This is made possible by having `two different sets` of `keyboard shortcuts`: one set that is `active in edit mode` and another in `command mode`. The most important `keyboard shortcuts` are `Enter`, which enters `edit mode`, and `Esc`, which enters `command mode`. In `edit mode`, most of the `keyboard` is dedicated to `typing` into the `cell's editor`. Thus, in `edit mode` there are relatively `few shortcuts`. In `command mode`, the entire `keyboard` is available for `shortcuts`, so there are many more possibilities. The following images give an overview of the available `keyboard shortcuts`. These can viewed in the `notebook` at any time via the `Help` -> `Keyboard Shortcuts` menu item. <img align="center" src="https://raw.githubusercontent.com/PeerHerholz/ML-DL_workshop_SynAGE/master/lecture/static/notebook_shortcuts_4_0.png" alt="logo" title="jupyter" width="500" height="500" /> The following shortcuts have been found to be the most useful in day-to-day tasks: - Basic navigation: `enter`, `shift-enter`, `up/k`, `down/j` - Saving the `notebook`: `s` - `Cell types`: `y`, `m`, `1-6`, `r` - `Cell creation`: `a`, `b` - `Cell editing`: `x`, `c`, `v`, `d`, `z`, `ctrl+shift+-` - `Kernel operations`: `i`, `.` ### Markdown Cells `Text` can be added to `IPython Notebooks` using `Markdown cells`. `Markdown` is a popular `markup language` that is a `superset of HTML`. 
Its specification can be found here: http://daringfireball.net/projects/markdown/ You can view the `source` of a `cell` by `double clicking` on it, or while the `cell` is selected in `command mode`, press `Enter` to edit it. Once a `cell` has been `edited`, use `Shift-Enter` to `re-render` it. ### Markdown basics You can make text _italic_ or **bold**. You can build nested itemized or enumerated lists: * One - Sublist - This - Sublist - That - The other thing * Two - Sublist * Three - Sublist Now another list: 1. Here we go 1. Sublist 2. Sublist 2. There we go 3. Now this You can add horizontal rules: --- Here is a blockquote: > Beautiful is better than ugly. > Explicit is better than implicit. > Simple is better than complex. > Complex is better than complicated. > Flat is better than nested. > Sparse is better than dense. > Readability counts. > Special cases aren't special enough to break the rules. > Although practicality beats purity. > Errors should never pass silently. > Unless explicitly silenced. > In the face of ambiguity, refuse the temptation to guess. > There should be one-- and preferably only one --obvious way to do it. > Although that way may not be obvious at first unless you're Dutch. > Now is better than never. > Although never is often better than *right* now. > If the implementation is hard to explain, it's a bad idea. > If the implementation is easy to explain, it may be a good idea. > Namespaces are one honking great idea -- let's do more of those! 
You can add headings using Markdown's syntax: <pre> # Heading 1 # Heading 2 ## Heading 2.1 ## Heading 2.2 </pre> ### Embedded code You can embed code meant for illustration instead of execution in Python: def f(x): """a docstring""" return x**2 or other languages: for (i=0; i<n; i++) { printf("hello %d\n", i); x += 4; } ### Github flavored markdown (GFM) The `Notebook webapp` supports `Github flavored markdown` meaning that you can use `triple backticks` for `code blocks` <pre> ```python print "Hello World" ``` ```javascript console.log("Hello World") ``` </pre> Gives ```python print "Hello World" ``` ```javascript console.log("Hello World") ``` And a table like this : <pre> | This | is | |------|------| | a | table| </pre> A nice HTML Table | This | is | |------|------| | a | table| ### General HTML Because `Markdown` is a `superset of HTML` you can even add things like `HTML tables`: <table> <tr> <th>Header 1</th> <th>Header 2</th> </tr> <tr> <td>row 1, cell 1</td> <td>row 1, cell 2</td> </tr> <tr> <td>row 2, cell 1</td> <td>row 2, cell 2</td> </tr> </table> ### Local files If you have `local files` in your `Notebook directory`, you can refer to these `files` in `Markdown cells` directly: [subdirectory/]<filename> These do not `embed` the data into the `notebook file`, and require that the `files` exist when you are viewing the `notebook`. ### Security of local files Note that this means that the `IPython notebook server` also acts as a `generic file server` for `files` inside the same `tree` as your `notebooks`. Access is not granted outside the `notebook` folder so you have strict control over what `files` are `visible`, but for this reason **it is highly recommended that you do not run the notebook server with a notebook directory at a high level in your filesystem (e.g. your home directory)**. When you run the `notebook` in a `password-protected` manner, `local file` access is `restricted` to `authenticated users` unless `read-only views` are active.
### Markdown attachments Since `Jupyter notebook version 5.0`, in addition to `referencing external files` you can `attach a file` to a `markdown cell`. To do so `drag` the `file` from e.g. the `browser` or local `storage` into a `markdown cell` while `editing` it. `Files` are stored in `cell metadata` and will be `automatically scrubbed` at `save-time` if not `referenced`. You can recognize `attached images` from other `files` by their `url` that starts with `attachment`. Keep in mind that `attached files` will `increase the size` of your `notebook`. You can manually edit the `attachment` by using the `View` > `Cell Toolbar` > `Attachment` menu, but you should not need to. ### Code cells When executing code in `IPython`, all valid `Python syntax` works as-is, but `IPython` provides a number of `features` designed to make the `interactive experience` more `fluid` and `efficient`. First, we need to explain how to run `cells`. Try to run the `cell` below! ``` import pandas as pd print("Hi! This is a cell. Click on it and press the ▶ button above to run it") ``` You can also run a cell with `Ctrl+Enter` or `Shift+Enter`. Experiment a bit with that. ### Tab Completion One of the most useful things about `Jupyter Notebook` is its tab completion. Try this: click just after `read_csv`( in the cell below and press `Shift+Tab` 4 times, slowly. Note that if you're using `JupyterLab` you don't have an additional help box option.
``` pd.read_csv( ``` After the first time, you should see this: <img align="center" src="https://raw.githubusercontent.com/PeerHerholz/ML-DL_workshop_SynAGE/master/lecture/static/jupyter_tab-once.png" alt="logo" title="jupyter" width="700" height="200" /> After the second time: <img align="center" src="https://raw.githubusercontent.com/PeerHerholz/ML-DL_workshop_SynAGE/master/lecture/static/jupyter_tab-twice.png" alt="logo" title="jupyter" width="500" height="200" /> After the fourth time, a big help box should pop up at the bottom of the screen, with the full documentation for the `read_csv` function: <img align="center" src="https://raw.githubusercontent.com/PeerHerholz/ML-DL_workshop_SynAGE/master/lecture/static/jupyter_tab-4-times.png" alt="logo" title="jupyter" width="700" height="300" /> This is amazingly useful. You can think of this as "the more confused I am, the more times I should press `Shift+Tab`". Okay, let's try `tab completion` for `function names`! ``` pd.r ``` You should see this: <img align="center" src="https://raw.githubusercontent.com/PeerHerholz/ML-DL_workshop_SynAGE/master/lecture/static/jupyter_function-completion.png" alt="logo" title="jupyter" width="300" height="200" /> ## Get Help There's another way to reach the help box shown above after the fourth `Shift+Tab` press. Instead, you can also use `obj`? or `obj`?? to get help or more help for an object. ``` pd.read_csv? ``` ## Writing code Writing code in a `notebook` is pretty normal. ``` def print_10_nums(): for i in range(10): print(i) print_10_nums() ``` If you messed something up and want to revert to an older version of the code in a cell, use `Ctrl+Z` to undo and `Ctrl+Y` to redo. For a full list of all keyboard shortcuts, click on the small `keyboard icon` in the `notebook header` or click on `Help` > `Keyboard Shortcuts`.
### The interactive workflow: input, output, history `Notebooks` provide various options for `inputs` and `outputs`, while also allowing access to the `history` of `run commands`. ``` 2+10 _+10 ``` You can suppress the `storage` and `rendering` of `output` if you append `;` to the last line of a `cell` (this comes in handy when plotting with `matplotlib`, for example): ``` 10+20; _ ``` The `output` is stored in `_N` and `Out[N]` variables: ``` _8 == Out[8] ``` Previous inputs are available, too: ``` In[9] _i %history -n 1-5 ``` ### Accessing the underlying operating system Through `notebooks` you can also access the underlying `operating system` and `communicate` with it as you would do in e.g. a `terminal` via `bash`: ``` !pwd files = !ls print("My current directory's files:") print(files) !echo $files !echo {files[0].upper()} ``` ### Magic functions `IPython` has all kinds of `magic functions`. `Magic functions` are prefixed by `%` or `%%`, and typically take their `arguments` without `parentheses`, `quotes` or even `commas` for convenience. `Line magics` take a single `%` and `cell magics` are prefixed with two `%%`. Some useful magic functions are: Magic Name | Effect ---------- | ------------------------------------------------------------- %env | Get, set, or list environment variables %pdb | Control the automatic calling of the pdb interactive debugger %pylab | Load numpy and matplotlib to work interactively %%debug | Activates debugging mode in cell %%html | Render the cell as a block of HTML %%latex | Render the cell as a block of latex %%sh | Run the cell as a shell script %%time | Time execution of a Python statement or expression You can run `%magic` to get a list of `magic functions` or `%quickref` for a reference sheet.
```
%magic
```

`Line` vs `cell magics`:

```
%timeit list(range(1000))
```

```
%%timeit
list(range(10))
list(range(100))
```

`Line magics` can be used even inside `code blocks`:

```
for i in range(1, 5):
    size = i*100
    print('size:', size, end=' ')
    %timeit list(range(size))
```

`Magics` can do anything they want with their input, so it doesn't have to be valid `Python`:

```
%%bash
echo "My shell is:" $SHELL
echo "My disk usage is:"
df -h
```

Another interesting `cell magic`: create any `file` you want `locally` from the `notebook`:

```
%%writefile test.txt
This is a test file!
It can contain anything I want...

And more...
```

```
!cat test.txt
```

Let's see what other `magics` are currently defined in the `system`:

```
%lsmagic
```

## Writing latex

Let's use `%%latex` to render a block of `latex`:

```
%%latex
$$F(k) = \int_{-\infty}^{\infty} f(x) e^{2\pi i k x} \mathrm{d} x$$
```

### Running normal Python code: execution and errors

Not only can you input normal `Python code`, you can even paste straight from a `Python` or `IPython shell session`:

```
>>> # Fibonacci series:
... # the sum of two elements defines the next
... a, b = 0, 1
>>> while b < 10:
...     print(b)
...     a, b = b, a+b
```

```
In [1]: for i in range(10):
   ...:     print(i, end=' ')
   ...:
```

And when your code produces errors, you can control how they are displayed with the `%xmode` magic:

```
%%writefile mod.py

def f(x):
    return 1.0/(x-1)

def g(y):
    return f(y+1)
```

Now let's call the function `g` with an argument that would produce an error:

```
import mod
mod.g(0)
```

```
%xmode plain
mod.g(0)
```

```
%xmode verbose
mod.g(0)
```

The default `%xmode` is "context", which shows additional context but not all local variables. Let's restore that one for the rest of our session.

```
%xmode context
```

## Running code in other languages with special `%%` magics

```
%%perl
@months = ("July", "August", "September");
print $months[0];

%%ruby
name = "world"
puts "Hello #{name.capitalize}!"
```

### Raw Input in the notebook

Since `1.0` the `IPython notebook web application` supports `raw_input`, which for example allows us to invoke the `%debug` `magic` in the `notebook`:

```
mod.g(0)
```

```
%debug
```

Don't forget to exit your `debugging session`. `Raw input` can of course be used to ask for `user input`:

```
enjoy = input('Are you enjoying this tutorial? ')
print('enjoy is:', enjoy)
```

### Plotting in the notebook

`Notebooks` support a variety of fantastic `plotting options`, including `static` and `interactive` graphics. This `magic` configures `matplotlib` to `render` its `figures` `inline`:

```
%matplotlib inline

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 2*np.pi, 300)
y = np.sin(x**2)
plt.plot(x, y)
plt.title("A little chirp")
fig = plt.gcf()  # let's keep the figure object around for later...
```

```
import plotly.figure_factory as ff

# Add histogram data
x1 = np.random.randn(200) - 2
x2 = np.random.randn(200)
x3 = np.random.randn(200) + 2
x4 = np.random.randn(200) + 4

# Group data together
hist_data = [x1, x2, x3, x4]
group_labels = ['Group 1', 'Group 2', 'Group 3', 'Group 4']

# Create distplot with custom bin_size
fig = ff.create_distplot(hist_data, group_labels, bin_size=.2)
fig.show()
```

## The IPython kernel/client model

```
%connect_info
```

We can automatically connect a Qt Console to the currently running kernel with the `%qtconsole` magic, or by typing `ipython console --existing <kernel-UUID>` in any terminal:

```
%qtconsole
```

## Saving a Notebook

`Jupyter Notebooks` `autosave`, so you don't have to worry about losing code too much. At the top of the page you can usually see the current save status:

- `Last Checkpoint: 2 minutes ago (unsaved changes)`
- `Last Checkpoint: a few seconds ago (autosaved)`

If you want to save a notebook on purpose, either click on `File` > `Save and Checkpoint` or press `Ctrl+S`.
## To Jupyter & beyond

<img align="center" src="https://raw.githubusercontent.com/PeerHerholz/ML-DL_workshop_SynAGE/master/lecture/static/jupyter_example.png" alt="logo" title="jupyter" width="800" height="400" />

1. Open a terminal
2. Type `jupyter lab`
3. If you're not automatically directed to a webpage, copy the URL printed in the terminal and paste it into your browser
4. Click "New" in the top-right corner and select "Python 3"
5. You have a `Jupyter notebook` within `Jupyter lab`!
## Use bar charts and heatmaps to visualize patterns in your data

IGN Game Reviews provide scores from experts for the most recent game releases, ranging from 0 (Disaster) to 10 (Masterpiece).

<img src="https://i.imgur.com/Oh06Fu1.png">

## Load the data

1. Read the IGN data file into a dataframe named `ign_scores`.
2. Use the `"Platform"` column to label the rows.

```
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

IGN = "https://raw.githubusercontent.com/csbfx/advpy122-data/master/ign_scores.csv"

## Your code here . . .
ign_scores = pd.read_csv(IGN)
ign_scores = ign_scores.set_index('Platform')
ign_scores
```

## Problem 1

Use the dataframe `ign_scores` to determine the highest score received by PC games, across all genres.

```
pc_games = ign_scores.loc['PC'].max()
pc_games
```

## Problem 2

Use the dataframe `ign_scores` to determine which genre has the lowest score for the `PlayStation Vita` platform.

```
psv_games = ign_scores.loc['PlayStation Vita'].idxmin()
psv_games
```

## Problem 3

Your instructor's favorite video game has been Mario Kart Wii, a racing game released for the Wii platform in 2008. And, IGN agrees with her that it is a great game -- their rating for this game is a whopping 8.9! Inspired by the success of this game, your instructor is considering creating your very own racing game for the Wii platform. Perform the following analyses to help her determine which platform she should focus on.

1. Create a bar chart that shows the score for *Racing* games, for each platform. Your chart should have one bar for each platform. Provide a meaningful title to the plot.
2. Based on the bar chart, do you expect a racing game for the **Wii** platform to receive a high rating? If not, use pandas to find out from the dataframe `ign_scores` which gaming platform is the best for racing games.

```
## Use ign_scores to determine which gaming platform is the best
## for racing games.
## Your code here . . .
ign_scores.plot.bar(y='Racing', title="Bar chart for ratings of racing games for different platforms")
```

As shown in the bar plot, the Wii has the lowest rating for the *Racing* genre. Xbox One has the highest rating for racing games and hence would be the best platform.

## Problem 4

Since your instructor's gaming interests are pretty broad, you can help her use the IGN scores to decide on the choice of genre and platform.

1. Create a heatmap using the IGN scores by genre and platform and include the scores in the cells of the heatmap.
2. Based on the heatmap, which combination of genre and platform receives the highest average ratings? Which combination receives the lowest average ratings? Write the answers in a markdown cell.

```
plt.figure(figsize=(10,7))
sns.heatmap(data=ign_scores, annot=True)
plt.xlabel("Genre");
```

Simulation games on the PlayStation 4 receive the highest average rating at 9.2. The lowest average ratings are scored on the Game Boy Color for fighting and shooter games.

## Problem 5

Use the Pokemon dataset to create a clustermap with color. First, filter the dataframe to only keep data with `Type 1` equal to one of the following values: `Water`, `Normal`, `Grass`, `Bug` and `Psychic`. Annotate the dendrogram using different colors for these five different `Type 1` values. Use `Name` as the index.
pokemon_data is in https://raw.githubusercontent.com/csbfx/advpy122-data/master/Pokemon.csv

```
pokemon_data = pd.read_csv("https://raw.githubusercontent.com/csbfx/advpy122-data/master/Pokemon.csv")
types = ['Water', 'Normal', 'Grass', 'Bug', 'Psychic']
pokemon_data = pokemon_data[pokemon_data['Type 1'].isin(types)]
pokemon_data['Legendary'] = pokemon_data['Legendary'].astype('int')
g = sns.clustermap(pokemon_data.set_index('Name').drop(columns=['Type 1', 'Type 2', '#', 'Legendary', 'Generation', 'Total']),
                   cmap="BuPu",
                   figsize=(12,8),
                   row_colors=pokemon_data.set_index('Name')['Type 1'].replace(
                       {"Normal": "red",
                        "Psychic": "purple",
                        "Water": "lightblue",
                        "Grass": "green",
                        "Bug": "black"}))
```
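The `isin` filter and the `idxmax`/`idxmin` lookups used in the problems above can be illustrated on a tiny hand-made frame (toy scores, not the real IGN data):

```python
import pandas as pd

# Toy score table in the same shape as ign_scores: platforms as the index, genres as columns.
toy = pd.DataFrame(
    {"Action": [6.8, 7.5, 7.0], "Racing": [7.2, 6.1, 7.9]},
    index=pd.Index(["PC", "Wii", "Xbox One"], name="Platform"),
)

best_action = toy["Action"].idxmax()  # index label (platform) with the highest Action score
worst_wii = toy.loc["Wii"].idxmin()   # column label (genre) with the lowest score on the Wii
print(best_action, worst_wii)
```

Note that `idxmax`/`idxmin` return the *label* of the extreme value, while `max`/`min` return the value itself — exactly the difference between Problems 1 and 2.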
<a href="https://colab.research.google.com/github/harmishpatel21/codesignal-IV-solutions/blob/main/code_signal_IV_solutions.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

**Code Signal Solution for Interview Challenges**

**Problem Statement 1:**

'''Given an array a that contains only numbers in the range from 1 to a.length, find the first duplicate number for which the second occurrence has the minimal index. In other words, if there are more than 1 duplicated numbers, return the number for which the second occurrence has a smaller index than the second occurrence of the other number does. If there are no such elements, return -1.

Example

For a = [2, 1, 3, 5, 3, 2], the output should be firstDuplicate(a) = 3.

There are 2 duplicates: numbers 2 and 3. The second occurrence of 3 has a smaller index than the second occurrence of 2 does, so the answer is 3.

For a = [2, 2], the output should be firstDuplicate(a) = 2;

For a = [2, 4, 3, 5, 1], the output should be firstDuplicate(a) = -1.

Input/Output

[execution time limit] 4 seconds (py3)

[input] array.integer a

Guaranteed constraints: 1 ≤ a.length ≤ 10^5, 1 ≤ a[i] ≤ a.length.

[output] integer

The element in a that occurs in the array more than once and has the minimal index for its second occurrence. If there are no such elements, return -1.'''

```
def firstDuplicate(a):
    seen = set()
    for i in a:
        if i in seen:
            return i
        seen.add(i)
    return -1

a = [2, 1, 3, 5, 3, 2]
print(firstDuplicate(a))
```

**Problem Statement 2:**

Given a string s consisting of small English letters, find and return the first instance of a non-repeating character in it. If there is no such character, return '_'.

Example

For s = "abacabad", the output should be firstNotRepeatingCharacter(s) = 'c'.

There are 2 non-repeating characters in the string: 'c' and 'd'. Return c since it appears in the string first.
For s = "abacabaabacaba", the output should be firstNotRepeatingCharacter(s) = '_'.

There are no characters in this string that do not repeat.

Input/Output

[execution time limit] 4 seconds (py3)

[input] string s

A string that contains only lowercase English letters.

Guaranteed constraints: 1 ≤ s.length ≤ 10^5.

[output] char

The first non-repeating character in s, or '_' if there are no characters that do not repeat.

```
def firstNotRepeatingCharacter(s):
    # A character is non-repeating when its first and last occurrence coincide.
    for i in s:
        if s.index(i) == s.rindex(i):
            return i
    return '_'

s = "abacabad"
# s = "abacabaabacaba"
print(firstNotRepeatingCharacter(s))
```

**Problem Statement 3:**

Note: Try to solve this task in-place (with O(1) additional memory), since this is what you'll be asked to do during an interview.

You are given an n x n 2D matrix that represents an image. Rotate the image by 90 degrees (clockwise).

Example

For

a = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]

the output should be

rotateImage(a) = [[7, 4, 1],
                  [8, 5, 2],
                  [9, 6, 3]]

Input/Output

[execution time limit] 4 seconds (py3)

[input] array.array.integer a

Guaranteed constraints: 1 ≤ a.length ≤ 100, a[i].length = a.length, 1 ≤ a[i][j] ≤ 10^4.

[output] array.array.integer

```
def rotateImage(a):
    # Reverse the rows, then transpose; convert the tuples from zip back to lists.
    return [list(row) for row in zip(*a[::-1])]

a = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]
rotateImage(a)
```
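The `index`/`rindex` solution above rescans the string for every character, which is O(n²) in the worst case. A single-pass alternative with `collections.Counter` keeps it O(n) — an alternative sketch, not the submitted solution:

```python
from collections import Counter

def first_not_repeating(s):
    # Count every character once, then return the first one whose count is 1.
    counts = Counter(s)
    for ch in s:
        if counts[ch] == 1:
            return ch
    return '_'

print(first_not_repeating("abacabad"))        # expected 'c'
print(first_not_repeating("abacabaabacaba"))  # expected '_'
```

Within the guaranteed constraints (length up to 10^5) both versions fit the 4-second limit, but the counting version scales linearly.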
Fashion MNIST dataset

```
#!pip install --upgrade tensorflow
from __future__ import absolute_import, division, print_function, unicode_literals

# TensorFlow and tf.keras
import tensorflow as tf
from tensorflow import keras

# Helper libraries
import numpy as np
import matplotlib.pyplot as plt

print(tf.__version__)
```

# Import the Fashion MNIST dataset

This notebook uses the [Fashion MNIST](https://github.com/zalandoresearch/fashion-mnist) dataset, which contains 70,000 grayscale images in 10 categories. The images show individual articles of clothing at low resolution (28 by 28 pixels), as seen here:

<table>
  <tr><td>
    <img src="https://tensorflow.org/images/fashion-mnist-sprite.png" alt="Fashion MNIST sprite" width="600">
  </td></tr>
  <tr><td align="center">
    <b>Figure 1.</b> <a href="https://github.com/zalandoresearch/fashion-mnist">Fashion-MNIST samples</a> (by Zalando, MIT License).<br/>&nbsp;
  </td></tr>
</table>

Fashion MNIST is intended as a drop-in replacement for the classic [MNIST](http://yann.lecun.com/exdb/mnist/) dataset—often used as the "Hello, World" of machine learning programs for computer vision. The MNIST dataset contains images of handwritten digits (0, 1, 2, etc.) in a format identical to that of the articles of clothing you'll use here.

This guide uses Fashion MNIST for variety, and because it's a slightly more challenging problem than regular MNIST. Both datasets are relatively small and are used to verify that an algorithm works as expected. They're good starting points to test and debug code.

Here, 60,000 images are used to train the network and 10,000 images to evaluate how accurately the network learned to classify images. You can access Fashion MNIST directly from TensorFlow.
Import and load the Fashion MNIST data directly from TensorFlow:

```
fashion_mnist = keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
```

Loading the dataset returns four NumPy arrays:

* The `train_images` and `train_labels` arrays are the *training set*—the data the model uses to learn.
* The model is tested against the *test set*, the `test_images`, and `test_labels` arrays.

The images are 28x28 NumPy arrays, with pixel values ranging from 0 to 255. The *labels* are an array of integers, ranging from 0 to 9. These correspond to the *class* of clothing the image represents:

<table>
  <tr>
    <th>Label</th>
    <th>Class</th>
  </tr>
  <tr> <td>0</td> <td>T-shirt/top</td> </tr>
  <tr> <td>1</td> <td>Trouser</td> </tr>
  <tr> <td>2</td> <td>Pullover</td> </tr>
  <tr> <td>3</td> <td>Dress</td> </tr>
  <tr> <td>4</td> <td>Coat</td> </tr>
  <tr> <td>5</td> <td>Sandal</td> </tr>
  <tr> <td>6</td> <td>Shirt</td> </tr>
  <tr> <td>7</td> <td>Sneaker</td> </tr>
  <tr> <td>8</td> <td>Bag</td> </tr>
  <tr> <td>9</td> <td>Ankle boot</td> </tr>
</table>

Each image is mapped to a single label. Since the *class names* are not included with the dataset, store them here to use later when plotting the images:

```
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
               'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
```

## Explore the data

Let's explore the format of the dataset before training the model. The following shows there are 60,000 images in the training set, with each image represented as 28 x 28 pixels:

```
train_images.shape
```

Likewise, there are 60,000 labels in the training set:

```
len(train_labels)
```

Each label is an integer between 0 and 9:

```
train_labels[0:2]
```

There are 10,000 images in the test set. Again, each image is represented as 28 x 28 pixels:

```
test_images.shape
```

And the test set contains 10,000 image labels:

```
len(test_labels)
```
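Because the pixel values run from 0 to 255, a common preprocessing step is to scale them into the 0–1 range before training. Sketched here on a small synthetic batch standing in for `train_images` (dummy data, same shape and dtype conventions):

```python
import numpy as np

# Dummy batch of four 28x28 grayscale "images", mimicking the Fashion MNIST arrays.
images = np.random.randint(0, 256, size=(4, 28, 28), dtype=np.uint8)

# Dividing by 255.0 produces a float array whose values all lie in [0.0, 1.0];
# the shape is unchanged.
scaled = images / 255.0
print(scaled.shape, scaled.dtype)
```

The same one-liner applies to the real arrays (`train_images / 255.0`), and the test set must be scaled identically so the model sees consistent inputs.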
# Part 2: Introduction to Umami and the `Residual` Class

Umami is a package for calculating metrics for use with Earth surface dynamics models. This notebook is the second in a three-part introduction to using umami.

## Scope of this tutorial

Before starting this tutorial, you should have completed [Part 1: Introduction to Umami and the `Metric` Class](IntroductionToMetric.ipynb).

In this tutorial you will learn the basic principles behind using the `Residual` class to compare models and data using terrain statistics.

If you have comments or questions about the notebooks, the best place to get help is through [GitHub Issues](https://github.com/TerrainBento/umami/issues).

To begin this example, we will import the required python packages.

```
import warnings
warnings.filterwarnings('ignore')

from io import StringIO

import numpy as np

from landlab import RasterModelGrid, imshow_grid
from umami import Residual
```

## Step 1: Create grids

Unlike the first notebook, here we need to compare model and data. We will create two grids, the `model_grid` and the `data_grid`, each with a field called `topographic__elevation`. Both are size (10x10). The `data_grid` slopes to the south-west, while the `model_grid` has some additional noise added to it.

First, we construct and plot the `data_grid`.

```
data_grid = RasterModelGrid((10, 10))
data_z = data_grid.add_zeros("node", "topographic__elevation")
data_z += data_grid.x_of_node + data_grid.y_of_node
imshow_grid(data_grid, data_z)
```

Next, we construct and plot `model_grid`. It differs only in that it has random noise added to the core nodes.

```
np.random.seed(42)

model_grid = RasterModelGrid((10, 10))
model_z = model_grid.add_zeros("node", "topographic__elevation")
model_z += model_grid.x_of_node + model_grid.y_of_node
model_z[model_grid.core_nodes] += np.random.randn(model_grid.core_nodes.size)
imshow_grid(model_grid, model_z)
```

We can difference the two grids to see how they differ.
As expected, it looks like normally distributed noise.

```
imshow_grid(model_grid, data_z - model_z, cmap="seismic")
```

This example shows a difference map with 64 residuals on it. A more realistic application with a much larger domain would have tens of thousands. Methods of model analysis such as calibration and sensitivity analysis need model output, such as the topography shown here, to be distilled into a smaller number of values. This is the task that umami facilitates.

## Step 2: Construct an umami `Residual`

Similar to constructing a `Metric`, a residual is specified by a dictionary or YAML-style input file. Here we repeat some of the content of the prior notebook:

Each calculation gets its own unique name (the key in the dictionary), and is associated with a value, a dictionary specifying exactly what should be calculated. The only value of the dictionary required by all umami calculations is `_func`, which indicates which of the [`umami.calculations`](https://umami.readthedocs.io/en/latest/umami.calculations.html) will be performed. Subsequent elements of this dictionary are the required inputs to the calculation function and are described in their documentation.

Note that some calculations listed in the [`umami.calculations`](https://umami.readthedocs.io/en/latest/umami.calculations.html) submodule are valid for both the umami `Metric` and `Residual` classes, while others are for `Residual`s only (the `Metric` class was covered in [Part 1](IntroductionToMetric.ipynb) of this notebook series).

The order that calculations are listed is read in as an [OrderedDict](https://docs.python.org/3/library/collections.html#collections.OrderedDict) and retained as the "calculation order".
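Conceptually, each `aggregate` calculation reduces a whole field to a single number, and a residual compares that number between model and data. A rough NumPy sketch of the two reductions used in this tutorial (illustrative only — this is not umami's implementation, and the model-minus-data sign convention here is an assumption):

```python
import numpy as np

# Toy stand-ins for the model and data topography fields.
model_z = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
data_z = np.array([1.5, 2.5, 2.5, 4.5, 4.0])

# "aggregate" with method="mean": compare the mean elevation of each field.
me = model_z.mean() - data_z.mean()

# "aggregate" with method="percentile", q=10: compare the 10th percentiles.
ep10 = np.percentile(model_z, 10) - np.percentile(data_z, 10)
print(me, ep10)
```

Either way, tens of thousands of node-by-node differences collapse into a handful of comparable statistics.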
In our example we will use the following dictionary:

```python
residuals = {
    "me": {
        "_func": "aggregate",
        "method": "mean",
        "field": "topographic__elevation"
    },
    "ep10": {
        "_func": "aggregate",
        "method": "percentile",
        "field": "topographic__elevation",
        "q": 10
    }
}
```

This specifies calculation of the mean of `topographic__elevation` (to be called "me") and the 10th percentile of `topographic__elevation` (called "ep10").

The equivalent portion of a YAML input file would look like:

```yaml
residuals:
    me:
        _func: aggregate
        method: mean
        field: topographic__elevation
    ep10:
        _func: aggregate
        method: percentile
        field: topographic__elevation
        q: 10
```

The following code constructs the `Residual`. Note that the only difference with the prior notebook is that instead of specifying only one grid, here we provide two. Under the hood umami checks that the grids are compatible and will raise errors if they are not.

```
residuals = {
    "me": {
        "_func": "aggregate",
        "method": "mean",
        "field": "topographic__elevation"
    },
    "ep10": {
        "_func": "aggregate",
        "method": "percentile",
        "field": "topographic__elevation",
        "q": 10
    }
}

residual = Residual(model_grid, data_grid, residuals=residuals)
```

To calculate the residuals, run the `calculate` bound method.

```
residual.calculate()
```

Just like `Metric` classes, the `Residual` class has some useful methods and attributes. `residual.names` gives the names as a list, in calculation order.

```
residual.names
```

`residual.values` gives the values as a list, in calculation order.

```
residual.values
```

And a function is available to get the value of a given metric.

```
residual.value("me")
```

## Step 3: Write output

The methods for writing output available in `Metric` are also provided by `Residual`.
```
out = StringIO()
residual.write_residuals_to_file(out, style="dakota")

file_contents = out.getvalue().splitlines()
for line in file_contents:
    print(line.strip())
```

```
out = StringIO()
residual.write_residuals_to_file(out, style="yaml")

file_contents = out.getvalue().splitlines()
for line in file_contents:
    print(line.strip())
```

# Next steps

Now that you have a sense for how the `Metric` and `Residual` classes are used, try the next notebook: [Part 3: Other IO options (using umami without Landlab or terrainbento)](OtherIO_options.ipynb).
```
import pickle as pk
import pandas as pd
%pylab inline

y_dic = pk.load(open("labelDic.cPickle", "rb"))
X_dic = pk.load(open("vectorDicGDIpair.cPickle", "rb"))
df = pd.read_csv('dida_v2_full.csv', index_col=0).replace('CO', 1).replace('TD', 0).replace('UK', -1)

rd = np.vectorize(lambda x: round(x * 10)/10)

essA_changed = {}
essB_changed = {}
recA_changed = {}
recB_changed = {}
path_changed = {}
deef_changed = {}

for ddid in X_dic:
    x1 = rd(array(X_dic[ddid])[[2, 3, 6, 7, 8]])
    x2 = rd(array(df.loc[ddid])[[2, 3, 6, 7, 9, 12]])
    if x1[0] != x2[0]:
        recA_changed[ddid] = (x1[0], x2[0])
    if x1[1] != x2[1]:
        essA_changed[ddid] = (x1[1], x2[1])
    if x1[2] != x2[2]:
        recB_changed[ddid] = (x1[2], x2[2])
    if x1[3] != x2[3]:
        essB_changed[ddid] = (x1[3], x2[3])
    if x1[4] != x2[4]:
        path_changed[ddid] = (x1[4], x2[4])
    if y_dic[ddid] != x2[5]:
        deef_changed[ddid] = (y_dic[ddid], x2[5])

print(essA_changed)
print('Essentiality gene A lost: ' + ', '.join(sorted(essA_changed.keys())))
print('Essentiality gene B lost: ' + ', '.join(sorted(essB_changed.keys())))
print('Recessiveness gene A changed: dd207, 1.00 -> 0.15')

df_sapiens = pd.read_csv('Mus musculus_consolidated.csv').drop(['locus', 'datasets', 'datasetIDs', 'essentiality status'], 1)
df_sapiens.head()

genes = []
for k in df['Pair']:
    g1, g2 = k.split('/')
    if g1 not in genes:
        genes.append(g1)
    if g2 not in genes:
        genes.append(g2)
genes = sorted(genes)

lookup_ess = {}
for line in array(df_sapiens):
    name, ess = line
    if type(name) is float:
        continue
    lookup_ess[name.upper()] = ess

import pickle
pathway_pickle = open('ess_pickle', 'wb')
pickle.dump(lookup_ess, pathway_pickle)

result_s = {}
for g in genes:
    if g in lookup_ess:
        result_s[g] = lookup_ess[g]
    else:
        result_s[g] = 'N/A'
        print(g, 'not found.')

for key in result_s:
    x = result_s[key]
    if x == 'Essential':
        result_s[key] = 1
    elif x == 'Nonessential':
        result_s[key] = 0

new_essA, new_essB = [], []
for pair in df['Pair']:
    g1, g2 = pair.split('/')
    new_essA.append(result_s[g1])
    new_essB.append(result_s[g2])

new_essA = array(new_essA)
new_essB = array(new_essB)

df2 = pd.read_csv('dida_v2_full.csv', index_col=0)
new_essA[new_essA == 'N/A'] = 0.67
new_essB[new_essB == 'N/A'] = 0.62
df2['EssA'] = new_essA
df2['EssB'] = new_essB
df2.to_csv('dida_v2_full_newess.csv')
pd.read_csv('dida_v2_full_newess.csv', index_col=0)

new_essA[new_essA == 'N/A'] = 0
mean(array(new_essA).astype(int))

new_essB[new_essB == 'N/A'] = 0
mean(array(new_essB).astype(int))

for g in result_s:
    print(g + ',' + result_s[g])
```
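The lookup-with-fallback loop that builds `result_s` above is the classic use case for `dict.get`, which collapses the `if key in dict ... else default` pattern into a single call. A compact equivalent sketch with toy data (the gene names here are placeholders, not taken from the dataset):

```python
# Toy stand-ins for lookup_ess and the gene list.
lookup = {"GENEA": "Essential", "GENEB": "Nonessential"}
genes = ["GENEA", "GENEB", "GENEC"]

# dict.get(key, default) returns the default when the key is missing.
result = {g: lookup.get(g, "N/A") for g in genes}

# Map the status strings onto 1 / 0, leaving unknown statuses untouched.
codes = {"Essential": 1, "Nonessential": 0}
result = {g: codes.get(v, v) for g, v in result.items()}
print(result)
```

The dict comprehensions replace both `for` loops while keeping the same three-way outcome (1, 0, or `'N/A'`).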
# Operations on word vectors

Welcome to your first assignment of this week! Because word embeddings are very computationally expensive to train, most ML practitioners will load a pre-trained set of embeddings.

**After this assignment you will be able to:**

- Load pre-trained word vectors, and measure similarity using cosine similarity
- Use word embeddings to solve word analogy problems such as Man is to Woman as King is to ______.
- Modify word embeddings to reduce their gender bias

Let's get started! Run the following cell to load the packages you will need.

```
import numpy as np
from w2v_utils import *
```

Next, let's load the word vectors. For this assignment, we will use 50-dimensional GloVe vectors to represent words. Run the following cell to load the `word_to_vec_map`.

```
words, word_to_vec_map = read_glove_vecs('data/glove.6B.50d.txt')
```

You've loaded:

- `words`: set of words in the vocabulary.
- `word_to_vec_map`: dictionary mapping words to their GloVe vector representation.

You've seen that one-hot vectors do not do a good job capturing what words are similar. GloVe vectors provide much more useful information about the meaning of individual words. Let's now see how you can use GloVe vectors to decide how similar two words are.

# 1 - Cosine similarity

To measure how similar two words are, we need a way to measure the degree of similarity between two embedding vectors for the two words. Given two vectors $u$ and $v$, cosine similarity is defined as follows:

$$\text{CosineSimilarity(u, v)} = \frac {u . v} {||u||_2 ||v||_2} = cos(\theta) \tag{1}$$

where $u.v$ is the dot product (or inner product) of two vectors, $||u||_2$ is the norm (or length) of the vector $u$, and $\theta$ is the angle between $u$ and $v$. This similarity depends on the angle between $u$ and $v$. If $u$ and $v$ are very similar, their cosine similarity will be close to 1; if they are dissimilar, the cosine similarity will take a smaller value.
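A quick numeric illustration of these extremes, using toy vectors rather than the GloVe data:

```python
import numpy as np

def cos_sim(u, v):
    # cos(theta) = (u . v) / (||u|| * ||v||), formula (1) above.
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

u = np.array([1.0, 2.0])
parallel = cos_sim(u, 2 * u)                                       # same direction
orthogonal = cos_sim(np.array([1.0, 0.0]), np.array([0.0, 1.0]))   # perpendicular
opposite = cos_sim(u, -u)                                          # opposite direction
print(parallel, orthogonal, opposite)
```

Vectors pointing the same way score 1, perpendicular vectors score 0, and opposite vectors score -1 — rescaling a vector does not change its cosine similarity, only its direction matters.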
<img src="images/cosine_sim.png" style="width:800px;height:250px;">
<caption><center> **Figure 1**: The cosine of the angle between two vectors is a measure of how similar they are</center></caption>

**Exercise**: Implement the function `cosine_similarity()` to evaluate similarity between word vectors.

**Reminder**: The norm of $u$ is defined as $ ||u||_2 = \sqrt{\sum_{i=1}^{n} u_i^2}$

```
# GRADED FUNCTION: cosine_similarity

def cosine_similarity(u, v):
    """
    Cosine similarity reflects the degree of similarity between u and v

    Arguments:
        u -- a word vector of shape (n,)
        v -- a word vector of shape (n,)

    Returns:
        cosine_similarity -- the cosine similarity between u and v defined by the formula above.
    """

    distance = 0.0

    ### START CODE HERE ###
    # Compute the dot product between u and v (≈1 line)
    dot = np.dot(u, v)
    # Compute the L2 norm of u (≈1 line)
    norm_u = np.linalg.norm(u)
    # Compute the L2 norm of v (≈1 line)
    norm_v = np.linalg.norm(v)
    # Compute the cosine similarity defined by formula (1) (≈1 line)
    cosine_similarity = dot/(norm_u*norm_v)
    ### END CODE HERE ###

    return cosine_similarity
```

```
father = word_to_vec_map["father"]
mother = word_to_vec_map["mother"]
ball = word_to_vec_map["ball"]
crocodile = word_to_vec_map["crocodile"]
france = word_to_vec_map["france"]
italy = word_to_vec_map["italy"]
paris = word_to_vec_map["paris"]
rome = word_to_vec_map["rome"]

print("cosine_similarity(father, mother) = ", cosine_similarity(father, mother))
print("cosine_similarity(ball, crocodile) = ", cosine_similarity(ball, crocodile))
print("cosine_similarity(france - paris, rome - italy) = ", cosine_similarity(france - paris, rome - italy))
```

**Expected Output**:

<table>
  <tr>
    <td> **cosine_similarity(father, mother)** = </td>
    <td> 0.890903844289 </td>
  </tr>
  <tr>
    <td> **cosine_similarity(ball, crocodile)** = </td>
    <td> 0.274392462614 </td>
  </tr>
  <tr>
    <td> **cosine_similarity(france - paris, rome - italy)** = </td>
    <td> -0.675147930817 </td>
  </tr>
</table>

After you get the correct expected
output, please feel free to modify the inputs and measure the cosine similarity between other pairs of words! Playing around with the cosine similarity of other inputs will give you a better sense of how word vectors behave.

## 2 - Word analogy task

In the word analogy task, we complete the sentence <font color='brown'>"*a* is to *b* as *c* is to **____**"</font>. An example is <font color='brown'> '*man* is to *woman* as *king* is to *queen*' </font>. In detail, we are trying to find a word *d*, such that the associated word vectors $e_a, e_b, e_c, e_d$ are related in the following manner: $e_b - e_a \approx e_d - e_c$. We will measure the similarity between $e_b - e_a$ and $e_d - e_c$ using cosine similarity.

**Exercise**: Complete the code below to be able to perform word analogies!

```
# GRADED FUNCTION: complete_analogy

def complete_analogy(word_a, word_b, word_c, word_to_vec_map):
    """
    Performs the word analogy task as explained above: a is to b as c is to ____.

    Arguments:
        word_a -- a word, string
        word_b -- a word, string
        word_c -- a word, string
        word_to_vec_map -- dictionary that maps words to their corresponding vectors.

    Returns:
        best_word -- the word such that v_b - v_a is close to v_best_word - v_c, as measured by cosine similarity
    """

    # convert words to lower case
    word_a, word_b, word_c = word_a.lower(), word_b.lower(), word_c.lower()

    ### START CODE HERE ###
    # Get the word embeddings v_a, v_b and v_c (≈1-3 lines)
    e_a = word_to_vec_map.get(word_a)
    e_b = word_to_vec_map.get(word_b)
    e_c = word_to_vec_map.get(word_c)
    # e_a, e_b, e_c = word_to_vec_map[word_a], word_to_vec_map[word_b], word_to_vec_map[word_c]
    ### END CODE HERE ###

    words = word_to_vec_map.keys()
    max_cosine_sim = -100   # Initialize max_cosine_sim to a large negative number
    best_word = None        # Initialize best_word with None, it will help keep track of the word to output

    # loop over the whole word vector set
    for w in words:
        # to avoid best_word being one of the input words, pass on them.
        if w in [word_a, word_b, word_c]:
            continue

        ### START CODE HERE ###
        # Compute cosine similarity between the vector (e_b - e_a) and the vector ((w's vector representation) - e_c) (≈1 line)
        cosine_sim = cosine_similarity(np.subtract(e_b, e_a), np.subtract(word_to_vec_map.get(w), e_c))

        # If the cosine_sim is more than the max_cosine_sim seen so far,
        # then: set the new max_cosine_sim to the current cosine_sim and the best_word to the current word (≈3 lines)
        if cosine_sim > max_cosine_sim:
            max_cosine_sim = cosine_sim
            best_word = w
        ### END CODE HERE ###

    return best_word
```

Run the cell below to test your code, this may take 1-2 minutes.

```
triads_to_try = [('italy', 'italian', 'spain'), ('india', 'delhi', 'japan'), ('man', 'woman', 'boy'), ('small', 'smaller', 'large')]
for triad in triads_to_try:
    print ('{} -> {} :: {} -> {}'.format( *triad, complete_analogy(*triad, word_to_vec_map)))
```

**Expected Output**:

<table>
  <tr>
    <td> **italy -> italian** :: </td>
    <td> spain -> spanish </td>
  </tr>
  <tr>
    <td> **india -> delhi** :: </td>
    <td> japan -> tokyo </td>
  </tr>
  <tr>
    <td> **man -> woman ** :: </td>
    <td> boy -> girl </td>
  </tr>
  <tr>
    <td> **small -> smaller ** :: </td>
    <td> large -> larger </td>
  </tr>
</table>

Once you get the correct expected output, please feel free to modify the input cells above to test your own analogies. Try to find some other analogy pairs that do work, but also find some where the algorithm doesn't give the right answer: For example, you can try small->smaller as big->?.

### Congratulations!

You've come to the end of this assignment. Here are the main points you should remember:

- Cosine similarity is a good way to compare similarity between pairs of word vectors. (Though L2 distance works too.)
- For NLP applications, using a pre-trained set of word vectors from the internet is often a good way to get started.

Even though you have finished the graded portions, we recommend you take a look at the rest of this notebook.
Congratulations on finishing the graded portions of this notebook!

## 3 - Debiasing word vectors (OPTIONAL/UNGRADED)

In the following exercise, you will examine gender biases that can be reflected in a word embedding, and explore algorithms for reducing the bias. In addition to learning about the topic of debiasing, this exercise will also help hone your intuition about what word vectors are doing. This section involves a bit of linear algebra, though you can probably complete it even without being an expert in linear algebra, and we encourage you to give it a shot. This portion of the notebook is optional and is not graded.

Let's first see how the GloVe word embeddings relate to gender. You will first compute a vector $g = e_{woman}-e_{man}$, where $e_{woman}$ represents the word vector corresponding to the word *woman*, and $e_{man}$ corresponds to the word vector corresponding to the word *man*. The resulting vector $g$ roughly encodes the concept of "gender". (You might get a more accurate representation if you compute $g_1 = e_{mother}-e_{father}$, $g_2 = e_{girl}-e_{boy}$, etc. and average over them. But just using $e_{woman}-e_{man}$ will give good enough results for now.)

```
g = word_to_vec_map['woman'] - word_to_vec_map['man']
print(g)
```

Now, you will consider the cosine similarity of different words with $g$. Consider what a positive value of similarity means vs a negative cosine similarity.

```
print ('List of names and their similarities with constructed vector:')

# girls and boys name
name_list = ['john', 'marie', 'sophie', 'ronaldo', 'priya', 'rahul', 'danielle', 'reza', 'katy', 'yasmin']

for w in name_list:
    print (w, cosine_similarity(word_to_vec_map[w], g))
```

As you can see, female first names tend to have a positive cosine similarity with our constructed vector $g$, while male first names tend to have a negative cosine similarity. This is not surprising, and the result seems acceptable. But let's try with some other words.
``` print('Other words and their similarities:') word_list = ['lipstick', 'guns', 'science', 'arts', 'literature', 'warrior','doctor', 'tree', 'receptionist', 'technology', 'fashion', 'teacher', 'engineer', 'pilot', 'computer', 'singer'] for w in word_list: print (w, cosine_similarity(word_to_vec_map[w], g)) ``` Do you notice anything surprising? It is astonishing how these results reflect certain unhealthy gender stereotypes. For example, "computer" is closer to "man" while "literature" is closer to "woman". Ouch! We'll see below how to reduce the bias of these vectors, using an algorithm due to [Bolukbasi et al., 2016](https://arxiv.org/abs/1607.06520). Note that some word pairs such as "actor"/"actress" or "grandmother"/"grandfather" should remain gender specific, while other words such as "receptionist" or "technology" should be neutralized, i.e. not be gender-related. You will have to treat these two types of words differently when debiasing. ### 3.1 - Neutralize bias for non-gender specific words The figure below should help you visualize what neutralizing does. If you're using a 50-dimensional word embedding, the 50 dimensional space can be split into two parts: the bias-direction $g$, and the remaining 49 dimensions, which we'll call $g_{\perp}$. In linear algebra, we say that the 49 dimensional $g_{\perp}$ is perpendicular (or "orthogonal") to $g$, meaning it is at 90 degrees to $g$. The neutralization step takes a vector such as $e_{receptionist}$ and zeros out the component in the direction of $g$, giving us $e_{receptionist}^{debiased}$. Even though $g_{\perp}$ is 49 dimensional, given the limitations of what we can draw on a screen, we illustrate it using a 1 dimensional axis below. <img src="images/neutral.png" style="width:800px;height:300px;"> <caption><center> **Figure 2**: The word vector for "receptionist" represented before and after applying the neutralize operation. 
</center></caption> **Exercise**: Implement `neutralize()` to remove the bias of words such as "receptionist" or "scientist". Given an input embedding $e$, you can use the following formulas to compute $e^{debiased}$: $$e^{bias\_component} = \frac{e \cdot g}{||g||_2^2} * g\tag{2}$$ $$e^{debiased} = e - e^{bias\_component}\tag{3}$$ If you are an expert in linear algebra, you may recognize $e^{bias\_component}$ as the projection of $e$ onto the direction $g$. If you're not an expert in linear algebra, don't worry about this. <!-- **Reminder**: a vector $u$ can be split into two parts: its projection over a vector-axis $v$ and its projection over the axis orthogonal to $v$: $$u = u_B + u_{\perp}$$ where $u_B = \frac{u \cdot v}{||v||_2^2} * v$ and $u_{\perp} = u - u_B$ !--> ``` def neutralize(word, g, word_to_vec_map): """ Removes the bias of "word" by projecting it on the space orthogonal to the bias axis. This function ensures that gender neutral words are zero in the gender subspace. Arguments: word -- string indicating the word to debias g -- numpy-array of shape (50,), corresponding to the bias axis (such as gender) word_to_vec_map -- dictionary mapping words to their corresponding vectors. Returns: e_debiased -- neutralized word vector representation of the input "word" """ ### START CODE HERE ### # Select word vector representation of "word". Use word_to_vec_map. (≈ 1 line) e = word_to_vec_map[word] # Compute e_biascomponent using the formula given above. (≈ 1 line) e_biascomponent = (np.dot(e,g)/np.linalg.norm(g)**2)*g # Neutralize e by subtracting e_biascomponent from it # e_debiased should be equal to its orthogonal projection. 
(≈ 1 line) e_debiased = e-e_biascomponent ### END CODE HERE ### return e_debiased e = "receptionist" print("cosine similarity between " + e + " and g, before neutralizing: ", cosine_similarity(word_to_vec_map["receptionist"], g)) e_debiased = neutralize("receptionist", g, word_to_vec_map) print("cosine similarity between " + e + " and g, after neutralizing: ", cosine_similarity(e_debiased, g)) ``` **Expected Output**: The second result is essentially 0, up to numerical round-off (on the order of $10^{-17}$). <table> <tr> <td> **cosine similarity between receptionist and g, before neutralizing:** : </td> <td> 0.330779417506 </td> </tr> <tr> <td> **cosine similarity between receptionist and g, after neutralizing:** : </td> <td> -3.26732746085e-17 </td> </tr> </table> ### 3.2 - Equalization algorithm for gender-specific words Next, let's see how debiasing can also be applied to word pairs such as "actress" and "actor." Equalization is applied to pairs of words that you might want to have differ only through the gender property. As a concrete example, suppose that "actress" is closer to "babysit" than "actor." By applying neutralizing to "babysit" we can reduce the gender-stereotype associated with babysitting. But this still does not guarantee that "actor" and "actress" are equidistant from "babysit." The equalization algorithm takes care of this. The key idea behind equalization is to make sure that a particular pair of words are equidistant from the 49-dimensional $g_\perp$. The equalization step also ensures that the two equalized vectors are now the same distance from $e_{receptionist}^{debiased}$, or from any other word that has been neutralized. In pictures, this is how equalization works: <img src="images/equalize10.png" style="width:800px;height:400px;"> The derivation of the linear algebra to do this is a bit more complex. (See Bolukbasi et al., 2016 for details.) 
But the key equations are: $$ \mu = \frac{e_{w1} + e_{w2}}{2}\tag{4}$$ $$ \mu_{B} = \frac {\mu \cdot \text{bias\_axis}}{||\text{bias\_axis}||_2^2} *\text{bias\_axis} \tag{5}$$ $$\mu_{\perp} = \mu - \mu_{B} \tag{6}$$ $$ e_{w1B} = \frac {e_{w1} \cdot \text{bias\_axis}}{||\text{bias\_axis}||_2^2} *\text{bias\_axis} \tag{7}$$ $$ e_{w2B} = \frac {e_{w2} \cdot \text{bias\_axis}}{||\text{bias\_axis}||_2^2} *\text{bias\_axis} \tag{8}$$ $$e_{w1B}^{corrected} = \sqrt{ |{1 - ||\mu_{\perp} ||^2_2} |} * \frac{e_{w1B} - \mu_B} {|(e_{w1} - \mu_{\perp}) - \mu_B|} \tag{9}$$ $$e_{w2B}^{corrected} = \sqrt{ |{1 - ||\mu_{\perp} ||^2_2} |} * \frac{e_{w2B} - \mu_B} {|(e_{w2} - \mu_{\perp}) - \mu_B|} \tag{10}$$ $$e_1 = e_{w1B}^{corrected} + \mu_{\perp} \tag{11}$$ $$e_2 = e_{w2B}^{corrected} + \mu_{\perp} \tag{12}$$ **Exercise**: Implement the function below. Use the equations above to get the final equalized version of the pair of words. Good luck! ``` def equalize(pair, bias_axis, word_to_vec_map): """ Debias gender specific words by following the equalize method described in the figure above. Arguments: pair -- pair of strings of gender specific words to debias, e.g. ("actress", "actor") bias_axis -- numpy-array of shape (50,), vector corresponding to the bias axis, e.g. gender word_to_vec_map -- dictionary mapping words to their corresponding vectors Returns e_1 -- word vector corresponding to the first word e_2 -- word vector corresponding to the second word """ ### START CODE HERE ### # Step 1: Select word vector representation of "word". Use word_to_vec_map. 
(≈ 2 lines) w1, w2 = pair[0],pair[1] e_w1, e_w2 = word_to_vec_map[w1],word_to_vec_map[w2] # Step 2: Compute the mean of e_w1 and e_w2 (≈ 1 line) mu = (e_w1 + e_w2)/2 # Step 3: Compute the projections of mu over the bias axis and the orthogonal axis (≈ 2 lines) mu_B = (np.dot(mu,bias_axis)/np.linalg.norm(bias_axis)**2)*bias_axis mu_orth = mu-mu_B # Step 4: Use equations (7) and (8) to compute e_w1B and e_w2B (≈2 lines) e_w1B = (np.dot(e_w1,bias_axis)/np.linalg.norm(bias_axis)**2)*bias_axis e_w2B = (np.dot(e_w2,bias_axis)/np.linalg.norm(bias_axis)**2)*bias_axis # Step 5: Adjust the Bias part of e_w1B and e_w2B using the formulas (9) and (10) given above (≈2 lines) corrected_e_w1B = np.sqrt(np.abs(1-np.linalg.norm(mu_orth)**2))*((e_w1B - mu_B)/np.abs((e_w1-mu_orth)-mu_B)) corrected_e_w2B = np.sqrt(np.abs(1-np.linalg.norm(mu_orth)**2))*((e_w2B - mu_B)/np.abs((e_w2-mu_orth)-mu_B)) # Step 6: Debias by equalizing e1 and e2 to the sum of their corrected projections (≈2 lines) e1 = corrected_e_w1B + mu_orth e2 = corrected_e_w2B + mu_orth ### END CODE HERE ### return e1, e2 print("cosine similarities before equalizing:") print("cosine_similarity(word_to_vec_map[\"man\"], gender) = ", cosine_similarity(word_to_vec_map["man"], g)) print("cosine_similarity(word_to_vec_map[\"woman\"], gender) = ", cosine_similarity(word_to_vec_map["woman"], g)) print() e1, e2 = equalize(("man", "woman"), g, word_to_vec_map) print("cosine similarities after equalizing:") print("cosine_similarity(e1, gender) = ", cosine_similarity(e1, g)) print("cosine_similarity(e2, gender) = ", cosine_similarity(e2, g)) ``` **Expected Output**: cosine similarities before equalizing: <table> <tr> <td> **cosine_similarity(word_to_vec_map["man"], gender)** = </td> <td> -0.117110957653 </td> </tr> <tr> <td> **cosine_similarity(word_to_vec_map["woman"], gender)** = </td> <td> 0.356666188463 </td> </tr> </table> cosine similarities after equalizing: <table> <tr> <td> **cosine_similarity(u1, gender)** = </td> <td> 
-0.700436428931 </td> </tr> <tr> <td> **cosine_similarity(u2, gender)** = </td> <td> 0.700436428931 </td> </tr> </table> Please feel free to play with the input words in the cell above, to apply equalization to other pairs of words. These debiasing algorithms are very helpful for reducing bias, but are not perfect and do not eliminate all traces of bias. For example, one weakness of this implementation was that the bias direction $g$ was defined using only the pair of words _woman_ and _man_. As discussed earlier, if $g$ were defined by computing $g_1 = e_{woman} - e_{man}$; $g_2 = e_{mother} - e_{father}$; $g_3 = e_{girl} - e_{boy}$; and so on and averaging over them, you would obtain a better estimate of the "gender" dimension in the 50 dimensional word embedding space. Feel free to play with such variants as well. ### Congratulations You have come to the end of this notebook, and have seen a lot of the ways that word vectors can be used as well as modified. Congratulations on finishing this notebook! **References**: - The debiasing algorithm is from Bolukbasi et al., 2016, [Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings](https://papers.nips.cc/paper/6228-man-is-to-computer-programmer-as-woman-is-to-homemaker-debiasing-word-embeddings.pdf) - The GloVe word embeddings were due to Jeffrey Pennington, Richard Socher, and Christopher D. Manning. (https://nlp.stanford.edu/projects/glove/)
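As a final self-contained sanity check on the neutralize formula from equations (2)-(3) — independent of the GloVe data, using random vectors instead of real embeddings — the debiased vector should always come out orthogonal to $g$, up to round-off:

```python
import numpy as np

def neutralize_vec(e, g):
    # e_bias = ((e . g) / ||g||^2) * g ;  e_debiased = e - e_bias
    e_bias = (np.dot(e, g) / np.linalg.norm(g) ** 2) * g
    return e - e_bias

rng = np.random.default_rng(0)
e = rng.standard_normal(50)  # random stand-in for a 50-d word vector
g = rng.standard_normal(50)  # random stand-in for the bias direction

e_deb = neutralize_vec(e, g)
# The debiased vector has (numerically) zero component along g
print(np.dot(e_deb, g))
```

Because the check holds for arbitrary vectors, it confirms the formula itself rather than any property of the particular embedding.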
# Introduction # In this exercise, you'll work through several applications of PCA to the [*Ames*](https://www.kaggle.com/c/house-prices-advanced-regression-techniques/data) dataset. Run this cell to set everything up! ``` # Setup feedback system from learntools.core import binder binder.bind(globals()) from learntools.feature_engineering_new.ex5 import * import matplotlib.pyplot as plt import numpy as np import pandas as pd import seaborn as sns from sklearn.decomposition import PCA from sklearn.feature_selection import mutual_info_regression from sklearn.model_selection import cross_val_score from xgboost import XGBRegressor # Set Matplotlib defaults plt.style.use("seaborn-whitegrid") plt.rc("figure", autolayout=True) plt.rc( "axes", labelweight="bold", labelsize="large", titleweight="bold", titlesize=14, titlepad=10, ) def apply_pca(X, standardize=True): # Standardize if standardize: X = (X - X.mean(axis=0)) / X.std(axis=0) # Create principal components pca = PCA() X_pca = pca.fit_transform(X) # Convert to dataframe component_names = [f"PC{i+1}" for i in range(X_pca.shape[1])] X_pca = pd.DataFrame(X_pca, columns=component_names) # Create loadings loadings = pd.DataFrame( pca.components_.T, # transpose the matrix of loadings columns=component_names, # so the columns are the principal components index=X.columns, # and the rows are the original features ) return pca, X_pca, loadings def plot_variance(pca, width=8, dpi=100): # Create figure fig, axs = plt.subplots(1, 2) n = pca.n_components_ grid = np.arange(1, n + 1) # Explained variance evr = pca.explained_variance_ratio_ axs[0].bar(grid, evr) axs[0].set( xlabel="Component", title="% Explained Variance", ylim=(0.0, 1.0) ) # Cumulative Variance cv = np.cumsum(evr) axs[1].plot(np.r_[0, grid], np.r_[0, cv], "o-") axs[1].set( xlabel="Component", title="% Cumulative Variance", ylim=(0.0, 1.0) ) # Set up figure fig.set(figwidth=8, dpi=100) return axs def make_mi_scores(X, y): X = X.copy() for colname in 
X.select_dtypes(["object", "category"]): X[colname], _ = X[colname].factorize() # All discrete features should now have integer dtypes discrete_features = [pd.api.types.is_integer_dtype(t) for t in X.dtypes] mi_scores = mutual_info_regression(X, y, discrete_features=discrete_features, random_state=0) mi_scores = pd.Series(mi_scores, name="MI Scores", index=X.columns) mi_scores = mi_scores.sort_values(ascending=False) return mi_scores def score_dataset(X, y, model=XGBRegressor()): # Label encoding for categoricals for colname in X.select_dtypes(["category", "object"]): X[colname], _ = X[colname].factorize() # Metric for Housing competition is RMSLE (Root Mean Squared Log Error) score = cross_val_score( model, X, y, cv=5, scoring="neg_mean_squared_log_error", ) score = -1 * score.mean() score = np.sqrt(score) return score df = pd.read_csv("../input/fe-course-data/ames.csv") ``` Let's choose a few features that are highly correlated with our target, `SalePrice`. ``` features = [ "GarageArea", "YearRemodAdd", "TotalBsmtSF", "GrLivArea", ] print("Correlation with SalePrice:\n") print(df[features].corrwith(df.SalePrice)) ``` We'll rely on PCA to untangle the correlational structure of these features and suggest relationships that might be usefully modeled with new features. Run this cell to apply PCA and extract the loadings. ``` X = df.copy() y = X.pop("SalePrice") X = X.loc[:, features] # `apply_pca`, defined above, reproduces the code from the tutorial pca, X_pca, loadings = apply_pca(X) print(loadings) ``` # 1) Interpret Component Loadings Look at the loadings for components `PC1` and `PC3`. Can you think of a description of what kind of contrast each component has captured? After you've thought about it, run the next cell for a solution. ``` # View the solution (Run this cell to receive credit!) 
q_1.check() ``` ------------------------------------------------------------------------------- Your goal in this question is to use the results of PCA to discover one or more new features that improve the performance of your model. One option is to create features inspired by the loadings, like we did in the tutorial. Another option is to use the components themselves as features (that is, add one or more columns of `X_pca` to `X`). # 2) Create New Features Add one or more new features to the dataset `X`. For a correct solution, get a validation score below 0.140 RMSLE. (If you get stuck, feel free to use the `hint` below!) ``` X = df.copy() y = X.pop("SalePrice") # YOUR CODE HERE: Add new features to X. # ____ score = score_dataset(X, y) print(f"Your score: {score:.5f} RMSLE") # Check your answer q_2.check() # Lines below will give you a hint or solution code #_COMMENT_IF(PROD)_ q_2.hint() #_COMMENT_IF(PROD)_ q_2.solution() #%%RM_IF(PROD)%% X = df.copy() y = X.pop("SalePrice") X["Feature1"] = X.GrLivArea - X.TotalBsmtSF score = score_dataset(X, y) print(f"Your score: {score:.5f} RMSLE") q_2.assert_check_failed() #%%RM_IF(PROD)%% # Solution 1: Inspired by loadings X = df.copy() y = X.pop("SalePrice") X["Feature1"] = X.GrLivArea + X.TotalBsmtSF X["Feature2"] = X.YearRemodAdd * X.TotalBsmtSF score = score_dataset(X, y) print(f"Your score: {score:.5f} RMSLE") # Solution 2: Uses components X = df.copy() y = X.pop("SalePrice") X = X.join(X_pca) score = score_dataset(X, y) print(f"Your score: {score:.5f} RMSLE") q_2.assert_check_passed() ``` ------------------------------------------------------------------------------- The next question explores a way you can use PCA to detect outliers in the dataset (meaning, data points that are unusually extreme in some way). Outliers can have a detrimental effect on model performance, so it's good to be aware of them in case you need to take corrective action. 
PCA in particular can show you anomalous *variation* which might not be apparent from the original features: neither small houses nor houses with large basements are unusual, but it is unusual for small houses to have large basements. That's the kind of thing a principal component can show you. Run the next cell to show distribution plots for each of the principal components you created above. ``` sns.catplot( y="value", col="variable", data=X_pca.melt(), kind='boxen', sharey=False, col_wrap=2, ); ``` As you can see, in each of the components there are several points lying at the extreme ends of the distributions -- outliers, that is. Now run the next cell to see those houses that sit at the extremes of a component: ``` # You can change PC1 to PC2, PC3, or PC4 component = "PC1" idx = X_pca[component].sort_values(ascending=False).index df.loc[idx, ["SalePrice", "Neighborhood", "SaleCondition"] + features] ``` # 3) Outlier Detection Do you notice any patterns in the extreme values? Does it seem like the outliers are coming from some special subset of the data? After you've thought about your answer, run the next cell for the solution and some discussion. ``` # View the solution (Run this cell to receive credit!) q_3.check() ``` # Keep Going # [**Apply target encoding**](#$NEXT_NOTEBOOK_URL$) to give a boost to categorical features.
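The outlier-detection pattern from question 3 can be sketched in a self-contained way with synthetic data (standing in for the Ames features; the 3-standard-deviation threshold is an illustrative choice, not part of the exercise): standardize, project with `sklearn.decomposition.PCA`, and flag rows whose component scores are extreme.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 4))
X[0] = [8.0, -8.0, 8.0, -8.0]  # plant one obvious outlier in row 0

# Standardize, then project onto principal components
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
scores = PCA().fit_transform(Xs)

# Flag rows more than 3 standard deviations out on any component
z = np.abs(scores) / scores.std(axis=0)
outliers = np.where((z > 3).any(axis=1))[0]
print(outliers)  # row 0 should be among the flagged indices
```

A few ordinary rows may also cross the threshold by chance; in practice you would sort by score (as the notebook does with `sort_values`) and inspect the extremes rather than rely on a hard cutoff.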
<img width="10%" alt="Naas" src="https://landen.imgix.net/jtci2pxwjczr/assets/5ice39g4.png?w=160"/> # LinkedIn - Send posts feed to gsheet <a href="https://app.naas.ai/user-redirect/naas/downloader?url=https://raw.githubusercontent.com/jupyter-naas/awesome-notebooks/master/LinkedIn/LinkedIn_Send_posts_feed_to_gsheet.ipynb" target="_parent"><img src="https://naasai-public.s3.eu-west-3.amazonaws.com/open_in_naas.svg"/></a> **Tags:** #linkedin #profile #post #stats #naas_drivers #automation #content #googlesheets **Author:** [Florent Ravenel](https://www.linkedin.com/in/florent-ravenel/) ## Input ### Import libraries ``` from naas_drivers import linkedin, gsheet import naas import pandas as pd ``` ### Setup LinkedIn 👉 <a href='https://www.notion.so/LinkedIn-driver-Get-your-cookies-d20a8e7e508e42af8a5b52e33f3dba75'>How to get your cookies?</a> ``` # LinkedIn cookies LI_AT = "AQEDARCNSioDe6wmAAABfqF-HR4AAAF-xYqhHlYAtSu7EZZEpFer0UZF-GLuz2DNSz4asOOyCRxPGFjenv37irMObYYgxxxxxxx" JSESSIONID = "ajax:12XXXXXXXXXXXXXXXXX" # LinkedIn profile URL PROFILE_URL = "https://www.linkedin.com/in/xxxxxx/" # Number of posts updated in Gsheet (this avoids requesting the entire database) LIMIT = 10 ``` ### Setup your Google Sheet 👉 Get your spreadsheet URL<br> 👉 Share your gsheet with our service account to connect : naas-share@naas-gsheets.iam.gserviceaccount.com<br> 👉 Create your sheet before sending data into it ``` # Spreadsheet URL SPREADSHEET_URL = "https://docs.google.com/spreadsheets/d/XXXXXXXXXXXXXXXXXXXX" # Sheet name SHEET_NAME = "LK_POSTS_FEED" ``` ### Setup Naas ``` naas.scheduler.add(cron="0 8 * * *") #-> To delete your scheduler, please uncomment the line below and execute this cell # naas.scheduler.delete() ``` ## Model ### Get data from Google Sheet ``` df_gsheet = gsheet.connect(SPREADSHEET_URL).get(sheet_name=SHEET_NAME) df_gsheet ``` ### Get new posts and update last posts stats ``` def get_new_posts(df_gsheet, key, limit=LIMIT, sleep=False): posts = [] if 
len(df_gsheet) > 0: posts = df_gsheet[key].unique() else: # Empty sheet: fetch the full feed once df_posts_feed = linkedin.connect(LI_AT, JSESSIONID).profile.get_posts_feed(PROFILE_URL, limit=-1, sleep=sleep) return df_posts_feed # Fetch only the latest posts and merge them with the existing rows df_posts_feed = linkedin.connect(LI_AT, JSESSIONID).profile.get_posts_feed(PROFILE_URL, limit=limit, sleep=sleep) df_new = pd.concat([df_posts_feed, df_gsheet]).drop_duplicates(key, keep="first") return df_new df_new = get_new_posts(df_gsheet, "POST_URL", limit=LIMIT) df_new ``` ## Output ### Send to Google Sheet ``` gsheet.connect(SPREADSHEET_URL).send(df_new, sheet_name=SHEET_NAME, append=False) ```
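The merge step in `get_new_posts` relies on a common pandas pattern: concatenate the freshly fetched rows first, then drop duplicate keys with `keep="first"` so the refreshed stats win over the stale rows. A minimal sketch with toy data (the `POST_URL`/`VIEWS` columns here are illustrative, not the real LinkedIn schema):

```python
import pandas as pd

old = pd.DataFrame({"POST_URL": ["a", "b"], "VIEWS": [10, 20]})  # rows already in the sheet
new = pd.DataFrame({"POST_URL": ["b", "c"], "VIEWS": [25, 5]})   # freshly fetched rows

# New rows come first, so keep="first" prefers the refreshed stats for "b"
merged = pd.concat([new, old]).drop_duplicates("POST_URL", keep="first")
print(merged.sort_values("POST_URL").to_dict("list"))
# {'POST_URL': ['a', 'b', 'c'], 'VIEWS': [10, 25, 5]}
```

Ordering matters: concatenating `[old, new]` with `keep="first"` would silently keep the stale view counts instead.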
--- # **Product Backorders** --- ## Introduction A **product backorder** is a customer order that has not been fulfilled. Product backorder may be the result of strong sales performance (e.g. the product is in such high demand that production cannot keep up with sales). However, backorders can upset consumers, lead to canceled orders and decreased customer loyalty. Companies want to avoid backorders, but also avoid overstocking every product (leading to higher inventory costs). Hence, this project aims to develop a model that can predict whether a product will go on backorder or not. --- ## Problem Statement 1. What are the variables that lead to backorder? 2. What are the relationships between the variables? --- ## Hypothesis National inventory and sales performance are directly correlated with backorder. --- ## Objective 1. To identify the relationship between the attributes 2. To identify which attributes correlate most to backorder 3. To predict backorder by selecting relevant attributes --- ## Dataset From the [Backorders Wiki Page](https://github.com/AasthaMadan/Product-Backorders/wiki/Product-back-orders-prediction), we can find the information about the dataset * sku – Random ID for the product * national_inv – Current inventory level for the part * lead_time – Transit time for product (if available) * in_transit_qty – Amount of product in transit from source * forecast_3_month – Forecast sales for the next 3 months * forecast_6_month – Forecast sales for the next 6 months * forecast_9_month – Forecast sales for the next 9 months * sales_1_month – Sales quantity for the prior 1 month time period * sales_3_month – Sales quantity for the prior 3 month time period * sales_6_month – Sales quantity for the prior 6 month time period * sales_9_month – Sales quantity for the prior 9 month time period * min_bank – Minimum recommend amount to stock * potential_issue – Source issue for part identified * pieces_past_due – Parts overdue from source * perf_6_month_avg – Source 
performance for prior 6 month period * perf_12_month_avg – Source performance for prior 12 month period * local_bo_qty – Amount of stock orders overdue * deck_risk – Part risk flag * oe_constraint – Part risk flag * ppap_risk – Part risk flag * stop_auto_buy – Part risk flag * rev_stop – Part risk flag * went_on_backorder – Product actually went on backorder. --- # **Exploratory Data Analysis** Exploratory data analysis is the inital investigation of data so as to discover patterns, spot anomalies, and test hypothesis with the help of statistics. To do this, we will first import our datasets and merge them using the pandas library ``` import pandas as pd # Load train and test data train_df = pd.read_csv("drive/MyDrive/data_mining_portfolio/Kaggle_Training_Dataset_v2.csv") test_df = pd.read_csv("drive/MyDrive/data_mining_portfolio/Kaggle_Test_Dataset_v2.csv") # Merge both the datasets merged_df = pd.concat([train_df, test_df]) ``` Next, we can look at some properties of the dataset such as shape, data types, and part of the actual data itself. ``` # Size of dataset print("Shape:\n", merged_df.shape) # Look at the data types print("\nDatatypes:\n", merged_df.dtypes) # An initial look at the 1st 5 rows print("\nFirst 5:\n", merged_df.head()) # The last 5 rows print("\nLast5:\n", merged_df.tail()) # Count number of null values for each variable print("\nNulls:\n", merged_df.isnull().sum()) ``` We find out that: * We can see there's almost 2 million records with 23 different attributes, * 15 of these attributes are numerical * 8 of these attributes are non-numerical * lead_time has 115619 null values --- We now aggregate the dataset, first a summary of the overall dataset, then a summary with separating the classes. 
``` # Select numerical parameters num_params = ['national_inv', 'lead_time', 'in_transit_qty', 'forecast_3_month', 'forecast_6_month', 'forecast_9_month', 'sales_1_month', 'sales_3_month', 'sales_6_month', 'sales_9_month', 'min_bank', 'pieces_past_due', 'perf_6_month_avg', 'perf_12_month_avg', 'local_bo_qty'] # Describe data print("\nSummary:\n", merged_df[num_params].describe().transpose()) # Pivot backorder print("\nBackorder:\n", merged_df.pivot_table(values=num_params,index=['went_on_backorder']).transpose()) # Class proportion for target variable print("\nProportion of Backorder before SMOTE:\n", merged_df['went_on_backorder'].value_counts(normalize=True)) ``` We find that overall: * The mean inventory of products is about 500 * The mean product in transit is 43 * The mean sales per month is 55 When separated by class: * Products that did not go on backorder have high inventory, but also higher sales and quantity in transit. * Products that went on backorder have low inventory, but also lower sales and quantity in transit. * **For products that go on backorder, sales are higher than inventory, whereas for products that do not go on backorder, sales are lower than inventory.** This confirms our hypothesis that national inventory and sales performance are directly correlated with backorder. --- Now we can construct a correlation matrix to see the correlation between each attribute. ``` import matplotlib.pyplot as plt import numpy as np # Correlation Matrix Plot of all variables varnames=list(merged_df)[1:] correlations = merged_df[varnames].corr() fig = plt.figure() ax = fig.add_subplot(111) cax = ax.matshow(correlations, vmin=-1, vmax=1) fig.colorbar(cax) ticks = np.arange(0,23,1) ax.set_xticks(ticks) ax.set_yticks(ticks) ax.set_xticklabels(varnames,rotation=90) ax.set_yticklabels(varnames) plt.show() ``` We can see that: * Sales and forecast variables are highly correlated. 
This means that, when doing our prediction, we don't have to use all of the attributes that correlate highly with each other. Using fewer attributes may speed up training time. --- ## Tableau EDA We can now perform more complex data analysis on Tableau. The first chart we will look at is **Real sales vs forecast**: ![](https://drive.google.com/uc?export=view&id=1ssA2Qwh7XkgwkKPI5bjAfJAAkbpfSH0M) We see that the prediction in the original dataset correlates highly with the real sales. We can investigate further by looking at the "yes" and "no" backorder products separately. ![](https://drive.google.com/uc?export=view&id=1YSdv74N5oC-8aDxpzRJ1AcJjlBlCY56a) For the "no" backorder products, the forecasted sales and the actual sales are the same. But for the "yes" backorder products, there is a disparity between the forecasted sales and the actual sales. **The actual sales are higher than the forecasted sales for backorder products.** --- # **Data Pre-processing** Data pre-processing is a data mining technique that involves transforming raw data into an understandable format. Real-world data is often inconsistent and incomplete, and we need to transform it into a format that our machine learning models can understand. First of all, we need to get rid of the null values, as there are many null values in the dataset. We also remove the 'sku' column as it is the ID of the product and is not meaningful in any way. 
After that, we can compare the proportion of "Yes" and "No" backorder products: ``` from sklearn.preprocessing import normalize from imblearn.over_sampling import SMOTE # Replace NaN values in lead_time merged_df.lead_time = merged_df.lead_time.fillna(merged_df.lead_time.median()) # Change the -99 placeholder to NA for perf_6_month_avg and perf_12_month_avg merged_df['perf_6_month_avg'] = merged_df['perf_6_month_avg'].replace(-99, np.NaN) merged_df['perf_12_month_avg'] = merged_df['perf_12_month_avg'].replace(-99, np.NaN) # Drop rows with null values merged_df = merged_df.dropna() # Remove the sku column merged_df = merged_df.drop(["sku"], axis=1) # Class proportion for target variable print("\nProportion of Backorder before SMOTE:\n", merged_df['went_on_backorder'].value_counts(normalize=True)) ``` And find out that 98.13% of products are "No" backorder and only 1.87% are "Yes" backorder. --- Next, we want to transform the 'Yes' and 'No' values to '1' and '0', as some models are not able to work with non-numerical values. Also, we want to remove the records where forecast and sales are 0, because these products do not contribute to our prediction. ``` # Convert from non-numerical to numerical cat_params = ['potential_issue', 'deck_risk', 'oe_constraint', 'ppap_risk', 'stop_auto_buy', 'rev_stop', 'went_on_backorder'] for param in cat_params: merged_df[param] = (merged_df[param] == 'Yes').astype(int) # Remove records where forecast and sales are 0 attributes = ['forecast_3_month', 'forecast_6_month', 'forecast_9_month', 'sales_1_month', 'sales_3_month', 'sales_6_month', 'sales_9_month'] for attr in attributes: merged_df = merged_df.drop(merged_df[merged_df[attr] == 0].index) ``` As the data is still vastly unbalanced, we need to balance it somehow. We can do this by applying the SMOTE technique. After that, we can save it to a csv file for future use. 
``` # SMOTE technique to balance dataset X = merged_df.drop(['went_on_backorder'], axis = 1) y = merged_df['went_on_backorder'] oversample = SMOTE() X, y = oversample.fit_resample(X, y) df = pd.concat([pd.DataFrame(X), pd.DataFrame(y)], axis=1) # Rename labels in final dataset labels = merged_df.columns df.columns = labels # Save to csv df.to_csv(r'data.csv') # Class proportion before SMOTE print("\nProportion of Backorder before SMOTE:\n", merged_df['went_on_backorder'].value_counts(normalize=True)) # Class proportion after SMOTE print("\nProportion of Backorder after SMOTE:\n", df['went_on_backorder'].value_counts(normalize=True)) ``` --- # **Descriptive Data Mining** Descriptive data mining is applying data mining techniques to determine the similarities in the data and to find existing patterns. We will apply 2 descriptive data mining techniques here: 1. A Priori Association Rules 2. K-Means Clustering --- ## Association Rules Association Rules calculate how frequently items appear together. --- ### RapidMiner We first run this on RapidMiner. The data is first discretized and transformed to Binomial before we process it. It should be noted that we are running FPGrowth in RapidMiner because it does not have A Priori. We use A Priori Association Rules on Colab because FPGrowth is unavailable here. However, they are very similar algorithms. The figure below shows the operators used in RapidMiner: ![](https://drive.google.com/uc?export=view&id=1TzFOGjPmjdhpcyIp5m4Wu2CZVgmtbQiy) And the two figures below show the output: ![](https://drive.google.com/uc?export=view&id=1ZCb2ceXNirSBVqiW_AqT4beIq0WpxT3w) ![](https://drive.google.com/uc?export=view&id=1_EZf5p8_SjrZYzHy3qVvoLeX3y4J2_mM) We can conclude from the results that, when * oe_constraint * potential_issue * deck_risk is **False**, then rev_stop will most probably also be **False** Next, we export the data that we have processed here into a CSV that we can use on Colab. 
--- We can now use the data that is exported and run A Priori Association Rules on Colab. ``` !pip install -q mlxtend from mlxtend.frequent_patterns import apriori, association_rules import pandas as pd # Read data disc_df = pd.read_csv("drive/MyDrive/data_mining_portfolio/discretized.csv") # Analyze frequent itemsets and write to csv ap = apriori(disc_df, min_support=0.95, use_colnames=True) print("\nFrequent Itemsets:\n", ap) ap.to_csv('itemsets.csv') # Create association rules and write to csv rules = association_rules(ap, metric="confidence", min_threshold=0.8) print("\nAssociation Rules:\n", rules) rules.to_csv('rules.csv') ``` We managed to obtain very similar results, whereby when * oe_constraint * potential_issue * deck_risk is **False**, then rev_stop is also **False**. --- ## Clustering Clustering is an unsupervised machine learning algorithm that divides the population or data points into a number of groups such that data points in the same group are more similar to each other than to data points in other groups. It groups objects on the basis of similarity and dissimilarity between them. The clustering algorithm we use is K-Means as it is one of the fastest clustering algorithms. We will use the Davies-Bouldin Index to measure its performance. A high Davies-Bouldin score means that the clusters are very similar to each other, whereas a low Davies-Bouldin score means that the clusters are well separated from each other. **In general, we want a low score.** --- ## RapidMiner We first run it on RapidMiner. We are only using the "Yes" backorder products as using the "No" backorder products will turn the task into a classification task. The task is repeated 5 times for 5 different numbers of clusters. The figure below shows the operators used in RapidMiner: ![](https://drive.google.com/uc?export=view&id=1G67TbF00e4_7DxKVIBIeGwqBAOS7CN83) Next, we can run it on Python, starting with 2 clusters. 
```
from sklearn.cluster import KMeans
from sklearn.metrics import davies_bouldin_score

# Select only backorder data
bo_df = df[df['went_on_backorder'] == 1]
X = bo_df.drop(columns='went_on_backorder', axis=0)

# Clustering
KMmodel = KMeans(n_clusters=2)
KMpred = KMmodel.fit_predict(X)
KMlabels = KMmodel.labels_
KMbi = davies_bouldin_score(X, KMlabels)
print("K-Means with 2 clusters")
print("Davies-Bouldin Index:", KMbi)
```

We obtained a Davies-Bouldin score of 0.432.

---

Now we try with 4 clusters.

```
# Clustering
KMmodel = KMeans(n_clusters=4)
KMpred = KMmodel.fit_predict(X)
KMlabels = KMmodel.labels_
KMbi = davies_bouldin_score(X, KMlabels)
print("K-Means with 4 clusters")
print("Davies-Bouldin Index:", KMbi)
```

And obtained a score of 0.484.

---

Finally, we test with 3 clusters.

```
# Clustering
KMmodel = KMeans(n_clusters=3)
KMpred = KMmodel.fit_predict(X)
KMlabels = KMmodel.labels_
KMbi = davies_bouldin_score(X, KMlabels)
print("K-Means with 3 clusters")
print("Davies-Bouldin Index:", KMbi)
```

And obtained a score of 0.315. This is the best result we have obtained, so we will use this to cluster our data.

---

The figure below shows the Davies-Bouldin Index for different numbers of clusters on RapidMiner and Colab. As we can see, the score on both platforms converged at k=3, which is a good sign that k=3 is the optimal number of clusters.

![](https://drive.google.com/uc?export=view&id=16JS_IQezVKCyqOCI3JMJuwLuLIsuAGFR)

---

Finally, using k=3, we can analyse the properties of the different clusters.
```
# Add cluster column
KM = X.copy()
KM['cluster'] = pd.Series(KMpred, index=KM.index)

# Separate into different clusters
cl0 = KM.loc[KM['cluster'] == 0]
cl1 = KM.loc[KM['cluster'] == 1]
cl2 = KM.loc[KM['cluster'] == 2]

# Find out number of instances in each cluster
print("Cluster_0: ", cl0.shape)
print("Cluster_1: ", cl1.shape)
print("Cluster_2: ", cl2.shape)

# Aggregate the different clusters
cl0_mean = cl0.agg('mean').drop('cluster')
cl1_mean = cl1.agg('mean').drop('cluster')
cl2_mean = cl2.agg('mean').drop('cluster')
pd.concat([cl0_mean, cl1_mean, cl2_mean], axis=1)
```

We find that:

* Most items are in Cluster_0
* Cluster_1 and Cluster_2 are outliers
* Cluster_0 has a low inventory at an average of 9.72 and higher sales at 32.59.
* Cluster_1 has a high inventory at 6501.40 and lower sales at 2487.
* Cluster_2 has a negative inventory at -380.10 and positive sales at 861.34.

**Cluster_0 and Cluster_2 confirm our earlier result that sales exceeding inventory lead to a backorder.** Cluster_1, on the other hand, has lower sales than inventory, but only a small quantity of items fall in this cluster; Cluster_1 is the outlier.

---

# **Predictive Data Mining**

Predictive data mining allows us to predict events that have not happened yet. This type of data mining is done for the purpose of using business intelligence or other data to forecast or predict trends, and it can help business leaders make better decisions and add value to the efforts of the analytics team.

For this project, we are comparing Random Forest and Adaboost. Random forests are an ensemble learning method for classification. They work by constructing a multitude of Decision Trees at training time and outputting the class that is the mode of the classes. Adaboost is also an ensemble learning method, but it can be used in conjunction with many other types of learning algorithms instead of just Decision Trees.
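As a quick illustration of the two ensembles, the sketch below fits both with scikit-learn defaults on a small synthetic dataset standing in for the backorder data. The numbers will differ from the ones we report later:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic binary-classification stand-in for the balanced backorder data
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

scores = {}
for name, clf in [('RandomForest', RandomForestClassifier(random_state=0)),
                  ('AdaBoost', AdaBoostClassifier(random_state=0))]:
    clf.fit(X_tr, y_tr)                      # bagging of trees vs. boosting of stumps
    scores[name] = accuracy_score(y_te, clf.predict(X_te))
    print(name, round(scores[name] * 100, 2))
```

Both expose the same `fit`/`predict` interface, which is what lets us swap them freely in the cells that follow.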
Note that we do not use the forecast attributes because they are predicted values, and we do not use min_bank because it is a 'recommended' value. We only want actual numbers for our prediction.

---

### RapidMiner

First, we run this on RapidMiner. Using the default max_depth=10 for both Random Forest and AdaBoost, we calculate the performance on different train-test ratios, 80:20 and 70:30.

The figure below shows our operators on RapidMiner:

![](https://drive.google.com/uc?export=view&id=19ekMULjfMtDli8MApstGBsRMt079vgvK)

The figure below shows the tabulated results:

![](https://drive.google.com/uc?export=view&id=1sl5wHuQXs6UAgKh_It9BGlOnrtRYKFdl)

We can see that, overall, Random Forest outperformed AdaBoost by as much as 10% in terms of precision.

---

## 30% Test Ratio

We can now test it on Colab. Starting with a 30% testing ratio and max_depth=1 on Random Forest,

```
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.metrics import precision_score, recall_score, accuracy_score
from joblib import dump
from time import perf_counter

# selecting features that we want
X = df.drop(columns=['went_on_backorder', 'forecast_3_month', 'forecast_6_month', 'forecast_9_month', 'perf_12_month_avg', 'sales_1_month', 'sales_3_month', 'sales_9_month', 'min_bank'], axis=0)
Y = df['went_on_backorder']

# test size
test_size = 0.3

# train test split
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=test_size, random_state=42)
print("Train to test ratio:", 1-test_size, test_size)

# training random forest
start = perf_counter()
RFmodel = RandomForestClassifier(max_depth=1)
RFmodel.fit(X_train, Y_train)

# testing random forest
RFpred = RFmodel.predict(X_test)
RFacc = round(accuracy_score(Y_test, RFpred) * 100, 2)
RFprec = round(precision_score(Y_test, RFpred, average='weighted', zero_division=0) * 100, 2)
RFrec = round(recall_score(Y_test, RFpred, average='weighted') * 100, 2)
time_elapsed = perf_counter() - start
print("Random Forest", "Accuracy", RFacc, "Precision:", RFprec, "Recall:", RFrec)
print("Random Forest Time Elapsed: ", time_elapsed, " seconds.")
```

We obtained the following results:

* Accuracy: 79.69%
* Precision: 79.97%
* Recall: 79.69%
* Time elapsed: 15 seconds

---

Now we use max_depth=10.

```
# training random forest
start = perf_counter()
RFmodel = RandomForestClassifier(max_depth=10)
RFmodel.fit(X_train, Y_train)

# testing random forest
RFpred = RFmodel.predict(X_test)
RFacc = round(accuracy_score(Y_test, RFpred) * 100, 2)
RFprec = round(precision_score(Y_test, RFpred, average='weighted', zero_division=0) * 100, 2)
RFrec = round(recall_score(Y_test, RFpred, average='weighted') * 100, 2)
time_elapsed = perf_counter() - start
print("Random Forest", "Accuracy", RFacc, "Precision:", RFprec, "Recall:", RFrec)
print("Random Forest Time Elapsed: ", time_elapsed, " seconds.")
```

And obtained the following results:

* Accuracy: 90.64%
* Precision: 90.77%
* Recall: 90.64%
* Time elapsed: 75 seconds

We can see that by increasing max_depth, we greatly increased the time it took to train the model.

---

We do the same thing again with max_depth=25.

```
# training random forest
start = perf_counter()
RFmodel = RandomForestClassifier(max_depth=25)
RFmodel.fit(X_train, Y_train)

# testing random forest
RFpred = RFmodel.predict(X_test)
RFacc = round(accuracy_score(Y_test, RFpred) * 100, 2)
RFprec = round(precision_score(Y_test, RFpred, average='weighted', zero_division=0) * 100, 2)
RFrec = round(recall_score(Y_test, RFpred, average='weighted') * 100, 2)
time_elapsed = perf_counter() - start
print("Random Forest", "Accuracy", RFacc, "Precision:", RFprec, "Recall:", RFrec)
print("Random Forest Time Elapsed: ", time_elapsed, " seconds.")
```

And obtained the following results:

* Accuracy: 97.76%
* Precision: 97.77%
* Recall: 97.76%
* Time elapsed: 112 seconds

This is the best result with Random Forest so far.
However, we cannot know if we are overfitting until we test our model on the data product.

---

Now, we can train our Adaboost model. First, we use Naive Bayes, particularly the Gaussian Naive Bayes, as the base estimator.

```
from sklearn.naive_bayes import GaussianNB

# training adaboost
start = perf_counter()
ABmodel = AdaBoostClassifier(base_estimator=GaussianNB())
ABmodel.fit(X_train, Y_train)

# testing adaboost
ABpred = ABmodel.predict(X_test)
ABacc = round(accuracy_score(Y_test, ABpred) * 100, 2)
ABprec = round(precision_score(Y_test, ABpred, average='weighted', zero_division=0) * 100, 2)
ABrec = round(recall_score(Y_test, ABpred, average='weighted') * 100, 2)
time_elapsed = perf_counter() - start
print("AdaBoost", "Accuracy:", ABacc, "Precision:", ABprec, "Recall:", ABrec)
print("AdaBoost Time Elapsed: ", time_elapsed, " seconds.")
```

We obtained the following results:

* Accuracy: 65.98%
* Precision: 68.65%
* Recall: 65.98%
* Time elapsed: 24 seconds

This is not a good result, so we will not use this model.

---

Next, we try the ExtraTreeClassifier as the base estimator.

```
from sklearn.tree import ExtraTreeClassifier

# training adaboost
start = perf_counter()
ABmodel = AdaBoostClassifier(base_estimator=ExtraTreeClassifier())
ABmodel.fit(X_train, Y_train)

# testing adaboost
ABpred = ABmodel.predict(X_test)
ABacc = round(accuracy_score(Y_test, ABpred) * 100, 2)
ABprec = round(precision_score(Y_test, ABpred, average='weighted', zero_division=0) * 100, 2)
ABrec = round(recall_score(Y_test, ABpred, average='weighted') * 100, 2)
time_elapsed = perf_counter() - start
print("AdaBoost", "Accuracy:", ABacc, "Precision:", ABprec, "Recall:", ABrec)
print("AdaBoost Time Elapsed: ", time_elapsed, " seconds.")
```

We obtained the following results:

* Accuracy: 96.89%
* Precision: 96.89%
* Recall: 96.89%
* Time elapsed: 44 seconds

This is a good result, but we can do better.
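For reference, the base-estimator trials can also be wrapped in a single loop. This is a sketch on synthetic stand-in data, not the backorder data, so the scores will differ; the `try/except` is there because newer scikit-learn releases renamed `base_estimator` to `estimator`:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier, ExtraTreeClassifier

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

results = {}
for name, base in [('GaussianNB', GaussianNB()),
                   ('ExtraTree', ExtraTreeClassifier(random_state=0)),
                   ('DecisionTree', DecisionTreeClassifier(random_state=0))]:
    try:  # scikit-learn >= 1.2 uses `estimator`
        clf = AdaBoostClassifier(estimator=base, random_state=0)
    except TypeError:  # older releases use `base_estimator`
        clf = AdaBoostClassifier(base_estimator=base, random_state=0)
    clf.fit(X_tr, y_tr)
    results[name] = accuracy_score(y_te, clf.predict(X_te))
    print(name, round(results[name] * 100, 2))
```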
---

Now we can test the default base estimator, which is the Decision Tree if no parameters are given.

```
from sklearn.tree import DecisionTreeClassifier

# training adaboost
start = perf_counter()
ABmodel = AdaBoostClassifier(base_estimator=DecisionTreeClassifier())
ABmodel.fit(X_train, Y_train)

# testing adaboost
ABpred = ABmodel.predict(X_test)
ABacc = round(accuracy_score(Y_test, ABpred) * 100, 2)
ABprec = round(precision_score(Y_test, ABpred, average='weighted', zero_division=0) * 100, 2)
ABrec = round(recall_score(Y_test, ABpred, average='weighted') * 100, 2)
time_elapsed = perf_counter() - start
print("AdaBoost", "Accuracy:", ABacc, "Precision:", ABprec, "Recall:", ABrec)
print("AdaBoost Time Elapsed: ", time_elapsed, " seconds.")
```

We obtained the following results:

* Accuracy: 98.43%
* Precision: 98.43%
* Recall: 98.43%
* Time elapsed: 215 seconds

The accuracy, precision, and recall are very high; however, it took much longer to train.

---

Now that we have determined that the Decision Tree is a good base estimator, we can try it with max_depth=1.

```
from sklearn.tree import DecisionTreeClassifier

# training adaboost
start = perf_counter()
ABmodel = AdaBoostClassifier(base_estimator=DecisionTreeClassifier(max_depth=1))
ABmodel.fit(X_train, Y_train)

# testing adaboost
ABpred = ABmodel.predict(X_test)
ABacc = round(accuracy_score(Y_test, ABpred) * 100, 2)
ABprec = round(precision_score(Y_test, ABpred, average='weighted', zero_division=0) * 100, 2)
ABrec = round(recall_score(Y_test, ABpred, average='weighted') * 100, 2)
time_elapsed = perf_counter() - start
print("AdaBoost", "Accuracy:", ABacc, "Precision:", ABprec, "Recall:", ABrec)
print("AdaBoost Time Elapsed: ", time_elapsed, " seconds.")
```

We obtained the following results:

* Accuracy: 87.32%
* Precision: 87.35%
* Recall: 87.32%
* Time elapsed: 31 seconds

The time elapsed is much lower, but the performance is still relatively good.
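Each trial above repeats the same fit/score boilerplate. It can be factored into one helper that mirrors the notebook's metric settings (`evaluate` is a name of our own choosing):

```python
from time import perf_counter
from sklearn.metrics import accuracy_score, precision_score, recall_score

def evaluate(name, model, X_train, Y_train, X_test, Y_test):
    """Fit `model`, then report the same weighted metrics used in the cells above."""
    start = perf_counter()
    model.fit(X_train, Y_train)
    pred = model.predict(X_test)
    acc = round(accuracy_score(Y_test, pred) * 100, 2)
    prec = round(precision_score(Y_test, pred, average='weighted', zero_division=0) * 100, 2)
    rec = round(recall_score(Y_test, pred, average='weighted') * 100, 2)
    elapsed = perf_counter() - start
    print(name, "Accuracy:", acc, "Precision:", prec, "Recall:", rec)
    print(name, "Time Elapsed:", elapsed, "seconds.")
    return acc, prec, rec, elapsed
```

A call then replaces each repeated cell, e.g. `evaluate("AdaBoost", ABmodel, X_train, Y_train, X_test, Y_test)`.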
--- Now we try with max_depth=25 ``` # training adaboost start = perf_counter() ABmodel = AdaBoostClassifier(base_estimator=DecisionTreeClassifier(max_depth=25)) ABmodel.fit(X_train, Y_train) # testing adaboost ABpred = ABmodel.predict(X_test) ABacc = round(accuracy_score(Y_test, ABpred) * 100, 2) ABprec = round(precision_score(Y_test, ABpred, average='weighted', zero_division=0) * 100, 2) ABrec = round(recall_score(Y_test, ABpred, average='weighted') * 100, 2) time_elapsed = perf_counter() - start print("AdaBoost", "Accuracy:", ABacc, "Precision:", ABprec, "Recall:", ABrec) print("AdaBoost Time Elapsed: ", time_elapsed, " seconds.") ``` And obtained the following results: * Accuracy: 98.70% * Precision: 98.70% * Recall: 98.70% * Time elapsed: 195 seconds This is the longest time elapsed so far, but also the best performance in terms of accuracy, precision, and recall. Again, we will not know if the model is overfitting the data until we test it on our data product. --- ## 20% Test Ratio Now, we can try to run the same process with 20% testing ratio instead. 
Starting with Random Forest with max_depth=1, ``` # test size test_size = 0.2 # train test split X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=test_size, random_state=42) print("Train to test ratio:", 1-test_size, test_size) # training random forest start = perf_counter() RF_1 = RandomForestClassifier(max_depth=1) RF_1.fit(X_train, Y_train) # testing random forest RFpred = RF_1.predict(X_test) RFacc = round(accuracy_score(Y_test, RFpred) * 100, 2) RFprec = round(precision_score(Y_test, RFpred, average='weighted', zero_division=0) * 100, 2) RFrec = round(recall_score(Y_test, RFpred, average='weighted') * 100, 2) time_elapsed = perf_counter() - start print("Random Forest", "Accuracy", RFacc, "Precision:", RFprec, "Recall:", RFrec) print("Random Forest Time Elapsed: ", time_elapsed, " seconds.") ``` we obtained the results: * Accuracy: 78.69% * Precision: 78.83% * Recall: 78.69% * Time elapsed: 15 seconds The time elapsed is very low and the performance is relatively high. --- We do the same thing but with max_depth=10. ``` # training random forest start = perf_counter() RF_10 = RandomForestClassifier(max_depth=10) RF_10.fit(X_train, Y_train) # testing random forest RFpred = RF_10.predict(X_test) RFacc = round(accuracy_score(Y_test, RFpred) * 100, 2) RFprec = round(precision_score(Y_test, RFpred, average='weighted', zero_division=0) * 100, 2) RFrec = round(recall_score(Y_test, RFpred, average='weighted') * 100, 2) time_elapsed = perf_counter() - start print("Random Forest", "Accuracy", RFacc, "Precision:", RFprec, "Recall:", RFrec) print("Random Forest Time Elapsed: ", time_elapsed, " seconds.") ``` We obtained the results: * Accuracy: 90.76% * Precision: 90.90% * Recall: 90.76% * Time elapsed: 74 seconds This is a very good result. 
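The depth grid we are walking by hand generalizes to a double loop over test ratios and depths. A sketch on synthetic data standing in for the notebook's X and Y (so the scores and times will differ):

```python
from time import perf_counter
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the backorder features/labels
X, y = make_classification(n_samples=2000, n_features=10, random_state=42)

grid = {}
for test_size in (0.3, 0.2):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=test_size, random_state=42)
    for depth in (1, 10, 25):
        start = perf_counter()
        clf = RandomForestClassifier(max_depth=depth, random_state=42).fit(X_tr, y_tr)
        grid[(test_size, depth)] = accuracy_score(y_te, clf.predict(X_te))
        print(f"test_size={test_size} max_depth={depth} "
              f"acc={grid[(test_size, depth)]:.3f} time={perf_counter() - start:.2f}s")
```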
---

Now let's try max_depth=25.

```
# training random forest
start = perf_counter()
RF_25 = RandomForestClassifier(max_depth=25)
RF_25.fit(X_train, Y_train)

# testing random forest
RFpred = RF_25.predict(X_test)
RFacc = round(accuracy_score(Y_test, RFpred) * 100, 2)
RFprec = round(precision_score(Y_test, RFpred, average='weighted', zero_division=0) * 100, 2)
RFrec = round(recall_score(Y_test, RFpred, average='weighted') * 100, 2)
time_elapsed = perf_counter() - start
print("Random Forest", "Accuracy", RFacc, "Precision:", RFprec, "Recall:", RFrec)
print("Random Forest Time Elapsed: ", time_elapsed, " seconds.")
```

We get the results:

* Accuracy: 97.86%
* Precision: 97.87%
* Recall: 97.86%
* Time elapsed: 114 seconds

This is the best performance for our Random Forest.

---

We do the same thing for Adaboost, starting with max_depth=1.

```
# training adaboost
start = perf_counter()
AB_1 = AdaBoostClassifier(base_estimator=DecisionTreeClassifier(max_depth=1))
AB_1.fit(X_train, Y_train)

# testing adaboost
ABpred = AB_1.predict(X_test)
ABacc = round(accuracy_score(Y_test, ABpred) * 100, 2)
ABprec = round(precision_score(Y_test, ABpred, average='weighted', zero_division=0) * 100, 2)
ABrec = round(recall_score(Y_test, ABpred, average='weighted') * 100, 2)
time_elapsed = perf_counter() - start
print("AdaBoost", "Accuracy:", ABacc, "Precision:", ABprec, "Recall:", ABrec)
print("AdaBoost Time Elapsed: ", time_elapsed, " seconds.")
```

We obtained the results:

* Accuracy: 87.38%
* Precision: 87.41%
* Recall: 87.38%
* Time elapsed: 32 seconds

We see that when max_depth is the same, Adaboost outperformed Random Forest, contrary to our results from RapidMiner. Adaboost took twice as long as Random Forest to train.
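That timing gap can be spot-checked directly. This is a sketch with default models on synthetic data; the exact ratio depends on data size and parameters, so it will not match the notebook's numbers:

```python
from time import perf_counter
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier

# Synthetic stand-in data
X, y = make_classification(n_samples=3000, n_features=10, random_state=0)

def fit_seconds(clf):
    """Wall-clock time for a single fit."""
    start = perf_counter()
    clf.fit(X, y)
    return perf_counter() - start

rf_s = fit_seconds(RandomForestClassifier(max_depth=1, random_state=0))
ab_s = fit_seconds(AdaBoostClassifier(random_state=0))  # boosts depth-1 stumps by default
print(f"RandomForest: {rf_s:.2f}s  AdaBoost: {ab_s:.2f}s  ratio: {ab_s / rf_s:.1f}x")
```

The intuition for the gap: a random forest's trees are independent and trivially parallelizable, while boosting fits its stumps sequentially, each round depending on the previous one's errors.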
---

Now we can try max_depth=10.

```
# training adaboost
start = perf_counter()
AB_10 = AdaBoostClassifier(base_estimator=DecisionTreeClassifier(max_depth=10))
AB_10.fit(X_train, Y_train)

# testing adaboost
ABpred = AB_10.predict(X_test)
ABacc = round(accuracy_score(Y_test, ABpred) * 100, 2)
ABprec = round(precision_score(Y_test, ABpred, average='weighted', zero_division=0) * 100, 2)
ABrec = round(recall_score(Y_test, ABpred, average='weighted') * 100, 2)
time_elapsed = perf_counter() - start
print("AdaBoost", "Accuracy:", ABacc, "Precision:", ABprec, "Recall:", ABrec)
print("AdaBoost Time Elapsed: ", time_elapsed, " seconds.")
```

We obtained the results:

* Accuracy: 98.80%
* Precision: 98.80%
* Recall: 98.80%
* Time elapsed: 219 seconds

---

Finally, we try max_depth=25.

```
# training adaboost
start = perf_counter()
AB_25 = AdaBoostClassifier(base_estimator=DecisionTreeClassifier(max_depth=25))
AB_25.fit(X_train, Y_train)

# testing adaboost
ABpred = AB_25.predict(X_test)
ABacc = round(accuracy_score(Y_test, ABpred) * 100, 2)
ABprec = round(precision_score(Y_test, ABpred, average='weighted', zero_division=0) * 100, 2)
ABrec = round(recall_score(Y_test, ABpred, average='weighted') * 100, 2)
time_elapsed = perf_counter() - start
print("AdaBoost", "Accuracy:", ABacc, "Precision:", ABprec, "Recall:", ABrec)
print("AdaBoost Time Elapsed: ", time_elapsed, " seconds.")
```

We obtained the results:

* Accuracy: 98.81%
* Precision: 98.81%
* Recall: 98.81%
* Time elapsed: 244 seconds

Again, we see a slightly higher performance compared to Random Forest, and again the training took twice as long.

---

We save both the Random Forest and the Adaboost models, for max_depth=1, max_depth=10 and max_depth=25.
This is because we want to test if any of these models are overfitting.

```
# save model
dump(RF_1, 'RandomForest_1.joblib')
dump(RF_10, 'RandomForest_10.joblib')
dump(RF_25, 'RandomForest_25.joblib')

# save model
dump(AB_1, 'Adaboost_1.joblib')
dump(AB_10, 'Adaboost_10.joblib')
dump(AB_25, 'Adaboost_25.joblib')
```

---

# **Results and Analysis**

---

## Exploratory Data Analysis

From our exploratory data analysis, **we can see that there is indeed a correlation between national inventory, sales performance, and backorder**. Generally, when the sales performance exceeds the national inventory, the product becomes a backorder. Another thing we have learned from the data analysis is that, when the sales performance exceeds the forecasted sales, there is a high probability that the product will also go on backorder.

---

## Association Rules

With association rules, we can see the relationship between the attributes. The categorical attributes (oe_constraint, deck_risk, rev_stop) frequently go together. **When oe_constraint is false, there is a high likelihood that rev_stop is also false, and vice versa.**

---

## Clustering

With clustering, we are able to group similar instances together. Within the "Yes" backorder products, we see that cluster 0 is the majority cluster, whereas cluster 1 and cluster 2 can be considered outliers. **Cluster 0 is low in sales and inventory, cluster 1 is high in both sales and inventory, and cluster 2 is in between the two.** For cluster 0 and cluster 2, we see that the average sales do exceed the average inventory, which confirms our hypothesis. Cluster 1 contradicts our hypothesis, but it can be considered an outlier to the data.

---

## Predictive Data Mining

After comparing 2 classification algorithms, we find that, in general, **Adaboost has higher accuracy, precision, and recall than Random Forest** when max_depth is the same. The result here on Colab is different from our results on RapidMiner.
On RapidMiner, the performance of Random Forest is higher than that of Adaboost. I'm not certain why, and I can only hypothesize that it is due to a different implementation of the algorithms. Here are the tabulated results:

![](https://drive.google.com/uc?export=view&id=1c_dKXSM4crxS4BB-Wej9oz0_UmBXwPSG)

If we look at the file sizes, Adaboost_1, Adaboost_10, and Adaboost_25 have file sizes of 31.1KB, 2.85MB, and 17.5MB respectively. On the other hand, RandomForest_1, RandomForest_10, and RandomForest_25 have file sizes of 60KB, 6.84MB, and 156MB respectively.

As for time taken to train the models, Adaboost_1, Adaboost_10, and Adaboost_25 took 32, 219, and 244 seconds respectively. RandomForest_1, RandomForest_10, and RandomForest_25 took 15, 74, and 114 seconds respectively. **Adaboost takes twice as long as Random Forest to train.**

However, we won't know if any of these models are overfitting until we test them in the data product.

# **Data Product**

The data product is built on Streamlit because it allows us to rapidly prototype a data product without much coding. Our data product allows the user to manipulate the variables and predict whether the product will be a backorder or not.
We will use these features to predict the backorder:

* national_inv
* lead_time
* in_transit_qty
* sales_6_month
* perf_6_month_avg
* potential_issue
* pieces_past_due
* local_bo_qty
* deck_risk
* oe_constraint
* ppap_risk
* stop_auto_buy
* rev_stop

```
%%writefile app.py
import pandas as pd
import streamlit as st
from joblib import load
from PIL import Image

DATA_PATH = 'data.csv'

@st.cache
def load_data(path):
    data = pd.read_csv(path)
    lowercase = lambda x: str(x).lower()
    data.rename(lowercase, axis='columns', inplace=True)
    return data

data_load_state = st.text('Loading data...')
df = load_data(DATA_PATH)
data_load_state.text("Done loading data!")

def main():
    @st.cache
    def agg_data(df, mode):
        dat = df.agg([mode])
        return dat

    data_agg_state = st.text('Aggregating data...')
    dfMin = agg_data(df, 'min')
    dfMax = agg_data(df, 'max')
    dfMedian = agg_data(df, 'median')
    dfMode = agg_data(df, 'mode')
    data_agg_state.text("Done aggregating data!")

    st.title('Product Backorder')
    st.sidebar.title("Features")
    quant_parameter_list = ['national_inv', 'lead_time', 'in_transit_qty', 'sales_6_month', 'pieces_past_due', 'perf_6_month_avg', 'local_bo_qty']
    qual_parameter_list = ['potential_issue', 'deck_risk', 'oe_constraint', 'ppap_risk', 'stop_auto_buy', 'rev_stop']
    parameter_input_values=[]
    values=[]

    model_select = st.selectbox(label='Select Classification Model', options=(('Adaboost_1', 'Adaboost_10','Adaboost_25', 'RandomForest_1', 'RandomForest_10', 'RandomForest_25')))

    for parameter in quant_parameter_list:
        values = st.sidebar.slider(label=parameter, key=parameter, value=float(dfMedian[parameter]), min_value=float(dfMin[parameter]), max_value=float(dfMax[parameter]), step=0.1)
        parameter_input_values.append(values)

    for parameter in qual_parameter_list:
        ind = dfMode[parameter].iloc[0]
        values = st.sidebar.selectbox(label=parameter, key=parameter, index=int(ind), options=('Yes', 'No'))
        val = 1 if values == 'Yes' else 0
        parameter_input_values.append(val)

    parameter_list = quant_parameter_list + qual_parameter_list
    input_variables=pd.DataFrame([parameter_input_values],columns=parameter_list)
    st.write('\n\n')

    if (model_select == "Adaboost_1"):
        model = load('Adaboost_1.joblib')
    elif (model_select == "Adaboost_10"):
        model = load('Adaboost_10.joblib')
    elif (model_select == "Adaboost_25"):
        model = load('Adaboost_25.joblib')
    elif (model_select == "RandomForest_1"):
        model = load('RandomForest_1.joblib')
    elif (model_select == "RandomForest_10"):
        model = load('RandomForest_10.joblib')
    elif (model_select == "RandomForest_25"):
        model = load('RandomForest_25.joblib')
    else:
        model = load('Adaboost_1.joblib')

    if st.button("Will the product be a backorder?"):
        prediction = model.predict(input_variables)
        pred = 'No' if prediction == 0 else 'Yes'
        st.text(pred)

if __name__ == '__main__':
    main()
```

We install ngrok so we can run Streamlit on Colab.

```
!pip -q install streamlit
!pip -q install pyngrok

# Setup a tunnel to the streamlit port 8501
from pyngrok import ngrok
public_url = ngrok.connect(port='8501')
public_url

!streamlit run --server.port 8501 app.py & >/dev/null
```

We can now terminate Streamlit and ngrok.

```
!pgrep streamlit
ngrok.kill()
```

# **Conclusion**

We can test our models in our data product. By logical deduction, and as confirmed by our data analysis earlier on, products whose sales exceed the national inventory will go on backorder, and vice versa.

So, first we test each model with **national_inv=100000 and sales=1**. The expected output is "No." These are the results obtained:

* **Adaboost_1**: No
* **Adaboost_10**: No
* **Adaboost_25**: No
* **RandomForest_1**: *Yes*
* **RandomForest_10**: No
* **RandomForest_25**: No

We can see that the Random Forest model with max_depth=1 is misclassifying our product.

Next, we test each model with **national_inv=100 and sales=100000**. The expected output is "Yes."
These are the results obtained:

* **Adaboost_1**: Yes
* **Adaboost_10**: Yes
* **Adaboost_25**: Yes
* **RandomForest_1**: Yes
* **RandomForest_10**: Yes
* **RandomForest_25**: *No*

Here are the results in tabular form:

![](https://drive.google.com/uc?export=view&id=1efOixdGeKGp70hwKrwku_9X0N2J-bShk)

We can see that **all 3 of the Adaboost models got both test cases correct**. On the other hand, for Random Forest, **only RandomForest_10 got both cases correct**. Hence, we can conclude that for this dataset, in terms of accuracy and file sizes, Adaboost is the superior model to Random Forest. However, there is a trade-off, and that is the training time. **Adaboost takes twice as long as Random Forest to train.**

Since there is not much difference between Adaboost_10 and Adaboost_25 in terms of accuracy, we can assume that there are diminishing returns after max_depth=10. Thus, **Adaboost_10 is actually the better model out of the two**.

We should also keep in mind that **accuracy alone is not enough to tell the effectiveness of a model**, and there are many factors that we should consider.
# Regression Week 2: Multiple Regression (Interpretation) The goal of this first notebook is to explore multiple regression and feature engineering with existing graphlab functions. In this notebook you will use data on house sales in King County to predict prices using multiple regression. You will: * Use SFrames to do some feature engineering * Use built-in graphlab functions to compute the regression weights (coefficients/parameters) * Given the regression weights, predictors and outcome write a function to compute the Residual Sum of Squares * Look at coefficients and interpret their meanings * Evaluate multiple models via RSS # Fire up graphlab create ``` import graphlab ``` # Load in house sales data Dataset is from house sales in King County, the region where the city of Seattle, WA is located. ``` sales = graphlab.SFrame('kc_house_data.gl/') ``` # Split data into training and testing. We use seed=0 so that everyone running this notebook gets the same results. In practice, you may set a random seed (or let GraphLab Create pick a random seed for you). ``` train_data,test_data = sales.random_split(.8,seed=0) ``` # Learning a multiple regression model Recall we can use the following code to learn a multiple regression model predicting 'price' based on the following features: example_features = ['sqft_living', 'bedrooms', 'bathrooms'] on training data with the following code: (Aside: We set validation_set = None to ensure that the results are always the same) ``` example_features = ['sqft_living', 'bedrooms', 'bathrooms'] example_model = graphlab.linear_regression.create(train_data, target = 'price', features = example_features, validation_set = None) ``` Now that we have fitted the model we can extract the regression weights (coefficients) as an SFrame as follows: ``` example_weight_summary = example_model.get("coefficients") print example_weight_summary ``` # Making Predictions In the gradient descent notebook we use numpy to do our regression. 
In this notebook we will use existing graphlab create functions to analyze multiple regressions. Recall that once a model is built we can use the .predict() function to find the predicted values for data we pass. For example, using the example model above:

```
example_predictions = example_model.predict(train_data)
print example_predictions[0] # should be 271789.505878
```

# Compute RSS

Now that we can make predictions given the model, let's write a function to compute the RSS of the model. Complete the function below to calculate RSS given the model, data, and the outcome.

```
def get_residual_sum_of_squares(model, data, outcome):
    # First get the predictions
    # Then compute the residuals/errors
    # Then square and add them up
    return(RSS)
```

Test your function by computing the RSS on TEST data for the example model:

```
rss_example_train = get_residual_sum_of_squares(example_model, test_data, test_data['price'])
print rss_example_train # should be 2.7376153833e+14
```

# Create some new features

Although we often think of multiple regression as including multiple different features (e.g. # of bedrooms, squarefeet, and # of bathrooms), we can also consider transformations of existing features, e.g. the log of the squarefeet, or even "interaction" features such as the product of bedrooms and bathrooms. You will use the logarithm function to create a new feature, so first you should import it from the math library.
```
from math import log
```

Next create the following 4 new features as columns in both TEST and TRAIN data:

* bedrooms_squared = bedrooms\*bedrooms
* bed_bath_rooms = bedrooms\*bathrooms
* log_sqft_living = log(sqft_living)
* lat_plus_long = lat + long

As an example here's the first one:

```
train_data['bedrooms_squared'] = train_data['bedrooms'].apply(lambda x: x**2)
test_data['bedrooms_squared'] = test_data['bedrooms'].apply(lambda x: x**2)

# create the remaining 3 features in both TEST and TRAIN data
```

* Squaring bedrooms will increase the separation between not many bedrooms (e.g. 1) and lots of bedrooms (e.g. 4) since 1^2 = 1 but 4^2 = 16. Consequently this feature will mostly affect houses with many bedrooms.
* bedrooms times bathrooms gives what's called an "interaction" feature. It is large when *both* of them are large.
* Taking the log of squarefeet has the effect of bringing large values closer together and spreading out small values.
* Adding latitude to longitude is totally non-sensical but we will do it anyway (you'll see why)

**Quiz Question: What is the mean (arithmetic average) value of your 4 new features on TEST data? (round to 2 digits)**

# Learning Multiple Models

Now we will learn the weights for three (nested) models for predicting house prices.
The first model will have the fewest features, the second model will add one more feature, and the third will add a few more:

* Model 1: squarefeet, # bedrooms, # bathrooms, latitude & longitude
* Model 2: add bedrooms\*bathrooms
* Model 3: Add log squarefeet, bedrooms squared, and the (nonsensical) latitude + longitude

```
model_1_features = ['sqft_living', 'bedrooms', 'bathrooms', 'lat', 'long']
model_2_features = model_1_features + ['bed_bath_rooms']
model_3_features = model_2_features + ['bedrooms_squared', 'log_sqft_living', 'lat_plus_long']
```

Now that you have the features, learn the weights for the three different models for predicting target = 'price' using graphlab.linear_regression.create() and look at the value of the weights/coefficients:

```
# Learn the three models: (don't forget to set validation_set = None)

# Examine/extract each model's coefficients:
```

**Quiz Question: What is the sign (positive or negative) for the coefficient/weight for 'bathrooms' in model 1?**

**Quiz Question: What is the sign (positive or negative) for the coefficient/weight for 'bathrooms' in model 2?**

Think about what this means.

# Comparing multiple models

Now that you've learned three models and extracted the model weights, we want to evaluate which model is best. First use your functions from earlier to compute the RSS on TRAINING Data for each of the three models.

```
# Compute the RSS on TRAINING data for each of the three models and record the values:
```

**Quiz Question: Which model (1, 2 or 3) has lowest RSS on TRAINING Data?** Is this what you expected?

Now compute the RSS on TEST data for each of the three models.

```
# Compute the RSS on TESTING data for each of the three models and record the values:
```

**Quiz Question: Which model (1, 2 or 3) has lowest RSS on TESTING Data?** Is this what you expected? Think about the features that were added to each model from the previous.
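One possible completion of the `get_residual_sum_of_squares` exercise, shown with a scikit-learn model standing in for the GraphLab one (both expose a `.predict` method, so the function body is the same):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def get_residual_sum_of_squares(model, data, outcome):
    # First get the predictions
    predictions = model.predict(data)
    # Then compute the residuals/errors
    residuals = np.asarray(outcome) - np.asarray(predictions)
    # Then square and add them up
    RSS = (residuals ** 2).sum()
    return RSS

# Tiny check on a perfectly linear toy set: the RSS should be essentially zero
X = np.array([[1.0], [2.0], [3.0]])
y = np.array([2.0, 4.0, 6.0])
model = LinearRegression().fit(X, y)
print(get_residual_sum_of_squares(model, X, y))
```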
github_jupyter
```
import numpy as np
import matplotlib
import matplotlib.pyplot as plt

def N_single_qubit_gates_req_Rot(N_system_qubits, set_size):
    return (2*N_system_qubits+1)*(set_size-1)

def N_CNOT_gates_req_Rot(N_system_qubits, set_size):
    return 2*(N_system_qubits-1)*(set_size-1)

def N_cV_gates_req_LCU(N_system_qubits, set_size):
    Na = np.ceil(np.log2(set_size))
    return (N_system_qubits*((2**Na) - 1))*(set_size-1)

def N_CNOT_gates_req_LCU(N_system_qubits, set_size):
    Na = np.ceil(np.log2(set_size))
    return ((2**Na) - 2)*(set_size-1)

## better
# O(2 N_system) change of basis single qubit gates
# O(2 [N_system-1]) CNOT gates
# 2 * Hadamard gates
# 1 m-controlled Toffoli gate!
## overall reduction = 16m-32
## requiring (m-2) garbage bits --> ALWAYS PRESENT IN SYSTEM REGISTER!!!

def N_single_qubit_gates_req_LCU_new_Decomp(N_system_qubits, set_size):
    change_of_basis = 2*N_system_qubits
    H_gates = 2
    return (change_of_basis+H_gates)*(set_size-1)

def N_CNOT_gates_req_LCU_new_Decomp(N_system_qubits, set_size):
    cnot_Gates = 2*(N_system_qubits-1)
    Na = np.ceil(np.log2(set_size))

    ## Peres gates
    N_perez_gates = 4*(Na-2)
    N_CNOT_in_perez = N_perez_gates*1
    N_cV_gates_in_perez = N_perez_gates*3

    # raise if any element breaks the expected 16m-32 decomposition
    if ((16*Na-32) != (N_CNOT_in_perez+N_cV_gates_in_perez)).any():
        raise ValueError('16m-32 is the expected decomposition!')
    # if np.array_equal((16*Na-32), (N_CNOT_in_perez+N_cV_gates_in_perez)):
    #     raise ValueError('16m-32 is the expected decomposition!')

    return ((cnot_Gates+N_CNOT_in_perez)*(set_size-1)), (N_cV_gates_in_perez*(set_size-1))

x_nsets = np.arange(2, 200, 1)

# Data for plotting
N_system_qubits = 4
y_rot_single = N_single_qubit_gates_req_Rot(N_system_qubits, x_nsets)
y_rot_CNOT = N_CNOT_gates_req_Rot(N_system_qubits, x_nsets)

y_LCU_cV = N_cV_gates_req_LCU(N_system_qubits, x_nsets)
y_LCU_CNOT = N_CNOT_gates_req_LCU(N_system_qubits, x_nsets)

y_LCU_single_NEW = N_single_qubit_gates_req_LCU_new_Decomp(N_system_qubits, x_nsets)
y_LCU_CNOT_NEW, y_LCU_cV_NEW = N_CNOT_gates_req_LCU_new_Decomp(N_system_qubits, x_nsets)
%matplotlib notebook fig, ax = plt.subplots() ax.plot(x_nsets, y_rot_single, color='b', label='Single qubit gates - Sequence of Rotations') ax.plot(x_nsets, y_rot_CNOT, color='r', linestyle='--', label='CNOT gates - Sequence of Rotations') ax.plot(x_nsets, y_LCU_cV, color='g', label='c-$V$ and c-$V^{\dagger}$ gates - LCU') ax.plot(x_nsets, y_LCU_CNOT, color='k', label='CNOT gates - LCU', linestyle='--') ax.plot(x_nsets, y_LCU_single_NEW, color='yellow', label='Single qubit gates - LCU_new') ax.plot(x_nsets, y_LCU_CNOT_NEW, color='teal', linestyle='--', label='CNOT gates - LCU_new') ax.plot(x_nsets, y_LCU_cV_NEW, color='slategrey', label='cV gates - LCU_new') ax.set(xlabel='$|S_{l}|$ (size of clique)', ylabel='Number of gates') # ,title='Scaling of methods') ax.grid() plt.legend() # # http://akuederle.com/matplotlib-zoomed-up-inset # from mpl_toolkits.axes_grid1.inset_locator import zoomed_inset_axes, inset_axes # # axins = zoomed_inset_axes(ax, 40, loc='center') # zoom-factor: 2.5, location: upper-left # axins = inset_axes(ax, 2,1 , loc='center',bbox_to_anchor=(0.4, 0.55),bbox_transform=ax.figure.transFigure) # no zoom # axins.plot(x_nsets, y_rot_single, color='b') # axins.plot(x_nsets, y_rot_CNOT, color='r', linestyle='--') # axins.plot(x_nsets, y_LCU_cV, color='g') # axins.plot(x_nsets, y_LCU_CNOT, color='k', linestyle='--') # x1, x2, y1, y2 = 2, 5, 0, 50 # specify the limits # axins.set_xlim(x1, x2) # apply the x-limits # axins.set_ylim(y1, y2) # apply the y-limits # # axins.set_yticks(np.arange(0, 100, 20)) # plt.yticks(visible=True) # plt.xticks(visible=True) # from mpl_toolkits.axes_grid1.inset_locator import mark_inset # mark_inset(ax, axins, loc1=2, loc2=4, fc="none", ec="0.5") # loc here is which corner zoom goes to! 
# fig.savefig("test.png") plt.show() # %matplotlib notebook # fig, ax = plt.subplots() # ax.plot(x_nsets, y_rot_single, color='b', label='Single qubit gates - Sequence of Rotations') # ax.plot(x_nsets, y_rot_CNOT, color='r', label='CNOT gates - Sequence of Rotations') # ax.plot(x_nsets, y_LCU_single, color='g', label='c-$V$ and c-$V^{\dagger}$ gates - LCU') # ax.plot(x_nsets, y_LCU_CNOT, color='k', label='CNOT gates - LCU', linestyle='--') # ax.set(xlabel='$|S_{l}|$ (size of clique)', ylabel='Number of gates', # title='Scaling of methods') # ax.grid() # plt.legend() # # http://akuederle.com/matplotlib-zoomed-up-inset # from mpl_toolkits.axes_grid1.inset_locator import zoomed_inset_axes # axins = zoomed_inset_axes(ax, 40, loc='center') # zoom-factor: 2.5, location: upper-left # axins.plot(x_nsets, y_rot_single, color='b') # axins.plot(x_nsets, y_rot_CNOT, color='r') # axins.plot(x_nsets, y_LCU_single, color='g') # axins.plot(x_nsets, y_LCU_CNOT, color='k', linestyle='--') # x1, x2, y1, y2 = 2, 4, 0, 100 # specify the limits # axins.set_xlim(x1, x2) # apply the x-limits # axins.set_ylim(y1, y2) # apply the y-limits # # axins.set_yticks(np.arange(0, 100, 20)) # plt.yticks(visible=True) # plt.xticks(visible=True) # from mpl_toolkits.axes_grid1.inset_locator import mark_inset # mark_inset(ax, axins, loc1=2, loc2=4, fc="none", ec="0.5") # loc here is which corner zoom goes to! 
# # fig.savefig("test.png")
# plt.show()

# Data for plotting
N_system_qubits = 10   # < ---- CHANGED

y_rot_single = N_single_qubit_gates_req_Rot(N_system_qubits, x_nsets)
y_rot_CNOT = N_CNOT_gates_req_Rot(N_system_qubits, x_nsets)

y_LCU_cV = N_cV_gates_req_LCU(N_system_qubits, x_nsets)
y_LCU_CNOT = N_CNOT_gates_req_LCU(N_system_qubits, x_nsets)

y_LCU_single_NEW = N_single_qubit_gates_req_LCU_new_Decomp(N_system_qubits, x_nsets)
y_LCU_CNOT_NEW, y_LCU_cV_NEW = N_CNOT_gates_req_LCU_new_Decomp(N_system_qubits, x_nsets)

%matplotlib notebook
fig, ax = plt.subplots()

ax.plot(x_nsets, y_rot_single, color='b', label='Single qubit gates - Sequence of Rotations')
ax.plot(x_nsets, y_rot_CNOT, color='r', label='CNOT gates - Sequence of Rotations')
ax.plot(x_nsets, y_LCU_cV, color='g', label='c-$V$ and c-$V^{\dagger}$ gates - LCU')
ax.plot(x_nsets, y_LCU_CNOT, color='k', label='CNOT gates - LCU', linestyle='--')

ax.plot(x_nsets, y_LCU_single_NEW, color='yellow', label='Single qubit gates - LCU_new')
ax.plot(x_nsets, y_LCU_CNOT_NEW, color='teal', linestyle='--', label='CNOT gates - LCU_new')
ax.plot(x_nsets, y_LCU_cV_NEW, color='slategrey', label='cV gates - LCU_new')

ax.set(xlabel='$|S_{l}|$ (size of clique)', ylabel='Number of gates')  # ,title='Scaling of methods')
ax.grid()
plt.legend()

# # # http://akuederle.com/matplotlib-zoomed-up-inset
# # from mpl_toolkits.axes_grid1.inset_locator import zoomed_inset_axes, inset_axes
# # # axins = zoomed_inset_axes(ax, 40, loc='center') # zoom-factor: 2.5, location: upper-left
# axins = inset_axes(ax, 2,1 , loc='center',bbox_to_anchor=(0.4, 0.55),bbox_transform=ax.figure.transFigure) # no zoom
# axins.plot(x_nsets, y_rot_single, color='b')
# axins.plot(x_nsets, y_rot_CNOT, color='r')
# axins.plot(x_nsets, y_LCU_cV, color='g')
# axins.plot(x_nsets, y_LCU_CNOT, color='k', linestyle='--')
# x1, x2, y1, y2 = 2, 3, 0, 50 # specify the limits
# axins.set_xlim(x1, x2) # apply the x-limits
# axins.set_ylim(y1, y2) # apply the y-limits
#
# axins.set_xticks(np.arange(2, 4, 1))
# plt.yticks(visible=True)
# plt.xticks(visible=True)
# from mpl_toolkits.axes_grid1.inset_locator import mark_inset
# mark_inset(ax, axins, loc1=2, loc2=4, fc="none", ec="0.5") # loc here is which corner zoom goes to!

# # fig.savefig("test.png")
plt.show()

# Data for plotting
N_system_qubits = 100   # < ---- CHANGED

y_rot_single = N_single_qubit_gates_req_Rot(N_system_qubits, x_nsets)
y_rot_CNOT = N_CNOT_gates_req_Rot(N_system_qubits, x_nsets)

y_LCU_cV = N_cV_gates_req_LCU(N_system_qubits, x_nsets)
y_LCU_CNOT = N_CNOT_gates_req_LCU(N_system_qubits, x_nsets)

%matplotlib notebook
fig, ax = plt.subplots()

ax.plot(x_nsets, y_rot_single, color='b', label='Single qubit gates - Sequence of Rotations', linewidth=3)
ax.plot(x_nsets, y_rot_CNOT, color='r', label='CNOT gates - Sequence of Rotations')
ax.plot(x_nsets, y_LCU_cV, color='g', label='c-$V$ and c-$V^{\dagger}$ gates - LCU')
ax.plot(x_nsets, y_LCU_CNOT, color='k', label='CNOT gates - LCU', linestyle='--')

ax.set(xlabel='$|S_{l}|$ (size of clique)', ylabel='Number of gates')  # ,title='Scaling of methods')
ax.grid()
plt.legend()

# http://akuederle.com/matplotlib-zoomed-up-inset
from mpl_toolkits.axes_grid1.inset_locator import zoomed_inset_axes, inset_axes
# axins = zoomed_inset_axes(ax, 40, loc='center') # zoom-factor: 2.5, location: upper-left
axins = inset_axes(ax, 2,1 , loc='center',bbox_to_anchor=(0.4, 0.55),bbox_transform=ax.figure.transFigure) # no zoom
axins.plot(x_nsets, y_rot_single, color='b', linewidth=2)
axins.plot(x_nsets, y_rot_CNOT, color='r')
axins.plot(x_nsets, y_LCU_cV, color='g')
axins.plot(x_nsets, y_LCU_CNOT, color='k', linestyle='--')
x1, x2, y1, y2 = 1.5, 3, 90, 500 # specify the limits
axins.set_xlim(x1, x2) # apply the x-limits
axins.set_ylim(y1, y2) # apply the y-limits
# axins.set_yticks(np.arange(0, 100, 20))
plt.yticks(visible=True)
plt.xticks(visible=True)
from mpl_toolkits.axes_grid1.inset_locator import mark_inset
mark_inset(ax, axins, loc1=2, loc2=4,
fc="none", ec="0.5") # loc here is which corner zoom goes to! # fig.savefig("test.png") plt.show() # Data for plotting N_system_qubits=1 y_rot_single=N_single_qubit_gates_req_Rot(N_system_qubits, x_nsets) y_rot_CNOT = N_CNOT_gates_req_Rot(N_system_qubits, x_nsets) y_LCU_cV=N_cV_gates_req_LCU(N_system_qubits, x_nsets) y_LCU_CNOT = N_CNOT_gates_req_LCU(N_system_qubits, x_nsets) %matplotlib notebook fig, ax = plt.subplots() ax.plot(x_nsets, y_rot_single, color='b', label='Single qubit gates - Sequence of Rotations') ax.plot(x_nsets, y_rot_CNOT, color='r', linestyle='--', label='CNOT gates - Sequence of Rotations') ax.plot(x_nsets, y_LCU_cV, color='g', label='c-$V$ and c-$V^{\dagger}$ gates - LCU') ax.plot(x_nsets, y_LCU_CNOT, color='k', label='CNOT gates - LCU', linestyle='--') ax.set(xlabel='$|S_{l}|$ (size of clique)', ylabel='Number of gates') # ,title='Scaling of methods') ax.grid() plt.legend() # http://akuederle.com/matplotlib-zoomed-up-inset from mpl_toolkits.axes_grid1.inset_locator import zoomed_inset_axes, inset_axes # axins = zoomed_inset_axes(ax, 40, loc='center') # zoom-factor: 2.5, location: upper-left axins = inset_axes(ax, 2,1 , loc='center',bbox_to_anchor=(0.4, 0.55),bbox_transform=ax.figure.transFigure) # no zoom axins.plot(x_nsets, y_rot_single, color='b') axins.plot(x_nsets, y_rot_CNOT, color='r', linestyle='--') axins.plot(x_nsets, y_LCU_cV, color='g') axins.plot(x_nsets, y_LCU_CNOT, color='k', linestyle='--') x1, x2, y1, y2 = 2, 3, -5, 10 # specify the limits axins.set_xlim(x1, x2) # apply the x-limits axins.set_ylim(y1, y2) # apply the y-limits # axins.set_yticks(np.arange(0, 100, 20)) axins.set_xticks(np.arange(2, 4, 1)) plt.yticks(visible=True) plt.xticks(visible=True) from mpl_toolkits.axes_grid1.inset_locator import mark_inset mark_inset(ax, axins, loc1=2, loc2=4, fc="none", ec="0.5") # loc here is which corner zoom goes to! 
# fig.savefig("test.png") plt.show() # Data for plotting N_system_qubits=5 x_nsets=2 print(N_single_qubit_gates_req_Rot(N_system_qubits, x_nsets)) print(N_CNOT_gates_req_Rot(N_system_qubits, x_nsets)) print('###') print(N_cV_gates_req_LCU(N_system_qubits, x_nsets)) print(N_CNOT_gates_req_LCU(N_system_qubits, x_nsets)) print(4) ### results for |S_l|=2 X_no_system_qubits=np.arange(1,11,1) x_nsets=2 y_rot_single=N_single_qubit_gates_req_Rot(X_no_system_qubits, x_nsets) y_rot_CNOT = N_CNOT_gates_req_Rot(X_no_system_qubits, x_nsets) y_LCU_cV=N_cV_gates_req_LCU(X_no_system_qubits, x_nsets) # y_LCU_CNOT = N_CNOT_gates_req_LCU(X_no_system_qubits, x_nsets) y_LCU_CNOT=np.zeros(len(X_no_system_qubits)) single_qubit_LCU_gates=np.array([4 for _ in range(len(X_no_system_qubits))]) %matplotlib notebook fig, ax = plt.subplots() ax.plot(X_no_system_qubits, y_rot_single, color='b', label='Single qubit gates - Sequence of Rotations') ax.plot(X_no_system_qubits, y_rot_CNOT, color='r', linestyle='-', label='CNOT gates - Sequence of Rotations') ax.plot(X_no_system_qubits, y_LCU_cV, color='g', label='single controlled $\sigma$ gates - LCU') ax.plot(X_no_system_qubits, y_LCU_CNOT, color='k', label='CNOT gates - LCU', linestyle='-') ax.plot(X_no_system_qubits, single_qubit_LCU_gates, color='m', label='Single qubit gates - LCU', linestyle='-') ax.set(xlabel='$N_{s}$', ylabel='Number of gates') # ,title='Scaling of methods') ax.set_xticks(X_no_system_qubits) ax.grid() plt.legend() # # http://akuederle.com/matplotlib-zoomed-up-inset # from mpl_toolkits.axes_grid1.inset_locator import zoomed_inset_axes, inset_axes # # axins = zoomed_inset_axes(ax, 40, loc='center') # zoom-factor: 2.5, location: upper-left # axins = inset_axes(ax, 2,1 , loc='center',bbox_to_anchor=(0.4, 0.55),bbox_transform=ax.figure.transFigure) # no zoom # axins.plot(x_nsets, y_rot_single, color='b') # axins.plot(x_nsets, y_rot_CNOT, color='r', linestyle='--') # axins.plot(x_nsets, y_LCU_cV, color='g') # axins.plot(x_nsets, 
# y_LCU_CNOT, color='k', linestyle='--')
# x1, x2, y1, y2 = 2, 3, -5, 10 # specify the limits
# axins.set_xlim(x1, x2) # apply the x-limits
# axins.set_ylim(y1, y2) # apply the y-limits
# # axins.set_yticks(np.arange(0, 100, 20))
# axins.set_xticks(np.arange(2, 4, 1))
# plt.yticks(visible=True)
# plt.xticks(visible=True)
# from mpl_toolkits.axes_grid1.inset_locator import mark_inset
# mark_inset(ax, axins, loc1=2, loc2=4, fc="none", ec="0.5") # loc here is which corner zoom goes to!

# fig.savefig("test.png")
plt.show()

V = ((1j+1)/2)*np.array([[1,-1j],[-1j, 1]], dtype=complex)

# CNOT
from functools import reduce

zero = np.array([[1],[0]])
one = np.array([[0],[1]])
identity = np.eye(2)
X = np.array([[0,1], [1,0]])

CNOT = np.kron(np.outer(one, one), X) + np.kron(np.outer(zero, zero), identity)

###
I_one_V = reduce(np.kron, [identity, np.kron(np.outer(one, one), V) + np.kron(np.outer(zero, zero), identity)])

###
zero_zero = np.kron(zero, zero)
zero_one = np.kron(zero, one)
one_zero = np.kron(one, zero)
one_one = np.kron(one, one)

one_I_V = np.kron(np.outer(zero_zero, zero_zero), identity) + np.kron(np.outer(zero_one, zero_one), identity) + \
          np.kron(np.outer(one_zero, one_zero), V) + np.kron(np.outer(one_one, one_one), V)

###
CNOT_I = reduce(np.kron, [CNOT, identity])

##
I_one_Vdag = reduce(np.kron, [identity, np.kron(np.outer(one, one), V.conj().transpose()) + np.kron(np.outer(zero, zero), identity)])

## compose the gates by matrix multiplication
perez_gate = reduce(np.matmul, [I_one_V, one_I_V, CNOT_I, I_one_Vdag])

## check
# peres = TOF(x0,x1,x2) CNOT(x0, x1)
zero_zero = np.kron(zero, zero)
zero_one = np.kron(zero, one)
one_zero = np.kron(one, zero)
one_one = np.kron(one, one)

TOF = np.kron(np.outer(zero_zero, zero_zero), identity) + np.kron(np.outer(zero_one, zero_one), identity) + \
      np.kron(np.outer(one_zero, one_zero), identity) + np.kron(np.outer(one_one, one_one), X)

CNOT_I = reduce(np.kron, [CNOT, identity])

checker = np.matmul(CNOT_I, TOF)
print(np.allclose(checker, perez_gate))
print(perez_gate)
```
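As a quick self-contained sanity check of the counting formulas defined at the top of this notebook, here are scalar versions (using `math` instead of numpy) evaluated at the $|S_l| = 2$, $N_s = 5$ point printed earlier:

```python
from math import ceil, log2

# Gate-count formulas restated from the functions above, for scalar inputs
def n_single_rot(n_sys, s):
    return (2 * n_sys + 1) * (s - 1)

def n_cnot_rot(n_sys, s):
    return 2 * (n_sys - 1) * (s - 1)

def n_cv_lcu(n_sys, s):
    na = ceil(log2(s))  # number of ancilla qubits
    return n_sys * (2 ** na - 1) * (s - 1)

def n_cnot_lcu(n_sys, s):
    na = ceil(log2(s))
    return (2 ** na - 2) * (s - 1)

# |S_l| = 2 with 5 system qubits
print(n_single_rot(5, 2), n_cnot_rot(5, 2))  # → 11 8
print(n_cv_lcu(5, 2), n_cnot_lcu(5, 2))      # → 5 0
```

These agree with the values printed in the notebook's `|S_l|=2` cell (11 and 8 for the sequence-of-rotations method, 5 and 0 for LCU).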
<a href="https://colab.research.google.com/github/prithwis/KKolab/blob/main/KK_B2_Hadoop_and_Hive.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

![alt text](https://4.bp.blogspot.com/-gbL5nZDkpFQ/XScFYwoTEII/AAAAAAAAAGY/CcVb_HDLwvs2Brv5T4vSsUcz7O4r2Q79ACK4BGAYYCw/s1600/kk3-header00-beta.png)<br>
<hr>

[Prithwis Mukerjee](http://www.linkedin.com/in/prithwis)<br>

#Hive with Hadoop

This notebook has all the code / commands required to install Hadoop and Hive <br>

##Acknowledgements

Hadoop Installation from [Anjaly Sam's Github Repository](https://github.com/anjalysam/Hadoop) <br>
Hive Installation from [PhoenixNAP](https://phoenixnap.com/kb/install-hive-on-ubuntu) website

#1 Hadoop

Hadoop is a pre-requisite for Hive <br>

## 1.1 Download, Install Hadoop

```
# The default JVM available at /usr/lib/jvm/java-11-openjdk-amd64/ works for Hadoop
# But gives errors with Hive https://stackoverflow.com/questions/54037773/hive-exception-class-jdk-internal-loader-classloadersappclassloader-cannot
# Hence this JVM needs to be installed
!apt-get update > /dev/null
!apt-get install openjdk-8-jdk-headless -qq > /dev/null

# Download the latest version of Hadoop
# Change the version number in this and subsequent cells
!wget https://downloads.apache.org/hadoop/common/hadoop-3.3.0/hadoop-3.3.0.tar.gz

# Unzip it
# the tar command with the -x flag to extract, -z to uncompress, -v for verbose output, and -f to specify that we're extracting from a file
!tar -xzf hadoop-3.3.0.tar.gz

# copy hadoop file to /usr/local
!mv hadoop-3.3.0/ /usr/local/
```

## 1.2 Set Environment Variables

```
# To find the default Java path
!readlink -f /usr/bin/java | sed "s:bin/java::"
!ls /usr/lib/jvm/

# To set the Java path, go to /usr/local/hadoop-3.3.0/etc/hadoop/hadoop-env.sh then
# . . . export JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64/ . . .
#we have used a simpler alternative route using os.environ - it works import os os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64" # default is changed #os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-11-openjdk-amd64/" os.environ["HADOOP_HOME"] = "/usr/local/hadoop-3.3.0/" !echo $PATH # Add Hadoop BIN to PATH # get current_path from output of previous command current_path = '/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/tools/node/bin:/tools/google-cloud-sdk/bin:/opt/bin' new_path = current_path+':/usr/local/hadoop-3.3.0/bin/' os.environ["PATH"] = new_path ``` ## 1.3 Test Hadoop Installation ``` #Running Hadoop - Test RUN, not doing anything at all #!/usr/local/hadoop-3.3.0/bin/hadoop # UNCOMMENT the following line if you want to make sure that Hadoop is alive! #!hadoop # Testing Hadoop with PI generating sample program, should calculate value of pi = 3.14157500000000000000 # pi example #Uncomment the following line if you want to test Hadoop with pi example #!hadoop jar /usr/local/hadoop-3.3.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.0.jar pi 16 100000 ``` #2 Hive ## 2.1 Download, Install HIVE ``` # Download and Unzip the correct version and unzip !wget https://downloads.apache.org/hive/hive-3.1.2/apache-hive-3.1.2-bin.tar.gz !tar xzf apache-hive-3.1.2-bin.tar.gz ``` ## 2.2 Set Environment *Variables* ``` # Make sure that the version number is correct and is as downloaded os.environ["HIVE_HOME"] = "/content/apache-hive-3.1.2-bin" !echo $HIVE_HOME !echo $PATH # current_path is set from output of previous command current_path = '/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/tools/node/bin:/tools/google-cloud-sdk/bin:/opt/bin:/usr/local/hadoop-3.3.0/bin/' new_path = current_path+':/content/apache-hive-3.1.2-bin/bin' os.environ["PATH"] = new_path !echo $PATH !echo $JAVA_HOME !echo $HADOOP_HOME !echo $HIVE_HOME ``` ## 2.3 Set up HDFS 
Directories ``` !hdfs dfs -mkdir /tmp !hdfs dfs -chmod g+w /tmp #!hdfs dfs -ls / !hdfs dfs -mkdir -p /content/warehouse !hdfs dfs -chmod g+w /content/warehouse #!hdfs dfs -ls /content/ ``` ## 2.4 Initialise HIVE - note and fix errors ``` # TYPE this command, do not copy and paste. Non printing characters cause havoc # There will be two errors, that we will fix # UNCOMMENT the following line if you WISH TO SEE the errors !schematool -initSchema -dbType derby ``` ### 2.4.1 Fix One Warning, One Error SLF4J is duplicate, need to locate them and remove one <br> Guava jar version is low ``` # locate multiple instances of slf4j ... !ls $HADOOP_HOME/share/hadoop/common/lib/*slf4j* !ls $HIVE_HOME/lib/*slf4j* # removed the logging jar from Hive, retaining the Hadoop jar !mv /content/apache-hive-3.1.2-bin/lib/log4j-slf4j-impl-2.10.0.jar ./ # guava jar needs to above v 20 # https://stackoverflow.com/questions/45247193/nosuchmethoderror-com-google-common-base-preconditions-checkargumentzljava-lan !ls $HIVE_HOME/lib/gu* # the one available with Hadoop is better, v 27 !ls $HADOOP_HOME/share/hadoop/hdfs/lib/gu* # Remove the Hive Guava and replace with Hadoop Guava !mv $HIVE_HOME/lib/guava-19.0.jar ./ !cp $HADOOP_HOME/share/hadoop/hdfs/lib/guava-27.0-jre.jar $HIVE_HOME/lib/ ``` ##2.5 Initialize HIVE ``` #Type this command, dont copy-paste # Non printing characters inside the command will give totally illogical errors !schematool -initSchema -dbType derby ``` ## 2.6 Test HIVE 1. Create database 2. Create table 3. Insert data 4. Retrieve data using command line options as [given here](https://cwiki.apache.org/confluence/display/hive/languagemanual+cli#). 
``` !hive -e "create database if not exists praxisDB;" !hive -e "show databases" !hive -database praxisdb -e "create table if not exists emp (name string, age int)" !hive -database praxisdb -e "show tables" !hive -database praxisdb -e "insert into emp values ('naren', 70)" !hive -database praxisdb -e "insert into emp values ('aditya', 49)" !hive -database praxisdb -e "select * from emp" # Silent Mode !hive -S -database praxisdb -e "select * from emp" ``` ## 2.7 Bulk Data Load from CSV file ``` #drop table !hive -database praxisDB -e 'DROP table if exists eCommerce' #create table # Invoice Date is being treated as a STRING because input data is not correctly formatted !hive -database praxisDB -e " \ CREATE TABLE eCommerce ( \ InvoiceNo varchar(10), \ StockCode varchar(10), \ Description varchar(50), \ Quantity int, \ InvoiceDate string, \ UnitPrice decimal(6,2), \ CustomerID varchar(10), \ Country varchar(15) \ ) row format delimited fields terminated by ','; \ " !hive -database praxisdb -e "describe eCommerce" ``` This data may not be clean and may have commas embedded in the CSV file. 
To see how to clean this, look at this notebook: [Spark SQLContext HiveContext](https://github.com/prithwis/KKolab/blob/main/KK_C1_SparkSQL_SQLContext_HiveContext.ipynb)

```
#Data as CSV file
!gdown https://drive.google.com/uc?id=1JJH24ZZaiJrEKValD--UtyFcWl7UanwV # 2% data ~ 10K rows
!gdown https://drive.google.com/uc?id=1g7mJ0v4fkERW0HWc1eq-SHs_jvQ0N2Oe # 100% data ~ 500K rows

#remove the CRLF character from the end of the row if it exists
!sed 's/\r//' /content/eCommerce_Full_2021.csv > datafile.csv
#!sed 's/\r//' /content/eCommerce_02PC_2021.csv > datafile.csv

# remove the first line containing headers from the file
!sed -i -e "1d" datafile.csv
!head datafile.csv

# delete all rows from table
!hive -database praxisdb -e 'TRUNCATE TABLE eCommerce'

# LOAD
!hive -database praxisdb -e "LOAD DATA LOCAL INPATH 'datafile.csv' INTO TABLE eCommerce"

!hive -S -database praxisdb -e "select count(*) from eCommerce"
!hive -S -database praxisdb -e "select * from eCommerce limit 30"
```

#Chronobooks <br>

![alt text](https://1.bp.blogspot.com/-lTiYBkU2qbU/X1er__fvnkI/AAAAAAAAjtE/GhDR3OEGJr4NG43fZPodrQD5kbxtnKebgCLcBGAsYHQ/s600/Footer2020-600x200.png)<hr>

Chronotantra and Chronoyantra are two science fiction novels that explore the collapse of human civilisation on Earth and then its rebirth and reincarnation both on Earth as well as on the distant worlds of Mars, Titan and Enceladus. But is it the human civilisation that is being reborn? Or is it some other sentience that is revealing itself. If you have an interest in AI and found this material useful, you may consider buying these novels, in paperback or kindle, from [http://bit.ly/chronobooks](http://bit.ly/chronobooks)
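Circling back to the bulk-load step: since the note above warns that rows may have embedded commas, a quick field-count check before `LOAD DATA` can quantify the problem. A pure-Python sketch, independent of Hive — the 8-field width matches the eCommerce schema, and the sample rows are made up for illustration:

```python
import io

def count_bad_rows(text, expected_fields=8):
    """Count rows whose naive comma split disagrees with the schema width."""
    bad = 0
    for line in io.StringIO(text):
        if len(line.rstrip("\n").split(",")) != expected_fields:
            bad += 1
    return bad

# Two illustrative rows: the second has an embedded comma in Description
sample = (
    "536365,85123A,WHITE HANGING HEART,6,12/1/2010 8:26,2.55,17850,United Kingdom\n"
    "536366,71053,WHITE METAL LANTERN, with comma,6,12/1/2010 8:26,3.39,17850,United Kingdom\n"
)
print(count_bad_rows(sample))  # → 1
```

Rows flagged by such a check will land in the wrong Hive columns under `fields terminated by ','`, which is why the linked Spark notebook uses a proper CSV parser instead.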
# what's the neuron yield across probes, experimenters and recording sites? Anne Urai & Nate Miska, 2020 ``` # GENERAL THINGS FOR COMPUTING AND PLOTTING import pandas as pd import numpy as np import os, sys, time import scipy as sp # visualisation import matplotlib.pyplot as plt import seaborn as sns # ibl specific things import datajoint as dj from ibl_pipeline import reference, subject, action, acquisition, data, behavior from ibl_pipeline.analyses import behavior as behavior_analysis ephys = dj.create_virtual_module('ephys', 'ibl_ephys') figpath = os.path.join(os.path.expanduser('~'), 'Data/Figures_IBL') ``` ## 1. neuron yield per lab and Npix probe over time Replicates https://github.com/int-brain-lab/analysis/blob/master/python/probe_performance_over_sessions.py using DJ ``` probe_insertions = ephys.ProbeInsertion * ephys.DefaultCluster.Metrics * subject.SubjectLab \ * (acquisition.SessionProject & 'session_project = "ibl_neuropixel_brainwide_01"') \ * behavior_analysis.SessionTrainingStatus probe_insertions = probe_insertions.proj('probe_serial_number', 'probe_model_name', 'lab_name', 'metrics', 'good_enough_for_brainwide_map', session_date='DATE(session_start_time)') clusts = probe_insertions.fetch(format='frame').reset_index() # put metrics into df columns from the blob (feature request: can these be added as attributes instead?) 
for kix, k in enumerate(['ks2_label']): tmp_var = [] for id, c in clusts.iterrows(): if k in c['metrics'].keys(): tmp = c['metrics'][k] else: tmp = np.nan tmp_var.append(tmp) clusts[k] = tmp_var # hofer and mrsic-flogel probes are shared clusts['lab_name'] = clusts['lab_name'].str.replace('mrsicflogellab','swclab') clusts['lab_name'] = clusts['lab_name'].str.replace('hoferlab','swclab') clusts.lab_name.unique() clusts['probe_name'] = clusts['lab_name'] + ', ' + clusts['probe_model_name'] + ': ' + clusts['probe_serial_number'] clusts_summ = clusts.groupby(['lab_name', 'probe_name', 'session_start_time', 'ks2_label'])['session_date'].count().reset_index() # use recording session number instead of date clusts_summ['recording'] = clusts_summ.groupby(['probe_name']).cumcount() + 1 sns.set(style="ticks", context="paper") g, axes = plt.subplots(6,6,figsize=(18,20)) for probe, ax in zip(clusts_summ.probe_name.unique(), axes.flatten()): df = clusts_summ[clusts_summ.probe_name==probe].groupby(['session_start_time','ks2_label']).session_date.sum() df.unstack().plot.barh(ax=ax, stacked=True, legend=False, colormap='Pastel2') ax.set_title(probe, fontsize=12) ax.axvline(x=60, color='seagreen', linestyle="--") ax.set_yticks([]) ax.set_ylabel('') ax.set_ylim([-1, np.max([max(ax.get_ylim()), 10])]) ax.set_xlim([0, 1000]) axes.flatten()[-1].set_axis_off() sns.despine(trim=True) plt.tight_layout() plt.xlabel('Number of KS2 neurons') plt.ylabel('Recording session') g.savefig(os.path.join(figpath, 'probe_yield_oversessions.pdf')) ``` # 2. what is the overall yield of sessions, neurons etc? 
``` ## overall distribution of neurons per session g = sns.FacetGrid(data=clusts_summ, hue='ks2_label', palette='Set2') g.map(sns.distplot, "session_date", bins=np.arange(10, 500, 15), hist=True, rug=False, kde=False).add_legend() for ax in g.axes.flatten(): ax.axvline(x=60, color='seagreen', linestyle="--") g.set_xlabels('Number of KS2 neurons') g.set_ylabels('Number of sessions') g.savefig(os.path.join(figpath, 'probe_yield_allrecs.pdf')) print('TOTAL YIELD SO FAR:') clusts.groupby(['ks2_label'])['ks2_label'].count() ## overall distribution of neurons per session g = sns.FacetGrid(data=clusts_summ, hue='ks2_label', col_wrap=4, col='lab_name', palette='Set2') g.map(sns.distplot, "session_date", bins=np.arange(10, 500, 15), hist=True, rug=False, kde=False).add_legend() for ax in g.axes.flatten(): ax.axvline(x=60, color='seagreen', linestyle="--") g.set_xlabels('Number of KS2 neurons') g.set_ylabels('Number of sessions') #g.savefig(os.path.join(figpath, 'probe_yield_allrecs_perlab.pdf')) ## overall number of sessions that meet criteria for behavior and neural yield sessions = clusts.loc[clusts.ks2_label == 'good', :].groupby(['lab_name', 'subject_uuid', 'session_start_time', 'good_enough_for_brainwide_map'])['cluster_id'].count().reset_index() sessions['enough_neurons'] = (sessions['cluster_id'] > 60) ct = sessions.groupby(['good_enough_for_brainwide_map', 'enough_neurons'])['cluster_id'].count().reset_index() print('total nr of sessions: %d'%ct.cluster_id.sum()) pd.pivot_table(ct, columns=['good_enough_for_brainwide_map'], values=['cluster_id'], index=['enough_neurons']) #sessions.describe() # pd.pivot_table(df, values='cluster_id', index=['lab_name'], # columns=['enough_neurons'], aggfunc=np.sum) # check that this pandas wrangling is correct... 
ephys_sessions = acquisition.Session * subject.Subject * subject.SubjectLab \ * (acquisition.SessionProject & 'session_project = "ibl_neuropixel_brainwide_01"') \ * behavior_analysis.SessionTrainingStatus \ & ephys.ProbeInsertion & ephys.DefaultCluster.Metrics ephys_sessions = ephys_sessions.fetch(format='frame').reset_index() # ephys_sessions # ephys_sessions.groupby(['good_enough_for_brainwide_map'])['session_start_time'].describe() # which sessions do *not* show good enough behavior? ephys_sessions.loc[ephys_sessions.good_enough_for_brainwide_map == 0, :].groupby([ 'lab_name', 'subject_nickname', 'session_start_time'])['session_start_time'].unique() # per lab, what's the drop-out due to behavior? ephys_sessions['good_enough_for_brainwide_map'] = ephys_sessions['good_enough_for_brainwide_map'].astype(int) ephys_sessions.groupby(['lab_name'])['good_enough_for_brainwide_map'].describe() ephys_sessions['good_enough_for_brainwide_map'].describe() # per lab, what's the dropout due to yield? sessions['enough_neurons'] = sessions['enough_neurons'].astype(int) sessions.groupby(['lab_name'])['enough_neurons'].describe() ## also show the total number of neurons, only from good behavior sessions probe_insertions = ephys.ProbeInsertion * ephys.DefaultCluster.Metrics * subject.SubjectLab \ * (acquisition.SessionProject & 'session_project = "ibl_neuropixel_brainwide_01"') \ * (behavior_analysis.SessionTrainingStatus & 'good_enough_for_brainwide_map = 1') probe_insertions = probe_insertions.proj('probe_serial_number', 'probe_model_name', 'lab_name', 'metrics', 'good_enough_for_brainwide_map', session_date='DATE(session_start_time)') clusts = probe_insertions.fetch(format='frame').reset_index() # put metrics into df columns from the blob (feature request: can these be added as attributes instead?) 
for kix, k in enumerate(['ks2_label']):
    tmp_var = []
    for id, c in clusts.iterrows():
        if k in c['metrics'].keys():
            tmp = c['metrics'][k]
        else:
            tmp = np.nan
        tmp_var.append(tmp)
    clusts[k] = tmp_var

# hofer and mrsic-flogel probes are shared
clusts['lab_name'] = clusts['lab_name'].str.replace('mrsicflogellab','swclab')
clusts['lab_name'] = clusts['lab_name'].str.replace('hoferlab','swclab')
clusts.lab_name.unique()

clusts['probe_name'] = clusts['lab_name'] + ', ' + clusts['probe_model_name'] + ': ' + clusts['probe_serial_number']
clusts_summ = clusts.groupby(['lab_name', 'probe_name', 'session_start_time', 'ks2_label'])['session_date'].count().reset_index()

# use recording session number instead of date
clusts_summ['recording'] = clusts_summ.groupby(['probe_name']).cumcount() + 1

## overall distribution of neurons per session
g = sns.FacetGrid(data=clusts_summ, hue='ks2_label', palette='Set2')
g.map(sns.distplot, "session_date", bins=np.arange(10, 500, 15), hist=True, rug=False, kde=False).add_legend()
for ax in g.axes.flatten():
    ax.axvline(x=60, color='seagreen', linestyle="--")
g.set_xlabels('Number of KS2 neurons')
g.set_ylabels('Number of sessions')
g.savefig(os.path.join(figpath, 'probe_yield_allrecs_goodsessions.pdf'))

print('TOTAL YIELD (from good sessions) SO FAR:')
clusts.groupby(['ks2_label'])['ks2_label'].count()
```

## 3. how does probe yield in the repeated site differ between mice/experimenters?
``` probes_rs = (ephys.ProbeTrajectory & 'insertion_data_source = "Planned"' & 'x BETWEEN -2400 AND -2100' & 'y BETWEEN -2100 AND -1900' & 'theta BETWEEN 14 AND 16') clust = ephys.DefaultCluster * ephys.DefaultCluster.Metrics * probes_rs * subject.SubjectLab() * subject.Subject() clust = clust.proj('cluster_amp', 'cluster_depth', 'firing_rate', 'subject_nickname', 'lab_name','metrics', 'x', 'y', 'theta', 'phi', 'depth') clusts = clust.fetch(format='frame').reset_index() clusts['col_name'] = clusts['lab_name'] + ', ' + clusts['subject_nickname'] # put metrics into df columns from the blob for kix, k in enumerate(clusts['metrics'][0].keys()): tmp_var = [] for id, c in clusts.iterrows(): if k in c['metrics'].keys(): tmp = c['metrics'][k] else: tmp = np.nan tmp_var.append(tmp) clusts[k] = tmp_var clusts sns.set(style="ticks", context="paper") g, axes = plt.subplots(1,1,figsize=(4,4)) df = clusts.groupby(['col_name', 'ks2_label']).ks2_label.count() df.unstack().plot.barh(ax=axes, stacked=True, legend=True, colormap='Pastel2') axes.axvline(x=60, color='seagreen', linestyle="--") axes.set_ylabel('') sns.despine(trim=True) plt.xlabel('Number of KS2 neurons') g.savefig(os.path.join(figpath, 'probe_yield_rs.pdf')) ## firing rate as a function of depth print('plotting') g = sns.FacetGrid(data=clusts, col='col_name', col_wrap=4, hue='ks2_label', palette='Pastel2', col_order=sorted(clusts.col_name.unique())) g.map(sns.scatterplot, "firing_rate", "cluster_depth", alpha=0.5).add_legend() g.set_titles('{col_name}') g.set_xlabels('Firing rate (spks/s)') g.set_ylabels('Depth') plt.tight_layout() sns.despine(trim=True) g.savefig(os.path.join(figpath, 'neurons_rsi_firingrate.pdf')) ```
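The metrics-unpacking loop that recurs in this notebook (pull one key out of each row's `metrics` blob, falling back to NaN when the key is absent) can be expressed once with `dict.get`. A plain-Python sketch, with dicts standing in for the fetched rows:

```python
from math import nan, isnan

def unpack_metric(rows, key):
    # Equivalent to the per-row if/else extraction used above:
    # a missing blob or missing key becomes NaN so the column stays numeric.
    return [row.get("metrics", {}).get(key, nan) for row in rows]

rows = [
    {"metrics": {"ks2_label": "good"}},
    {"metrics": {}},  # no ks2_label recorded for this cluster
]
labels = unpack_metric(rows, "ks2_label")
print(labels[0])  # → good
```

With pandas rows the same idea applies per `iterrows()` element, replacing the repeated explicit loops.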
# Measurement Error Mitigation

## Introduction

The measurement calibration is used to mitigate measurement errors. The main idea is to prepare all $2^n$ basis input states and compute the probability of measuring counts in the other basis states. From these calibrations, it is possible to correct the average results of another experiment of interest. This notebook gives examples for how to use the ``ignis.mitigation.measurement`` module.

```
# Import general libraries (needed for functions)
import numpy as np
import time

# Import Qiskit classes
import qiskit
from qiskit import QuantumRegister, QuantumCircuit, ClassicalRegister, Aer
from qiskit.providers.aer import noise
from qiskit.tools.visualization import plot_histogram

# Import measurement calibration functions
from qiskit.ignis.mitigation.measurement import (complete_meas_cal, tensored_meas_cal,
                                                 CompleteMeasFitter, TensoredMeasFitter)
```

## 3 Qubit Example of the Calibration Matrices

Assume that we would like to generate a calibration matrix for the 3 qubits Q2, Q3 and Q4 in a 5-qubit Quantum Register [Q0,Q1,Q2,Q3,Q4]. Since we have 3 qubits, there are $2^3=8$ possible quantum states.

## Generating Measurement Calibration Circuits

First, we generate a list of measurement calibration circuits for the full Hilbert space. Each circuit creates a basis state. If there are $n=3$ qubits, then we get $2^3=8$ calibration circuits.

The following function **complete_meas_cal** returns a list **meas_calibs** of `QuantumCircuit` objects containing the calibration circuits, and a list **state_labels** of the calibration state labels.

The input to this function can be given in one of the following three forms:

- **qubit_list:** A list of qubits to perform the measurement correction on, or:
- **qr (QuantumRegister):** A quantum register, or:
- **cr (ClassicalRegister):** A classical register.

In addition, one can provide a string **circlabel**, which is added at the beginning of the circuit names for unique identification.
For example, in our case, the input is a 5-qubit `QuantumRegister` containing the qubits Q2, Q3, Q4:

```
# Generate the calibration circuits
qr = qiskit.QuantumRegister(5)
qubit_list = [2, 3, 4]
meas_calibs, state_labels = complete_meas_cal(qubit_list=qubit_list, qr=qr, circlabel='mcal')
```

Print the $2^3=8$ state labels (for the 3 qubits Q2, Q3, Q4):

```
state_labels
```

## Computing the Calibration Matrix

If we do not apply any noise, then the calibration matrix is expected to be the $8 \times 8$ identity matrix.

```
# Execute the calibration circuits without noise
backend = qiskit.Aer.get_backend('qasm_simulator')
job = qiskit.execute(meas_calibs, backend=backend, shots=1000)
cal_results = job.result()

# The calibration matrix without noise is the identity matrix
meas_fitter = CompleteMeasFitter(cal_results, state_labels, circlabel='mcal')
print(meas_fitter.cal_matrix)
```

Assume that we apply some noise model from Qiskit Aer to the 5 qubits; then the calibration matrix will have most of its mass on the main diagonal, with some additional 'noise'. Alternatively, we can execute the calibration circuits using an IBMQ provider.

```
# Generate a noise model for the 5 qubits
noise_model = noise.NoiseModel()
for qi in range(5):
    read_err = noise.errors.readout_error.ReadoutError([[0.9, 0.1], [0.25, 0.75]])
    noise_model.add_readout_error(read_err, [qi])

# Execute the calibration circuits
backend = qiskit.Aer.get_backend('qasm_simulator')
job = qiskit.execute(meas_calibs, backend=backend, shots=1000, noise_model=noise_model)
cal_results = job.result()

# Calculate the calibration matrix with the noise model
meas_fitter = CompleteMeasFitter(cal_results, state_labels, qubit_list=qubit_list, circlabel='mcal')
print(meas_fitter.cal_matrix)

# Plot the calibration matrix
meas_fitter.plot_calibration()
```

## Analyzing the Results

We would like to compute the total measurement fidelity, and the measurement fidelity for a specific qubit, for example, Q0.
Since the on-diagonal elements of the calibration matrix are the probabilities of measuring state 'x' given preparation of state 'x', the trace of this matrix is the average assignment fidelity.

```
# What is the measurement fidelity?
print("Average Measurement Fidelity: %f" % meas_fitter.readout_fidelity())

# What is the measurement fidelity of Q0?
print("Average Measurement Fidelity of Q0: %f" % meas_fitter.readout_fidelity(
    label_list=[['000', '001', '010', '011'], ['100', '101', '110', '111']]))
```

## Applying the Calibration

We now perform another experiment and correct the measured results.

## Correct Measurement Noise on a 3Q GHZ State

As an example, we start with the 3-qubit GHZ state on the qubits Q2, Q3, Q4:

$$ \mid GHZ \rangle = \frac{\mid{000} \rangle + \mid{111} \rangle}{\sqrt{2}}$$

```
# Make a 3Q GHZ state
cr = ClassicalRegister(3)
ghz = QuantumCircuit(qr, cr)
ghz.h(qr[2])
ghz.cx(qr[2], qr[3])
ghz.cx(qr[3], qr[4])
ghz.measure(qr[2], cr[0])
ghz.measure(qr[3], cr[1])
ghz.measure(qr[4], cr[2])
```

We now execute the GHZ circuit (with the noise model above):

```
job = qiskit.execute([ghz], backend=backend, shots=5000, noise_model=noise_model)
results = job.result()
```

We now compute the results without any error mitigation and with the mitigation, namely after applying the calibration matrix to the results.

There are two fitting methods for applying the calibration (if no method is defined, then 'least_squares' is used):

- **'pseudo_inverse'**, which is a direct inversion of the calibration matrix,
- **'least_squares'**, which constrains to have physical probabilities.

The raw data to be corrected can be given in a number of forms:

- Form1: A counts dictionary from results.get_counts,
- Form2: A list of counts of length=len(state_labels),
- Form3: A list of counts of length=M*len(state_labels) where M is an integer (e.g. for use with the tomography data),
- Form4: A qiskit Result (e.g. results as above).
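Before applying the filter with Qiskit, the core of the 'pseudo_inverse' method can be sketched with plain NumPy. The calibration matrix and measured probabilities here are made-up toy values for a single qubit, not outputs of the circuits above:

```python
import numpy as np

# Toy 2x2 calibration (assignment) matrix -- values made up for illustration.
# Column j holds the probabilities of each measured outcome when state j was prepared.
cal_matrix = np.array([[0.9, 0.25],
                       [0.1, 0.75]])

# Noisy measured probabilities from some hypothetical experiment
raw = np.array([0.58, 0.42])

# 'pseudo_inverse': directly invert the calibration matrix to undo the readout noise
corrected = np.linalg.pinv(cal_matrix) @ raw
print(corrected)  # entries still sum to 1, but can go negative on real data
```

A direct inversion can produce negative "probabilities" on real data; the 'least_squares' method avoids this by fitting under the constraint that the corrected result is a valid probability distribution, which is why it is the default.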
```
# Results without mitigation
raw_counts = results.get_counts()

# Get the filter object
meas_filter = meas_fitter.filter

# Results with mitigation
mitigated_results = meas_filter.apply(results)
mitigated_counts = mitigated_results.get_counts(0)
```

We can now plot the results with and without error mitigation:

```
from qiskit.tools.visualization import *
plot_histogram([raw_counts, mitigated_counts], legend=['raw', 'mitigated'])
```

### Applying to a reduced subset of qubits

Consider now that we want to correct a 2Q Bell state, but we have the 3Q calibration matrix. We can reduce the matrix and build a new mitigation object.

```
# Make a 2Q Bell state between Q2 and Q4
cr = ClassicalRegister(2)
bell = QuantumCircuit(qr, cr)
bell.h(qr[2])
bell.cx(qr[2], qr[4])
bell.measure(qr[2], cr[0])
bell.measure(qr[4], cr[1])

job = qiskit.execute([bell], backend=backend, shots=5000, noise_model=noise_model)
results = job.result()

# Build a fitter from the subset
meas_fitter_sub = meas_fitter.subset_fitter(qubit_sublist=[2, 4])

# The calibration matrix is now in the space Q2/Q4
meas_fitter_sub.cal_matrix

# Results without mitigation
raw_counts = results.get_counts()

# Get the filter object
meas_filter_sub = meas_fitter_sub.filter

# Results with mitigation
mitigated_results = meas_filter_sub.apply(results)
mitigated_counts = mitigated_results.get_counts(0)

from qiskit.tools.visualization import *
plot_histogram([raw_counts, mitigated_counts], legend=['raw', 'mitigated'])
```

## Tensored mitigation

The calibration can be simplified if the error is known to be local. By "local error" we mean that the error can be tensored to subsets of qubits. In this case, fewer than $2^n$ states are needed for the computation of the calibration matrix.

Assume that the error acts locally on qubit 2 and on the pair of qubits 3 and 4. Construct the calibration circuits by using the function `tensored_meas_cal`. Unlike before, we need to explicitly divide the qubit list up into subset regions.
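Why does locality help? With plain NumPy (toy matrices, purely illustrative, and ignoring the bit-ordering convention), the tensor-product structure and the $2^m$ circuit count can be sketched as follows:

```python
import numpy as np

# Toy single-qubit assignment matrices (illustrative values, not from any device)
A_q2 = np.array([[0.9, 0.25],
                 [0.1, 0.75]])
A_q34 = np.kron(A_q2, A_q2)  # pretend this is the 4x4 matrix for qubits 3 and 4

# Under the locality assumption the full matrix factors as a tensor product,
# so only 2x2 + 4x4 = 20 entries need calibrating instead of the full 8x8 = 64.
full = np.kron(A_q34, A_q2)

# Number of calibration circuits: 2**m, with m the size of the largest subset
mit_pattern = [[2], [3, 4]]
n_circuits = 2 ** max(len(sub) for sub in mit_pattern)  # 4, versus 2**3 = 8 before
```

The factored matrix remains column-stochastic (each column sums to 1), so it is still a valid assignment matrix for the full register.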
```
# Generate the calibration circuits
qr = qiskit.QuantumRegister(5)
mit_pattern = [[2], [3, 4]]
meas_calibs, state_labels = tensored_meas_cal(mit_pattern=mit_pattern, qr=qr, circlabel='mcal')
```

We now retrieve the names of the generated circuits. Note that in each label (of length 3), the least significant bit corresponds to qubit 2, the middle bit corresponds to qubit 3, and the most significant bit corresponds to qubit 4.

```
for circ in meas_calibs:
    print(circ.name)
```

Let us elaborate on the circuit names. We see that there are only four circuits, instead of eight. The total number of required circuits is $2^m$, where $m$ is the number of qubits in the largest subset (here $m=2$).

Each basis state of qubits 3 and 4 appears exactly once. Only two basis states are required for qubit 2, so these are split equally across the four experiments. For example, state '0' of qubit 2 appears in state labels '000' and '010'.

We now execute the calibration circuits on an Aer simulator, using the same noise model as before. This noise is in fact local to qubits 3 and 4 separately, but assume that we don't know it, and that we only know that it is local for qubit 2.

```
# Generate a noise model for the 5 qubits
noise_model = noise.NoiseModel()
for qi in range(5):
    read_err = noise.errors.readout_error.ReadoutError([[0.9, 0.1], [0.25, 0.75]])
    noise_model.add_readout_error(read_err, [qi])

# Execute the calibration circuits
backend = qiskit.Aer.get_backend('qasm_simulator')
job = qiskit.execute(meas_calibs, backend=backend, shots=5000, noise_model=noise_model)
cal_results = job.result()

meas_fitter = TensoredMeasFitter(cal_results, mit_pattern=mit_pattern)
```

The fitter provides two calibration matrices. One matrix is for qubit 2, and the other matrix is for qubits 3 and 4.
```
print(meas_fitter.cal_matrices)
```

We can look at the readout fidelities of the individual tensored components or qubits within a set:

```
# Readout fidelity of Q2
print('Readout fidelity of Q2: %f' % meas_fitter.readout_fidelity(0))

# Readout fidelity of Q3/Q4
print('Readout fidelity of Q3/4 space (e.g. mean assignment '
      '\nfidelity of 00,10,01 and 11): %f' % meas_fitter.readout_fidelity(1))

# Readout fidelity of Q3
print('Readout fidelity of Q3: %f' % meas_fitter.readout_fidelity(1, [['00', '10'], ['01', '11']]))
```

Plot the individual calibration matrices:

```
# Plot the calibration matrix
print('Q2 Calibration Matrix')
meas_fitter.plot_calibration(0)
print('Q3/Q4 Calibration Matrix')
meas_fitter.plot_calibration(1)

# Make a 3Q GHZ state
cr = ClassicalRegister(3)
ghz = QuantumCircuit(qr, cr)
ghz.h(qr[2])
ghz.cx(qr[2], qr[3])
ghz.cx(qr[3], qr[4])
ghz.measure(qr[2], cr[0])
ghz.measure(qr[3], cr[1])
ghz.measure(qr[4], cr[2])
```

We now execute the GHZ circuit (with the noise model above):

```
job = qiskit.execute([ghz], backend=backend, shots=5000, noise_model=noise_model)
results = job.result()

# Results without mitigation
raw_counts = results.get_counts()

# Get the filter object
meas_filter = meas_fitter.filter

# Results with mitigation
mitigated_results = meas_filter.apply(results)
mitigated_counts = mitigated_results.get_counts(0)
```

Plot the raw vs corrected state:

```
meas_filter = meas_fitter.filter
mitigated_results = meas_filter.apply(results)
mitigated_counts = mitigated_results.get_counts(0)
plot_histogram([raw_counts, mitigated_counts], legend=['raw', 'mitigated'])
```

As a check, we should get the same answer if we build the full correction matrix from a tensor product of the subspace calibration matrices:

```
meas_calibs2, state_labels2 = complete_meas_cal([2, 3, 4])
meas_fitter2 = CompleteMeasFitter(None, state_labels2)
meas_fitter2.cal_matrix = np.kron(meas_fitter.cal_matrices[1], meas_fitter.cal_matrices[0])
meas_filter2 = meas_fitter2.filter
mitigated_results2 = meas_filter2.apply(results)
mitigated_counts2 = mitigated_results2.get_counts(0)
plot_histogram([raw_counts, mitigated_counts2], legend=['raw', 'mitigated'])
```

## Running Aqua Algorithms with Measurement Error Mitigation

To use measurement error mitigation when running quantum circuits as part of an Aqua algorithm, we need to include the respective measurement error fitting instance in the QuantumInstance. This object also holds the specifications for the chosen backend.

In the following, we illustrate measurement error mitigation of Aqua algorithms on the example of searching the ground state of a Hamiltonian with VQE.

First, we need to import the libraries that provide backends as well as the classes that are needed to run the algorithm.

```
# Import qiskit functions and libraries
from qiskit import Aer, IBMQ
from qiskit.circuit.library import TwoLocal
from qiskit.aqua import QuantumInstance
from qiskit.aqua.algorithms import VQE
from qiskit.aqua.components.optimizers import COBYLA
from qiskit.aqua.operators import X, Y, Z, I, CX, T, H, S, PrimitiveOp
from qiskit.providers.aer import noise

# Import error mitigation functions
from qiskit.ignis.mitigation.measurement import CompleteMeasFitter
```

Then, we initialize the instances that are required to execute the algorithm.

```
# Initialize Hamiltonian
h_op = (-1.0523732 * I^I) + \
       (0.39793742 * I^Z) + \
       (-0.3979374 * Z^I) + \
       (-0.0112801 * Z^Z) + \
       (0.18093119 * X^X)

# Initialize trial state
var_form = TwoLocal(h_op.num_qubits, ['ry', 'rz'], 'cz', reps=3, entanglement='full')

# Initialize optimizer
optimizer = COBYLA(maxiter=350)

# Initialize algorithm to find the ground state
vqe = VQE(h_op, var_form, optimizer)
```

Here, we choose the Aer `qasm_simulator` as backend and also add a custom noise model. The application of an actual quantum backend provided by IBMQ is outlined in the commented code.
```
# Generate a noise model
noise_model = noise.NoiseModel()
for qi in range(h_op.num_qubits):
    read_err = noise.errors.readout_error.ReadoutError([[0.8, 0.2], [0.1, 0.9]])
    noise_model.add_readout_error(read_err, [qi])

# Initialize the backend configuration using measurement error mitigation with a QuantumInstance
qi_noise_model_qasm = QuantumInstance(backend=Aer.get_backend('qasm_simulator'),
                                      noise_model=noise_model, shots=1000,
                                      measurement_error_mitigation_cls=CompleteMeasFitter,
                                      measurement_error_mitigation_shots=1000)

# Initialize your TOKEN and provider with
# provider = IBMQ.get_provider(...)
# qi_noise_model_ibmq = QuantumInstance(backend=provider.get_backend(backend_name), shots=8000,
#                                       measurement_error_mitigation_cls=CompleteMeasFitter,
#                                       measurement_error_mitigation_shots=8000)
```

Finally, we can run the algorithm and check the results.

```
# Run the algorithm
result = vqe.run(qi_noise_model_qasm)
print(result)

import qiskit.tools.jupyter
%qiskit_version_table
%qiskit_copyright
```
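As an independent sanity check on the VQE run, the same two-qubit Hamiltonian can be built from explicit Pauli matrices and diagonalized exactly with NumPy. The coefficients are copied from `h_op` above; the tensor-ordering convention chosen here is an assumption, but it does not affect the spectrum:

```python
import numpy as np

# Pauli matrices
I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])

# Same coefficients as h_op; swapping the tensor factors globally would only
# permute the basis, so the eigenvalues are unchanged.
H = (-1.0523732 * np.kron(I2, I2)
     + 0.39793742 * np.kron(I2, Z)
     - 0.3979374 * np.kron(Z, I2)
     - 0.0112801 * np.kron(Z, Z)
     + 0.18093119 * np.kron(X, X))

ground_energy = np.linalg.eigvalsh(H).min()
print(ground_energy)  # approximately -1.857
```

A well-converged VQE run should approach this exact ground-state energy; a large gap between the two is a sign that noise (or insufficient mitigation) is distorting the result.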
# Generative Adversarial Network

In this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits!

GANs were [first reported on](https://arxiv.org/abs/1406.2661) in 2014 by Ian Goodfellow and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out:

* [Pix2Pix](https://affinelayer.com/pixsrv/)
* [CycleGAN](https://github.com/junyanz/CycleGAN)
* [A whole list](https://github.com/wiseodd/generative-models)

The idea behind GANs is that you have two networks, a generator $G$ and a discriminator $D$, competing against each other. The generator makes fake data to pass to the discriminator. The discriminator also sees real data and predicts if the data it's received is real or fake. The generator is trained to fool the discriminator: it wants to output data that looks _as close as possible_ to real data. And the discriminator is trained to figure out which data is real and which is fake. What ends up happening is that the generator learns to make data that is indistinguishable from real data to the discriminator.

![GAN diagram](assets/gan_diagram.png)

The general structure of a GAN is shown in the diagram above, using MNIST images as data. The latent sample is a random vector the generator uses to construct its fake images. As the generator learns through training, it figures out how to map these random vectors to recognizable images that can fool the discriminator.

The output of the discriminator is a sigmoid function, where 0 indicates a fake image and 1 indicates a real image. If you're interested only in generating new images, you can throw out the discriminator after training. Now, let's see how we build this thing in TensorFlow.
```
%matplotlib inline

import pickle as pkl
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt

from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data')
```

## Model Inputs

First we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input `inputs_real` and the generator input `inputs_z`. We'll assign them the appropriate sizes for each of the networks.

>**Exercise:** Finish the `model_inputs` function below. Create the placeholders for `inputs_real` and `inputs_z` using the input sizes `real_dim` and `z_dim` respectively.

```
def model_inputs(real_dim, z_dim):
    inputs_real = tf.placeholder(tf.float32, (None, real_dim), name='input_real')
    inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z')

    return inputs_real, inputs_z
```

## Generator network

![GAN Network](assets/gan_network.png)

Here we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values.

#### Variable Scope

Here we need to use `tf.variable_scope` for two reasons. Firstly, we're going to make sure all the variable names start with `generator`. Similarly, we'll prepend `discriminator` to the discriminator variables. This will help out later when we're training the separate networks.

We could just use `tf.name_scope` to set the names, but we also want to reuse these networks with different inputs. For the generator, we're going to train it, but also _sample from it_ as we're training and after training. The discriminator will need to share variables between the fake and real input images.
So, we can use the `reuse` keyword for `tf.variable_scope` to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again.

To use `tf.variable_scope`, you use a `with` statement:

```python
with tf.variable_scope('scope_name', reuse=False):
    # code here
```

Here's more from [the TensorFlow documentation](https://www.tensorflow.org/programmers_guide/variable_scope#the_problem) to get another look at using `tf.variable_scope`.

#### Leaky ReLU

TensorFlow doesn't provide an operation for leaky ReLUs, so we'll need to make one. For this you can just take the outputs from a linear fully connected layer and pass them to `tf.maximum`. Typically, a parameter `alpha` sets the magnitude of the output for negative values. So, the output for negative input (`x`) values is `alpha*x`, and the output for positive `x` is `x`:

$$ f(x) = max(\alpha * x, x) $$

#### Tanh Output

The generator has been found to perform the best with $tanh$ for the generator output. This means that we'll have to rescale the MNIST images to be between -1 and 1, instead of 0 and 1.

>**Exercise:** Implement the generator network in the function below. You'll need to return the tanh output. Make sure to wrap your code in a variable scope, with 'generator' as the scope name, and pass the `reuse` keyword argument from the function to `tf.variable_scope`.

```
def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01):
    ''' Build the generator network.
        Arguments
        ---------
        z : Input tensor for the generator
        out_dim : Shape of the generator output
        n_units : Number of units in hidden layer
        reuse : Reuse the variables with tf.variable_scope
        alpha : leak parameter for leaky ReLU

        Returns
        -------
        out : tanh output of the generator
    '''
    with tf.variable_scope('generator', reuse=reuse):
        # Hidden layer
        h1 = tf.layers.dense(z, n_units)
        # Leaky ReLU
        h1 = tf.maximum(alpha * h1, h1)

        # Logits and tanh output
        logits = tf.layers.dense(h1, out_dim)
        out = tf.tanh(logits)

        return out
```

## Discriminator

The discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer.

>**Exercise:** Implement the discriminator network in the function below. Same as above, you'll need to return both the logits and the sigmoid output. Make sure to wrap your code in a variable scope, with 'discriminator' as the scope name, and pass the `reuse` keyword argument from the function arguments to `tf.variable_scope`.

```
def discriminator(x, n_units=128, reuse=False, alpha=0.01):
    ''' Build the discriminator network.

        Arguments
        ---------
        x : Input tensor for the discriminator
        n_units: Number of units in hidden layer
        reuse : Reuse the variables with tf.variable_scope
        alpha : leak parameter for leaky ReLU

        Returns
        -------
        out, logits : sigmoid output and logits of the discriminator
    '''
    with tf.variable_scope('discriminator', reuse=reuse):
        # Hidden layer
        h1 = tf.layers.dense(x, n_units)
        # Leaky ReLU
        h1 = tf.maximum(alpha * h1, h1)

        logits = tf.layers.dense(h1, 1)
        out = tf.sigmoid(logits)

        return out, logits
```

## Hyperparameters

```
# Size of input image to discriminator
input_size = 784  # 28x28 MNIST images flattened
# Size of latent vector to generator
z_size = 100
# Sizes of hidden layers in generator and discriminator
g_hidden_size = 128
d_hidden_size = 128
# Leak factor for leaky ReLU
alpha = 0.01
# Label smoothing
smooth = 0.1
```

## Build network

Now we're building the network from the functions defined above.
First is to get our inputs, `input_real, input_z`, from `model_inputs` using the sizes of the input and z. Then, we'll create the generator, `generator(input_z, input_size)`. This builds the generator with the appropriate input and output sizes.

Then the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as `g_model`. So the real data discriminator is `discriminator(input_real)` while the fake discriminator is `discriminator(g_model, reuse=True)`.

>**Exercise:** Build the network from the functions you defined earlier.

```
tf.reset_default_graph()

# Create our input placeholders
input_real, input_z = model_inputs(input_size, z_size)

# Generator network here; g_model is the generator output
g_model = generator(input_z, input_size, g_hidden_size, False, alpha)

# Discriminator network here
d_model_real, d_logits_real = discriminator(input_real, d_hidden_size, False, alpha)
d_model_fake, d_logits_fake = discriminator(g_model, d_hidden_size, True, alpha)
```

## Discriminator and Generator Losses

Now we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, `d_loss = d_loss_real + d_loss_fake`. The losses will be sigmoid cross-entropies, which we can get with `tf.nn.sigmoid_cross_entropy_with_logits`. We'll also wrap that in `tf.reduce_mean` to get the mean for all the images in the batch.

So the losses will look something like

```python
tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
```

For the real image logits, we'll use `d_logits_real`, which we got from the discriminator in the cell above. For the labels, we want them to be all ones, since these are all real images.
To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter `smooth`. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like `labels = tf.ones_like(tensor) * (1 - smooth)`.

The discriminator loss for the fake data is similar. The logits are `d_logits_fake`, which we got from passing the generator output to the discriminator. These fake logits are used with labels of all zeros. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.

Finally, the generator losses are using `d_logits_fake`, the fake image logits. But now the labels are all ones. The generator is trying to fool the discriminator, so it wants the discriminator to output ones for fake images.

>**Exercise:** Calculate the losses for the discriminator and the generator. There are two discriminator losses, one for real images and one for fake images. For the real image loss, use the real logits and (smoothed) labels of ones. For the fake image loss, use the fake logits with labels of all zeros. The total discriminator loss is the sum of those two losses. Finally, the generator loss again uses the fake logits from the discriminator, but this time the labels are all ones because the generator wants to fool the discriminator.

```
# Calculate losses
real_labels = tf.ones_like(d_logits_real)
d_loss_real = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real,
                                            labels=real_labels * (1 - smooth)))

fake_labels = tf.zeros_like(d_logits_fake)
d_loss_fake = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake,
                                            labels=fake_labels))

d_loss = d_loss_real + d_loss_fake

g_loss = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake,
                                            labels=real_labels))
```

## Optimizers

We want to update the generator and discriminator variables separately.
So we need to get the variables for each part and build optimizers for the two parts. To get all the trainable variables, we use `tf.trainable_variables()`. This creates a list of all the variables we've defined in our graph.

For the generator optimizer, we only want the generator variables. Our past selves were nice and used a variable scope to start all of our generator variable names with `generator`. So, we just need to iterate through the list from `tf.trainable_variables()` and keep variables that start with `generator`. Each variable object has an attribute `name` which holds the name of the variable as a string (`var.name == 'weights_0'` for instance).

We can do something similar with the discriminator. All the variables in the discriminator start with `discriminator`.

Then, in the optimizer we pass the variable lists to the `var_list` keyword argument of the `minimize` method. This tells the optimizer to only update the listed variables. Something like `tf.train.AdamOptimizer().minimize(loss, var_list=var_list)` will only train the variables in `var_list`.

>**Exercise:** Below, implement the optimizers for the generator and discriminator. First you'll need to get a list of trainable variables, then split that list into two lists, one for the generator variables and another for the discriminator variables. Finally, using `AdamOptimizer`, create an optimizer for each network that updates the network variables separately.
```
# Optimizers
learning_rate = 0.002

# Get the trainable_variables, split into G and D parts
t_vars = tf.trainable_variables()
g_vars = [v for v in t_vars if v.name.startswith('generator')]
d_vars = [v for v in t_vars if v.name.startswith('discriminator')]

d_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(d_loss, var_list=d_vars)
g_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(g_loss, var_list=g_vars)
```

## Training

```
batch_size = 100
epochs = 100
samples = []
losses = []
saver = tf.train.Saver(var_list=g_vars)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for e in range(epochs):
        for ii in range(mnist.train.num_examples//batch_size):
            batch = mnist.train.next_batch(batch_size)

            # Get images, reshape and rescale to pass to D
            batch_images = batch[0].reshape((batch_size, 784))
            batch_images = batch_images*2 - 1

            # Sample random noise for G
            batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))

            # Run optimizers
            _ = sess.run(d_train_opt, feed_dict={input_real: batch_images, input_z: batch_z})
            _ = sess.run(g_train_opt, feed_dict={input_real: batch_images, input_z: batch_z})

        # At the end of each epoch, get the losses and print them out
        train_loss_d = sess.run(d_loss, {input_z: batch_z, input_real: batch_images})
        train_loss_g = g_loss.eval({input_real: batch_images, input_z: batch_z})

        print("Epoch {}/{}...".format(e+1, epochs),
              "Discriminator Loss: {:.4f}...".format(train_loss_d),
              "Generator Loss: {:.4f}".format(train_loss_g))
        # Save losses to view after training
        losses.append((train_loss_d, train_loss_g))

        # Sample from generator as we're training for viewing afterwards
        sample_z = np.random.uniform(-1, 1, size=(16, z_size))
        gen_samples = sess.run(
            generator(input_z, input_size, reuse=True),
            feed_dict={input_z: sample_z})
        samples.append(gen_samples)
        saver.save(sess, './checkpoints/generator.ckpt')

# Save training generator samples
with open('train_samples.pkl', 'wb') as f:
    pkl.dump(samples, f)
```

## Training loss
Here we'll check out the training losses for the generator and discriminator.

```
%matplotlib inline

import matplotlib.pyplot as plt

fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator')
plt.plot(losses.T[1], label='Generator')
plt.title("Training Losses")
plt.legend()
```

## Generator samples from training

Here we can view samples of images from the generator. First we'll look at images taken while training.

```
def view_samples(epoch, samples):
    fig, axes = plt.subplots(figsize=(7, 7), nrows=4, ncols=4, sharey=True, sharex=True)
    for ax, img in zip(axes.flatten(), samples[epoch]):
        ax.xaxis.set_visible(False)
        ax.yaxis.set_visible(False)
        im = ax.imshow(img.reshape((28, 28)), cmap='Greys_r')

    return fig, axes

# Load samples from generator taken while training
with open('train_samples.pkl', 'rb') as f:
    samples = pkl.load(f)
```

These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 5, 7, 3, 0, 9. Since this is just a sample, it isn't representative of the full range of images this generator can make.

```
_ = view_samples(-1, samples)
```

Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion!

```
rows, cols = 10, 6
fig, axes = plt.subplots(figsize=(7, 12), nrows=rows, ncols=cols, sharex=True, sharey=True)

for sample, ax_row in zip(samples[::int(len(samples)/rows)], axes):
    for img, ax in zip(sample[::int(len(sample)/cols)], ax_row):
        ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
        ax.xaxis.set_visible(False)
        ax.yaxis.set_visible(False)
```

It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number-like structures appear out of the noise. Looks like 1, 9, and 8 show up first. Then, it learns 5 and 3.

## Sampling from the generator

We can also get completely new images from the generator by using the checkpoint we saved after training.
We just need to pass in a new latent vector $z$ and we'll get new samples!

```
saver = tf.train.Saver(var_list=g_vars)
with tf.Session() as sess:
    saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
    sample_z = np.random.uniform(-1, 1, size=(16, z_size))
    gen_samples = sess.run(
        generator(input_z, input_size, reuse=True),
        feed_dict={input_z: sample_z})
view_samples(0, [gen_samples])
```
# FINN - Functional Verification of End-to-End Flow

-----------------------------------------------------------------

**Important: This notebook depends on the tfc_end2end_example notebook, because we are using models that are available at intermediate steps in the end-to-end flow. So please make sure the needed .onnx files are generated to run this notebook.**

In this notebook, we will show how to take the intermediate results of the end-to-end tfc example and verify their functionality with different methods. In the following picture you can see the section in the end-to-end flow about the *Simulation & Emulation Flows*. Besides the methods in this notebook, there is another one that is covered in the Jupyter notebook [tfc_end2end_example](tfc_end2end_example.ipynb): remote execution. Remote execution allows functional verification directly on the PYNQ board; for details please have a look at the mentioned Jupyter notebook.

<img src="verification.png" alt="Drawing" style="width: 500px;"/>

We will use the following helper functions: `showSrc` to show source code of FINN library calls and `showInNetron` to show the ONNX model at the current transformation step. The Netron displays are interactive, but they only work when running the notebook actively and not on GitHub (i.e. if you are viewing this on GitHub you'll only see blank squares).

```
from finn.util.basic import make_build_dir
from finn.util.visualization import showSrc, showInNetron

build_dir = "/workspace/finn"
```

To verify the simulations, a "golden" output is calculated as a reference. This is calculated directly from the Brevitas model using PyTorch, by running some example data from the MNIST dataset through the trained model.
```
from pkgutil import get_data
import onnx
import onnx.numpy_helper as nph
import torch

from finn.util.test import get_test_model_trained

fc = get_test_model_trained("TFC", 1, 1)
raw_i = get_data("finn.data", "onnx/mnist-conv/test_data_set_0/input_0.pb")
input_tensor = onnx.load_tensor_from_string(raw_i)
input_brevitas = torch.from_numpy(nph.to_array(input_tensor)).float()
output_golden = fc.forward(input_brevitas).detach().numpy()
output_golden
```

## Simulation using Python <a id='simpy'></a>

If an ONNX model consists of [standard ONNX](https://github.com/onnx/onnx/blob/master/docs/Operators.md) nodes and/or FINN custom operations that do not belong to the fpgadataflow backend (backend $\neq$ "fpgadataflow"), the model can be checked for functionality using Python.

To simulate a standard ONNX node, [onnxruntime](https://github.com/microsoft/onnxruntime) is used. onnxruntime is an open-source tool developed by Microsoft to run standard ONNX nodes. For FINN custom op nodes, execution functions are defined. The following is an example of the execution function of an XNOR popcount node.

```
from finn.custom_op.general.xnorpopcount import xnorpopcountmatmul
showSrc(xnorpopcountmatmul)
```

The function contains a description of the behaviour in Python and can thus calculate the result of the node. This execution function and onnxruntime are used when `execute_onnx` from `onnx_exec` is applied to the model. The model is then simulated node by node and the result is stored in a context dictionary, which contains the values of each tensor at the end of the execution. To get the result, only the output tensor has to be extracted.

The procedure is shown below. We take the model right before the nodes are converted into HLS layers and generate an input tensor to pass to the execution function. The input tensor is generated from the Brevitas example inputs.
```
import numpy as np
from finn.core.modelwrapper import ModelWrapper

input_dict = {"global_in": nph.to_array(input_tensor)}
model_for_sim = ModelWrapper(build_dir+"/tfc_w1a1_ready_for_hls_conversion.onnx")

import finn.core.onnx_exec as oxe
output_dict = oxe.execute_onnx(model_for_sim, input_dict)
output_pysim = output_dict[list(output_dict.keys())[0]]

if np.isclose(output_pysim, output_golden, atol=1e-3).all():
    print("Results are the same!")
else:
    print("The results are not the same!")
```

The result is compared with the theoretical "golden" value for verification.

## Simulation (cppsim) using C++

When dealing with HLS custom op nodes in FINN, the simulation using Python is no longer sufficient. After the nodes have been converted to HLS layers, the simulation using C++ can be used instead. To do this, the input tensor is stored in an .npy file and C++ code is generated that reads the values from the .npy array, streams them to the corresponding finn-hlslib function and writes the result to a new .npy file. This in turn can be read in Python and processed in the FINN flow.

For this example, the model after setting the folding factors in the HLS layers is used. Please be aware that this is not the full model, but the dataflow partition, so before executing at the end of this section we have to integrate the model back into the parent model.
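The .npy round-trip that cppsim relies on for exchanging tensors between Python and the generated C++ can be illustrated with plain numpy (a sketch of the mechanism only — the file name below is made up; the real file names and directories are chosen by FINN's code generation):

```python
import numpy as np

# Sketch of the cppsim data exchange: Python writes the input tensor to an
# .npy file, the compiled node executable reads it (via cnpy on the C++ side),
# and the result comes back the same way.
input_tensor = np.random.randint(0, 2, size=(1, 784)).astype(np.float32)
np.save("input.npy", input_tensor)

# ... the compiled node_model executable would run here ...

# Reading the file back yields a bit-identical tensor.
roundtrip = np.load("input.npy")
assert (roundtrip == input_tensor).all()
print(roundtrip.shape)
```

Because .npy files store the dtype and shape alongside the raw buffer, both sides of the exchange agree on the tensor layout without any extra metadata.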
```
model_for_cppsim = ModelWrapper(build_dir+"/tfc_w1_a1_set_folding_factors.onnx")
```

To generate the code for this simulation and to generate the executable, two transformations are used:
* `PrepareCppSim` which generates the C++ code for the corresponding HLS layer
* `CompileCppSim` which compiles the C++ code and stores the path to the executable

```
from finn.transformation.fpgadataflow.prepare_cppsim import PrepareCppSim
from finn.transformation.fpgadataflow.compile_cppsim import CompileCppSim
from finn.transformation.general import GiveUniqueNodeNames

model_for_cppsim = model_for_cppsim.transform(GiveUniqueNodeNames())
model_for_cppsim = model_for_cppsim.transform(PrepareCppSim())
model_for_cppsim = model_for_cppsim.transform(CompileCppSim())
```

When we take a look at the model using Netron, we can see that the transformations introduced new attributes.

```
model_for_cppsim.save(build_dir+"/tfc_w1_a1_for_cppsim.onnx")
showInNetron(build_dir+"/tfc_w1_a1_for_cppsim.onnx")
```

The following node attributes have been added:
* `code_gen_dir_cppsim` indicates the directory where the files for the simulation using C++ are stored
* `executable_path` specifies the path to the executable

We now take a closer look at the files that were generated:

```
from finn.custom_op.registry import getCustomOp

fc0 = model_for_cppsim.graph.node[1]
fc0w = getCustomOp(fc0)
code_gen_dir = fc0w.get_nodeattr("code_gen_dir_cppsim")
!ls {code_gen_dir}
```

Besides the .cpp file, the folder contains .h files with the weights and thresholds. The shell script contains the compile command and *node_model* is the executable generated by compilation. Comparing this with the `executable_path` node attribute, it can be seen that it specifies exactly the path to *node_model*.

To simulate the model, the execution mode (`exec_mode`) must be set to "cppsim". This is done using the transformation `SetExecMode`.
```
from finn.transformation.fpgadataflow.set_exec_mode import SetExecMode

model_for_cppsim = model_for_cppsim.transform(SetExecMode("cppsim"))
model_for_cppsim.save(build_dir+"/tfc_w1_a1_for_cppsim.onnx")
```

Before the model can be executed using `execute_onnx`, we integrate the child model into the parent model. The function then reads the `exec_mode` and writes the input into the correct directory as a .npy file. To be able to read this in C++, there is an additional .hpp file ([npy2apintstream.hpp](https://github.com/Xilinx/finn/blob/master/src/finn/data/cpp/npy2apintstream.hpp)) in FINN, which uses cnpy to read .npy files and convert them into streams, or to read a stream and write it into an .npy file. [cnpy](https://github.com/rogersce/cnpy) is a helper to read and write .npy and .npz formats in C++. The result is again compared to the "golden" output.

```
parent_model = ModelWrapper(build_dir+"/tfc_w1_a1_dataflow_parent.onnx")
sdp_node = parent_model.graph.node[2]
child_model = build_dir + "/tfc_w1_a1_for_cppsim.onnx"
getCustomOp(sdp_node).set_nodeattr("model", child_model)
output_dict = oxe.execute_onnx(parent_model, input_dict)
output_cppsim = output_dict[list(output_dict.keys())[0]]

if np.isclose(output_cppsim, output_golden, atol=1e-3).all():
    print("Results are the same!")
else:
    print("The results are not the same!")
```

## Emulation (rtlsim) using PyVerilator

The emulation using [PyVerilator](https://github.com/maltanar/pyverilator) can be done after IP blocks are generated from the corresponding HLS layers. PyVerilator is a tool which makes it possible to simulate Verilog files using Verilator via a Python interface.

We have two ways to use rtlsim: one is to run the model node by node, as with the simulation methods above; but if the model is in the form of the dataflow partition, the part of the graph that consists only of HLS nodes can also be executed as a whole.
Because at the point where we want to grab and verify the model, the model is already in split form (a parent graph consisting of non-HLS layers and a child graph consisting only of HLS layers), we first have to reference the child graph within the parent graph. This is done using the node attribute `model` of the `StreamingDataflowPartition` node. We first show the procedure for a child graph whose individual layers have corresponding IP blocks, and then the procedure for a child graph that already has a stitched IP.

### Emulation of model node-by-node

The child model is loaded and the `exec_mode` for each node is set. To prepare the node-by-node emulation, the transformation `PrepareRTLSim` is applied to the child model. With this transformation, the emulation files are created for each node and can be used directly when calling `execute_onnx()`. After the transformation, each node has a new node attribute "rtlsim_so", which contains the path to the corresponding emulation files. The model is then saved in a new .onnx file so that the changed model can be referenced in the parent model.

```
from finn.transformation.fpgadataflow.prepare_rtlsim import PrepareRTLSim
from finn.transformation.fpgadataflow.prepare_ip import PrepareIP
from finn.transformation.fpgadataflow.hlssynth_ip import HLSSynthIP

test_fpga_part = "xc7z020clg400-1"
target_clk_ns = 10

child_model = ModelWrapper(build_dir + "/tfc_w1_a1_set_folding_factors.onnx")
child_model = child_model.transform(GiveUniqueNodeNames())
child_model = child_model.transform(PrepareIP(test_fpga_part, target_clk_ns))
child_model = child_model.transform(HLSSynthIP())
child_model = child_model.transform(SetExecMode("rtlsim"))
child_model = child_model.transform(PrepareRTLSim())
child_model.save(build_dir + "/tfc_w1_a1_dataflow_child.onnx")
```

The next step is to load the parent model and set the node attribute `model` in the StreamingDataflowPartition node (`sdp_node`).
Afterwards, the `exec_mode` is set for each node in the parent model.

```
# parent model
model_for_rtlsim = ModelWrapper(build_dir + "/tfc_w1_a1_dataflow_parent.onnx")

# reference child model
sdp_node = getCustomOp(model_for_rtlsim.graph.node[2])
sdp_node.set_nodeattr("model", build_dir + "/tfc_w1_a1_dataflow_child.onnx")

model_for_rtlsim = model_for_rtlsim.transform(SetExecMode("rtlsim"))
```

Because the files needed for the emulation were already generated in the Jupyter notebook [tfc_end2end_example](tfc_end2end_example.ipynb), the model can be executed directly in the next step.

```
output_dict = oxe.execute_onnx(model_for_rtlsim, input_dict)
output_rtlsim = output_dict[list(output_dict.keys())[0]]

if np.isclose(output_rtlsim, output_golden, atol=1e-3).all():
    print("Results are the same!")
else:
    print("The results are not the same!")
```

### Emulation of stitched IP

Here we use the same procedure. First the child model is loaded, but in contrast to the layer-by-layer emulation, the metadata property `exec_mode` is set to "rtlsim" for the whole child model. When the model is integrated and executed in the last step, the Verilog files of the stitched IP of the child model are used.
```
from finn.transformation.fpgadataflow.insert_dwc import InsertDWC
from finn.transformation.fpgadataflow.insert_fifo import InsertFIFO
from finn.transformation.fpgadataflow.create_stitched_ip import CreateStitchedIP

child_model = ModelWrapper(build_dir + "/tfc_w1_a1_dataflow_child.onnx")
child_model = child_model.transform(InsertDWC())
child_model = child_model.transform(InsertFIFO())
child_model = child_model.transform(GiveUniqueNodeNames())
child_model = child_model.transform(PrepareIP(test_fpga_part, target_clk_ns))
child_model = child_model.transform(HLSSynthIP())
child_model = child_model.transform(CreateStitchedIP(test_fpga_part, target_clk_ns))
child_model = child_model.transform(PrepareRTLSim())
child_model.set_metadata_prop("exec_mode", "rtlsim")
child_model.save(build_dir + "/tfc_w1_a1_dataflow_child.onnx")

# parent model
model_for_rtlsim = ModelWrapper(build_dir + "/tfc_w1_a1_dataflow_parent.onnx")

# reference child model
sdp_node = getCustomOp(model_for_rtlsim.graph.node[2])
sdp_node.set_nodeattr("model", build_dir + "/tfc_w1_a1_dataflow_child.onnx")

output_dict = oxe.execute_onnx(model_for_rtlsim, input_dict)
output_rtlsim = output_dict[list(output_dict.keys())[0]]

if np.isclose(output_rtlsim, output_golden, atol=1e-3).all():
    print("Results are the same!")
else:
    print("The results are not the same!")
```
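The same `np.isclose` comparison against the golden reference recurs for every verification method. It can be factored into a small helper — a sketch only; `verify` is a hypothetical name, not part of the FINN API, and the vectors below are made-up stand-ins for the real outputs:

```python
import numpy as np

def verify(output, golden, atol=1e-3):
    """Return True if the simulated output matches the golden
    reference within the given absolute tolerance."""
    return bool(np.isclose(output, golden, atol=atol).all())

# Dummy data standing in for output_golden / output_rtlsim:
golden = np.array([[0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]])
sim = golden + 1e-4  # small numerical deviation, below atol

print(verify(sim, golden))        # deviation below atol passes -> True
print(verify(sim + 1.0, golden))  # a large deviation fails -> False
```

The absolute tolerance of 1e-3 matches the one used throughout this notebook; for integer-valued outputs an exact comparison would also work.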
```
# dependencies and setup
from bs4 import BeautifulSoup as bs
from splinter import Browser
import time
import pandas as pd

# NEED TO CHANGE THE PATH TO MATCH YOUR COMPUTER
# showing the computer where to find the chromedriver
executable_path = {"executable_path": "/usr/local/bin/chromedriver"}
browser = Browser("chrome", **executable_path, headless=False)

# Visit the NASA website to find the top Mars news article
mars_url = "https://mars.nasa.gov/news/"
browser.visit(mars_url)
time.sleep(1)
html_mars_site = browser.html
time.sleep(1)

# Scrape page into Soup
soup = bs(html_mars_site, "html.parser")
time.sleep(1)

# Find the latest news title and headline text in soup
news_title = soup.find("div", class_="content_title").text
time.sleep(1)
news_p = soup.find("div", class_="article_teaser_body").text
time.sleep(1)

# print the variables to check that we're pulling the right things
print(f"The latest title is: {news_title}")
print(f"With the description: {news_p}")
```

# Image

```
# visit mars website
jpl_image_url = "https://www.jpl.nasa.gov/spaceimages/?search=&category=Mars"
jpl_image_url_base = "https://www.jpl.nasa.gov"
browser.visit(jpl_image_url)
time.sleep(1)
browser.click_link_by_partial_text('FULL IMAGE')
time.sleep(1)
browser.click_link_by_partial_text('more info')
time.sleep(1)

# scrape page into Soup
html_image_site = browser.html
mars_image_soup = bs(html_image_site, "html.parser")

# find the image in soup
search_image = mars_image_soup.find(class_="main_image")
featured_image_url = jpl_image_url_base + search_image["src"]
print(featured_image_url)
```

# weather

```
# Visit the Twitter website to find the weather information
tweet_weather_url = "https://twitter.com/marswxreport?lang=en"
browser.visit(tweet_weather_url)
time.sleep(1)

# Scrape page into Soup
html_weather_twitter = browser.html
time.sleep(1)
weather_soup = bs(html_weather_twitter, "html.parser")

# Find the text in soup
mars_weather = weather_soup.find("p", class_="TweetTextSize TweetTextSize--normal js-tweet-text tweet-text")
```

# Mars Facts

```
# Visit the Space Facts website to find Mars facts
mars_facts_url = "https://space-facts.com/mars/"
mars_facts = pd.read_html(mars_facts_url)
facts_df = mars_facts[0]

# Create a dataframe and add columns
facts_df.columns = ['Description', 'Value']
facts_df.to_html(header=False, index=False)
facts_df
```

# Hemisphere

```
# Establish list of image urls and urls needed for hemispheres
list_of_img_urls = []
mars_hemisphere_url = "https://astrogeology.usgs.gov/search/results?q=hemisphere+enhanced&k1=target&v1=Mars"
hemisphere_base_url = "https://astrogeology.usgs.gov"

# Visit the Astrogeology website to find Mars hemisphere images
browser.visit(mars_hemisphere_url)
time.sleep(1)

# Scrape page into Soup
html_hemispheres = browser.html
time.sleep(1)
hemisphere_soup = bs(html_hemispheres, 'html.parser')
time.sleep(1)

# Find the item divs in soup
items = hemisphere_soup.find_all('div', class_='item')
time.sleep(1)

# Create a loop to populate list of image urls
for x in items:
    title = x.find("h3").text
    time.sleep(1)
    image_url_portion = x.find('a', class_='itemLink product-item')["href"]
    time.sleep(1)
    browser.visit(hemisphere_base_url + image_url_portion)
    time.sleep(1)
    image_url_portion_html = browser.html
    time.sleep(1)
    hemisphere_soup = bs(image_url_portion_html, "html.parser")
    time.sleep(1)
    complete_img_url = hemisphere_base_url + hemisphere_soup.find("img", class_="wide-image")["src"]
    time.sleep(1)
    list_of_img_urls.append({"title": title, "img_url": complete_img_url})
    time.sleep(1)

list_of_img_urls

browser.quit()
```
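The `soup.find(...)` pattern used throughout depends on live pages, so it is hard to test offline. Its behaviour can be illustrated with the standard library's `html.parser` on a static snippet (a sketch only — the markup below is made up and merely stands in for the scraped NASA page structure):

```python
from html.parser import HTMLParser

# Made-up markup standing in for the scraped page:
SNIPPET = """
<div class="content_title">Mars Rover Lands Safely</div>
<div class="article_teaser_body">The rover touched down today.</div>
"""

class ClassTextFinder(HTMLParser):
    """Collect the text of the first tag carrying a given class attribute,
    mimicking soup.find(class_=...).text for this simple case."""
    def __init__(self, wanted_class):
        super().__init__()
        self.wanted_class = wanted_class
        self.capturing = False
        self.text = None

    def handle_starttag(self, tag, attrs):
        # capture only the first match, like BeautifulSoup's find()
        if self.text is None and dict(attrs).get("class") == self.wanted_class:
            self.capturing = True

    def handle_data(self, data):
        if self.capturing:
            self.text = data
            self.capturing = False

finder = ClassTextFinder("content_title")
finder.feed(SNIPPET)
print(finder.text)  # -> Mars Rover Lands Safely
```

BeautifulSoup does the same kind of traversal with far less code (and handles nested tags, multiple classes, and malformed HTML), which is why the notebook uses it for the real pages.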
```
import tensorflow as tf
from tensorflow.compat.v1 import ConfigProto
from tensorflow.compat.v1 import InteractiveSession

config = ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.5
config.gpu_options.allow_growth = True
session = InteractiveSession(config=config)

# import the libraries as shown below
from tensorflow.keras.layers import Input, Lambda, Dense, Flatten
from tensorflow.keras.models import Model
from tensorflow.keras.applications.resnet50 import ResNet50
#from keras.applications.vgg16 import VGG16
from tensorflow.keras.applications.resnet50 import preprocess_input
from tensorflow.keras.preprocessing import image
from tensorflow.keras.preprocessing.image import ImageDataGenerator, load_img
from tensorflow.keras.models import Sequential
import numpy as np
from glob import glob
#import matplotlib.pyplot as plt

# re-size all the images to this
IMAGE_SIZE = [224, 224]

train_path = 'Datasets/train'
valid_path = 'Datasets/test'

# Import the ResNet50 model, using imagenet weights and
# excluding the top classification layers
resnet = ResNet50(input_shape=IMAGE_SIZE + [3], weights='imagenet', include_top=False)

# don't train existing weights
for layer in resnet.layers:
    layer.trainable = False

# useful for getting number of output classes
folders = glob('Datasets/train/*')

# our layers - you can add more if you want
x = Flatten()(resnet.output)
prediction = Dense(len(folders), activation='softmax')(x)

# create a model object
model = Model(inputs=resnet.input, outputs=prediction)

# view the structure of the model
model.summary()

# tell the model what cost and optimization method to use
model.compile(
    loss='categorical_crossentropy',
    optimizer='adam',
    metrics=['accuracy']
)

# Use the Image Data Generator to import the images from the dataset
from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(rescale = 1./255,
                                   shear_range = 0.2,
                                   zoom_range = 0.2,
                                   horizontal_flip = True)

test_datagen = ImageDataGenerator(rescale = 1./255)

# Make sure you provide the same target size as initialized for the image size
training_set = train_datagen.flow_from_directory('Datasets/train',
                                                 target_size = (224, 224),
                                                 batch_size = 32,
                                                 class_mode = 'categorical')

test_set = test_datagen.flow_from_directory('Datasets/test',
                                            target_size = (224, 224),
                                            batch_size = 32,
                                            class_mode = 'categorical')

# fit the model (model.fit accepts generators; fit_generator is deprecated)
# Run the cell. It will take some time to execute
r = model.fit(
    training_set,
    validation_data=test_set,
    epochs=10,
    steps_per_epoch=len(training_set),
    validation_steps=len(test_set)
)

import matplotlib.pyplot as plt

# plot the loss (save before show, otherwise the saved figure is blank)
plt.plot(r.history['loss'], label='train loss')
plt.plot(r.history['val_loss'], label='val loss')
plt.legend()
plt.savefig('LossVal_loss')
plt.show()

# plot the accuracy
plt.plot(r.history['accuracy'], label='train acc')
plt.plot(r.history['val_accuracy'], label='val acc')
plt.legend()
plt.savefig('AccVal_acc')
plt.show()

from tensorflow.keras.models import load_model

model.save('model_resnet50.h5')

y_pred = model.predict(test_set)
y_pred

import numpy as np
y_pred = np.argmax(y_pred, axis=1)
y_pred

from tensorflow.keras.models import load_model
from tensorflow.keras.preprocessing import image

# load the model that was saved above
model = load_model('model_resnet50.h5')

from PIL import Image
img_data = np.random.random(size=(100, 100, 3))
img = tf.keras.preprocessing.image.array_to_img(img_data)
array = tf.keras.preprocessing.image.img_to_array(img)
img_data

img = image.load_img('Datasets/test/Covid/1-s2.0-S0929664620300449-gr2_lrg-a.jpg', target_size=(224, 224))
x = image.img_to_array(img)
x
x.shape

# match the training preprocessing: the generators used only rescale = 1./255,
# so we apply the same scaling here (not ResNet's preprocess_input)
x = x/255

import numpy as np
x = np.expand_dims(x, axis=0)
x.shape

model.predict(x)
a = np.argmax(model.predict(x), axis=1)
a == 0
```
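The single-image inference steps above (scale to [0, 1], add a batch axis, take the argmax of the softmax output) can be checked with plain numpy, independent of the trained model. The probability vector below is made up for illustration:

```python
import numpy as np

# Stand-in for a loaded image: values in [0, 255], shape (224, 224, 3)
x = np.random.randint(0, 256, size=(224, 224, 3)).astype("float32")

# 1. scale to [0, 1], matching the training-time rescale = 1./255
x = x / 255.0

# 2. add the batch dimension the model expects: (1, 224, 224, 3)
x = np.expand_dims(x, axis=0)
print(x.shape)  # -> (1, 224, 224, 3)

# 3. a made-up softmax output for a 2-class problem (e.g. Covid / Normal)
pred = np.array([[0.91, 0.09]])
class_idx = np.argmax(pred, axis=1)
print(class_idx[0])  # -> 0
```

`np.argmax(..., axis=1)` returns one class index per row of the batch, which is why the notebook compares `a == 0` elementwise rather than with a scalar equality.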