Infer the domain from a collection of terms. The algorithm for inferring domains is as follows: - If all input terms have a domain of GENERIC, the result is GENERIC. - If there is exactly one non-generic domain in the input terms, the result is that domain. - Otherwise, an AmbiguousDomain error is raised. Parameters ---------- terms : iterable[zipline.pipeline.term.Term] Returns ------- inferred : Domain or NotSpecified Raises ------ AmbiguousDomain Raised if more than one concrete domain is present in the input terms. def infer_domain(terms): """ Infer the domain from a collection of terms. The algorithm for inferring domains is as follows: - If all input terms have a domain of GENERIC, the result is GENERIC. - If there is exactly one non-generic domain in the input terms, the result is that domain. - Otherwise, an AmbiguousDomain error is raised. Parameters ---------- terms : iterable[zipline.pipeline.term.Term] Returns ------- inferred : Domain or NotSpecified Raises ------ AmbiguousDomain Raised if more than one concrete domain is present in the input terms. """ domains = {t.domain for t in terms} num_domains = len(domains) if num_domains == 0: return GENERIC elif num_domains == 1: return domains.pop() elif num_domains == 2 and GENERIC in domains: domains.remove(GENERIC) return domains.pop() else: # Remove GENERIC if it's present before raising. Showing it to the user # is confusing because it doesn't contribute to the error. domains.discard(GENERIC) raise AmbiguousDomain(sorted(domains, key=repr))
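The three resolution rules above are easy to exercise in isolation. Below is a minimal, self-contained sketch of the same rules using stand-in term objects and a string GENERIC sentinel; zipline's real Term, Domain, and AmbiguousDomain types are not imported here, so the names are illustrative only.

from collections import namedtuple

# Stand-ins for illustration; not zipline's real classes.
Term = namedtuple('Term', 'domain')
GENERIC = 'GENERIC'


def infer_domain_sketch(terms):
    """Return the single concrete domain shared by ``terms``, or GENERIC."""
    domains = {t.domain for t in terms}
    domains.discard(GENERIC)
    if not domains:
        return GENERIC        # Rule 1: everything was generic (or terms was empty).
    if len(domains) == 1:
        return domains.pop()  # Rule 2: exactly one concrete domain wins.
    # Rule 3: two or more concrete domains is ambiguous.
    raise ValueError('ambiguous domains: %r' % sorted(domains))


print(infer_domain_sketch([Term(GENERIC), Term('US_EQUITIES')]))  # US_EQUITIES
print(infer_domain_sketch([Term(GENERIC), Term(GENERIC)]))        # GENERIC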
Given a date, align it to the calendar of the pipeline's domain. Parameters ---------- dt : pd.Timestamp Returns ------- pd.Timestamp def roll_forward(self, dt): """ Given a date, align it to the calendar of the pipeline's domain. Parameters ---------- dt : pd.Timestamp Returns ------- pd.Timestamp """ dt = pd.Timestamp(dt, tz='UTC') trading_days = self.all_sessions() try: return trading_days[trading_days.searchsorted(dt)] except IndexError: raise ValueError( "Date {} was past the last session for domain {}. " "The last session for this domain is {}.".format( dt.date(), self, trading_days[-1].date() ) )
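The core of roll_forward is a searchsorted lookup against the calendar's sessions: find the first session at or after the given date, and fail if there is none. A small sketch with a hand-built DatetimeIndex standing in for all_sessions() (the dates are made up, not a real trading calendar):

import pandas as pd

# Stand-in for the domain's trading sessions.
sessions = pd.DatetimeIndex(['2020-01-02', '2020-01-03', '2020-01-06'], tz='UTC')


def roll_forward_sketch(dt):
    dt = pd.Timestamp(dt, tz='UTC')
    try:
        # searchsorted returns the position of the first session >= dt.
        return sessions[sessions.searchsorted(dt)]
    except IndexError:
        raise ValueError('%s is past the last session (%s)'
                         % (dt.date(), sessions[-1].date()))


print(roll_forward_sketch('2020-01-04'))  # Saturday rolls forward to 2020-01-06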
Returns the date index and sid columns shared by a list of dataframes, ensuring they all match. Parameters ---------- frames : list[pd.DataFrame] A list of dataframes indexed by day, with a column per sid. Returns ------- days : np.array[datetime64[ns]] The days in these dataframes. sids : np.array[int64] The sids in these dataframes. Raises ------ ValueError If the dataframes passed are not all indexed by the same days and sids. def days_and_sids_for_frames(frames): """ Returns the date index and sid columns shared by a list of dataframes, ensuring they all match. Parameters ---------- frames : list[pd.DataFrame] A list of dataframes indexed by day, with a column per sid. Returns ------- days : np.array[datetime64[ns]] The days in these dataframes. sids : np.array[int64] The sids in these dataframes. Raises ------ ValueError If the dataframes passed are not all indexed by the same days and sids. """ if not frames: days = np.array([], dtype='datetime64[ns]') sids = np.array([], dtype='int64') return days, sids # Ensure the indices and columns all match. check_indexes_all_same( [frame.index for frame in frames], message='Frames have mismatched days.', ) check_indexes_all_same( [frame.columns for frame in frames], message='Frames have mismatched sids.', ) return frames[0].index.values, frames[0].columns.values
Parameters ---------- frames : dict[str, pd.DataFrame] A dict mapping each OHLCV field to a dataframe with a row for each date and a column for each sid, as passed to write(). Returns ------- start_date_ixs : np.array[int64] The index of the first date with non-nan values, for each sid. end_date_ixs : np.array[int64] The index of the last date with non-nan values, for each sid. def compute_asset_lifetimes(frames): """ Parameters ---------- frames : dict[str, pd.DataFrame] A dict mapping each OHLCV field to a dataframe with a row for each date and a column for each sid, as passed to write(). Returns ------- start_date_ixs : np.array[int64] The index of the first date with non-nan values, for each sid. end_date_ixs : np.array[int64] The index of the last date with non-nan values, for each sid. """ # Build a 2D array (dates x sids), where an entry is True if all # fields are nan for the given day and sid. is_null_matrix = np.logical_and.reduce( [frames[field].isnull().values for field in FIELDS], ) if not is_null_matrix.size: empty = np.array([], dtype='int64') return empty, empty.copy() # Offset of the first non-null value (first date with data) from the start of the input. start_date_ixs = is_null_matrix.argmin(axis=0) # Offset of the last non-null value from the **end** of the input. end_offsets = is_null_matrix[::-1].argmin(axis=0) # Offset of the last non-null value from the start of the input end_date_ixs = is_null_matrix.shape[0] - end_offsets - 1 return start_date_ixs, end_date_ixs
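The argmin trick above is easiest to see on a tiny input. A self-contained sketch with a made-up null matrix (rows are dates, columns are sids; True means every OHLCV field was NaN for that day and sid):

import numpy as np

is_null = np.array([
    [True,  True],    # day 0: neither sid has data yet
    [False, True],    # day 1: sid 0 starts trading
    [False, False],   # day 2: sid 1 starts trading
    [True,  False],   # day 3: sid 0 has already stopped
])

# argmin finds the first False (first day with data) in each column.
start_ixs = is_null.argmin(axis=0)
# Reverse the rows, find the first False from the end, then convert that
# offset back into an index measured from the start.
end_ixs = is_null.shape[0] - is_null[::-1].argmin(axis=0) - 1

print(start_ixs)  # [1 2]
print(end_ixs)    # [2 3]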
Write the OHLCV data for one country to the HDF5 file. Parameters ---------- country_code : str The ISO 3166 alpha-2 country code for this country. frames : dict[str, pd.DataFrame] A dict mapping each OHLCV field to a dataframe with a row for each date and a column for each sid. The dataframes need to have the same index and columns. scaling_factors : dict[str, float], optional A dict mapping each OHLCV field to a scaling factor, which is applied (as a multiplier) to the values of field to efficiently store them as uint32, while maintaining desired precision. These factors are written to the file as metadata, which is consumed by the reader to adjust back to the original float values. Default is None, in which case DEFAULT_SCALING_FACTORS is used. def write(self, country_code, frames, scaling_factors=None): """Write the OHLCV data for one country to the HDF5 file. Parameters ---------- country_code : str The ISO 3166 alpha-2 country code for this country. frames : dict[str, pd.DataFrame] A dict mapping each OHLCV field to a dataframe with a row for each date and a column for each sid. The dataframes need to have the same index and columns. scaling_factors : dict[str, float], optional A dict mapping each OHLCV field to a scaling factor, which is applied (as a multiplier) to the values of field to efficiently store them as uint32, while maintaining desired precision. These factors are written to the file as metadata, which is consumed by the reader to adjust back to the original float values. Default is None, in which case DEFAULT_SCALING_FACTORS is used. """ if scaling_factors is None: scaling_factors = DEFAULT_SCALING_FACTORS with self.h5_file(mode='a') as h5_file: # ensure that the file version has been written h5_file.attrs['version'] = VERSION country_group = h5_file.create_group(country_code) data_group = country_group.create_group(DATA) index_group = country_group.create_group(INDEX) lifetimes_group = country_group.create_group(LIFETIMES) # Note that this functions validates that all of the frames # share the same days and sids. days, sids = days_and_sids_for_frames(list(frames.values())) # Write sid and date indices. index_group.create_dataset(SID, data=sids) # h5py does not support datetimes, so they need to be stored # as integers. index_group.create_dataset(DAY, data=days.astype(np.int64)) log.debug( 'Wrote {} group to file {}', index_group.name, self._filename, ) # Write start and end dates for each sid. start_date_ixs, end_date_ixs = compute_asset_lifetimes(frames) lifetimes_group.create_dataset(START_DATE, data=start_date_ixs) lifetimes_group.create_dataset(END_DATE, data=end_date_ixs) if len(sids): chunks = (len(sids), min(self._date_chunk_size, len(days))) else: # h5py crashes if we provide chunks for empty data. chunks = None for field in FIELDS: frame = frames[field] # Sort rows by increasing sid, and columns by increasing date. frame.sort_index(inplace=True) frame.sort_index(axis='columns', inplace=True) data = coerce_to_uint32( frame.T.fillna(0).values, scaling_factors[field], ) dataset = data_group.create_dataset( field, compression='lzf', shuffle=True, data=data, chunks=chunks, ) dataset.attrs[SCALING_FACTOR] = scaling_factors[field] log.debug( 'Writing dataset {} to file {}', dataset.name, self._filename )
Parameters ---------- country_code : str The ISO 3166 alpha-2 country code for this country. data : iterable[tuple[int, pandas.DataFrame]] The data chunks to write. Each chunk should be a tuple of sid and the data for that asset. scaling_factors : dict[str, float], optional A dict mapping each OHLCV field to a scaling factor, which is applied (as a multiplier) to the values of field to efficiently store them as uint32, while maintaining desired precision. These factors are written to the file as metadata, which is consumed by the reader to adjust back to the original float values. Default is None, in which case DEFAULT_SCALING_FACTORS is used. def write_from_sid_df_pairs(self, country_code, data, scaling_factors=None): """ Parameters ---------- country_code : str The ISO 3166 alpha-2 country code for this country. data : iterable[tuple[int, pandas.DataFrame]] The data chunks to write. Each chunk should be a tuple of sid and the data for that asset. scaling_factors : dict[str, float], optional A dict mapping each OHLCV field to a scaling factor, which is applied (as a multiplier) to the values of field to efficiently store them as uint32, while maintaining desired precision. These factors are written to the file as metadata, which is consumed by the reader to adjust back to the original float values. Default is None, in which case DEFAULT_SCALING_FACTORS is used. """ data = list(data) if not data: empty_frame = pd.DataFrame( data=None, index=np.array([], dtype='datetime64[ns]'), columns=np.array([], dtype='int64'), ) return self.write( country_code, {f: empty_frame.copy() for f in FIELDS}, scaling_factors, ) sids, frames = zip(*data) ohlcv_frame = pd.concat(frames) # Repeat each sid for each row in its corresponding frame. sid_ix = np.repeat(sids, [len(f) for f in frames]) # Add id to the index, so the frame is indexed by (date, id). ohlcv_frame.set_index(sid_ix, append=True, inplace=True) frames = { field: ohlcv_frame[field].unstack() for field in FIELDS } return self.write(country_code, frames, scaling_factors)
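The reshaping in write_from_sid_df_pairs (concatenate the per-sid frames, append the sid as a second index level, then unstack each field into a date x sid matrix) can be sketched on toy data; only a single 'close' field and two made-up sids are used here:

import numpy as np
import pandas as pd

dates = pd.to_datetime(['2020-01-02', '2020-01-03'])
frame_1 = pd.DataFrame({'close': [10.0, 11.0]}, index=dates)
frame_2 = pd.DataFrame({'close': [20.0, 21.0]}, index=dates)

sids, frames = (1, 2), (frame_1, frame_2)
combined = pd.concat(frames)

# Repeat each sid once per row of its frame and append it to the index,
# producing a (date, sid) MultiIndex.
sid_ix = np.repeat(sids, [len(f) for f in frames])
combined.set_index(sid_ix, append=True, inplace=True)

# Unstacking moves the sid level into the columns: one date x sid matrix
# per field, which is the shape write() expects.
print(combined['close'].unstack())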
Construct from an h5py.File and a country code. Parameters ---------- h5_file : h5py.File An HDF5 daily pricing file. country_code : str The ISO 3166 alpha-2 country code for the country to read. def from_file(cls, h5_file, country_code): """ Construct from an h5py.File and a country code. Parameters ---------- h5_file : h5py.File An HDF5 daily pricing file. country_code : str The ISO 3166 alpha-2 country code for the country to read. """ if h5_file.attrs['version'] != VERSION: raise ValueError( 'mismatched version: file is of version %s, expected %s' % ( h5_file.attrs['version'], VERSION, ), ) return cls(h5_file[country_code])
Construct from a file path and a country code. Parameters ---------- path : str The path to an HDF5 daily pricing file. country_code : str The ISO 3166 alpha-2 country code for the country to read. def from_path(cls, path, country_code): """ Construct from a file path and a country code. Parameters ---------- path : str The path to an HDF5 daily pricing file. country_code : str The ISO 3166 alpha-2 country code for the country to read. """ return cls.from_file(h5py.File(path), country_code)
Parameters ---------- columns : list of str 'open', 'high', 'low', 'close', or 'volume' start_date: Timestamp Beginning of the window range. end_date: Timestamp End of the window range. assets : list of int The asset identifiers in the window. Returns ------- list of np.ndarray A list with an entry per field of ndarrays with shape (minutes in range, sids) with a dtype of float64, containing the values for the respective field over start and end dt range. def load_raw_arrays(self, columns, start_date, end_date, assets): """ Parameters ---------- columns : list of str 'open', 'high', 'low', 'close', or 'volume' start_date: Timestamp Beginning of the window range. end_date: Timestamp End of the window range. assets : list of int The asset identifiers in the window. Returns ------- list of np.ndarray A list with an entry per field of ndarrays with shape (minutes in range, sids) with a dtype of float64, containing the values for the respective field over start and end dt range. """ self._validate_timestamp(start_date) self._validate_timestamp(end_date) start = start_date.asm8 end = end_date.asm8 date_slice = self._compute_date_range_slice(start, end) n_dates = date_slice.stop - date_slice.start # Create a buffer into which we'll read data from the h5 file. # Allocate an extra row of space that will always contain null values. # We'll use that space to provide "data" for entries in ``assets`` that # are unknown to us. full_buf = np.zeros((len(self.sids) + 1, n_dates), dtype=np.uint32) # We'll only read values into this portion of the read buf. mutable_buf = full_buf[:-1] # Indexer that converts an array aligned to self.sids (which is what we # pull from the h5 file) into an array aligned to ``assets``. # # Unknown assets will have an index of -1, which means they'll always # pull from the last row of the read buffer. We allocated an extra # empty row above so that these lookups will cause us to fill our # output buffer with "null" values. sid_selector = self._make_sid_selector(assets) out = [] for column in columns: # Zero the buffer to prepare to receive new data. mutable_buf.fill(0) dataset = self._country_group[DATA][column] # Fill the mutable portion of our buffer with data from the file. dataset.read_direct( mutable_buf, np.s_[:, date_slice], ) # Select data from the **full buffer**. Unknown assets will pull # from the last row, which is always empty. out.append(self._postprocessors[column](full_buf[sid_selector].T)) return out
Build an indexer mapping ``self.sids`` to ``assets``. Parameters ---------- assets : list[int] List of assets requested by a caller of ``load_raw_arrays``. Returns ------- index : np.array[int64] Index array containing the index in ``self.sids`` for each location in ``assets``. Entries in ``assets`` for which we don't have a sid will contain -1. It is caller's responsibility to handle these values correctly. def _make_sid_selector(self, assets): """ Build an indexer mapping ``self.sids`` to ``assets``. Parameters ---------- assets : list[int] List of assets requested by a caller of ``load_raw_arrays``. Returns ------- index : np.array[int64] Index array containing the index in ``self.sids`` for each location in ``assets``. Entries in ``assets`` for which we don't have a sid will contain -1. It is caller's responsibility to handle these values correctly. """ assets = np.array(assets) sid_selector = self.sids.searchsorted(assets) unknown = np.in1d(assets, self.sids, invert=True) sid_selector[unknown] = -1 return sid_selector
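The searchsorted-plus-sentinel pattern is what lets load_raw_arrays serve requests for sids that are not in the file: unknown assets get index -1, which selects the extra all-null row appended to the read buffer. A small numpy sketch with made-up sids:

import numpy as np

sids = np.array([1, 3, 7])            # sids present in the file (sorted)
assets = np.array([3, 5, 7])          # sids requested by a caller

selector = sids.searchsorted(assets)  # [1, 2, 2]; 5 is a false match
unknown = np.in1d(assets, sids, invert=True)
selector[unknown] = -1                # [1, -1, 2]

# Buffer shaped (len(sids) + 1, n_dates) with an extra all-zero row at the
# end; index -1 pulls from that sentinel row for the unknown asset.
full_buf = np.array([[10.0], [30.0], [70.0], [0.0]])
print(full_buf[selector].T)           # [[30.  0. 70.]]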
Validate that asset identifiers are contained in the daily bars. Parameters ---------- assets : array-like[int] The asset identifiers to validate. Raises ------ NoDataForSid If one or more of the provided asset identifiers are not contained in the daily bars. def _validate_assets(self, assets): """Validate that asset identifiers are contained in the daily bars. Parameters ---------- assets : array-like[int] The asset identifiers to validate. Raises ------ NoDataForSid If one or more of the provided asset identifiers are not contained in the daily bars. """ missing_sids = np.setdiff1d(assets, self.sids) if len(missing_sids): raise NoDataForSid( 'Assets not contained in daily pricing file: {}'.format( missing_sids ) )
Retrieve the value at the given coordinates. Parameters ---------- sid : int The asset identifier. dt : pd.Timestamp The timestamp for the desired data point. field : string The OHLCV name for the desired data point. Returns ------- value : float|int The value at the given coordinates, ``float`` for OHLC, ``int`` for 'volume'. Raises ------ NoDataOnDate If the given dt is not a valid market minute (in minute mode) or session (in daily mode) according to this reader's trading calendar. def get_value(self, sid, dt, field): """ Retrieve the value at the given coordinates. Parameters ---------- sid : int The asset identifier. dt : pd.Timestamp The timestamp for the desired data point. field : string The OHLCV name for the desired data point. Returns ------- value : float|int The value at the given coordinates, ``float`` for OHLC, ``int`` for 'volume'. Raises ------ NoDataOnDate If the given dt is not a valid market minute (in minute mode) or session (in daily mode) according to this reader's trading calendar. """ self._validate_assets([sid]) self._validate_timestamp(dt) sid_ix = self.sids.searchsorted(sid) dt_ix = self.dates.searchsorted(dt.asm8) value = self._postprocessors[field]( self._country_group[DATA][field][sid_ix, dt_ix] ) # When the value is nan, this dt may be outside the asset's lifetime. # If that's the case, the proper NoDataOnDate exception is raised. # Otherwise (when there's just a hole in the middle of the data), the # nan is returned. if np.isnan(value): if dt.asm8 < self.asset_start_dates[sid_ix]: raise NoDataBeforeDate() if dt.asm8 > self.asset_end_dates[sid_ix]: raise NoDataAfterDate() return value
Get the latest day on or before ``dt`` in which ``asset`` traded. If there are no trades on or before ``dt``, returns ``pd.NaT``. Parameters ---------- asset : zipline.asset.Asset The asset for which to get the last traded day. dt : pd.Timestamp The dt at which to start searching for the last traded day. Returns ------- last_traded : pd.Timestamp The day of the last trade for the given asset, using the input dt as a vantage point. def get_last_traded_dt(self, asset, dt): """ Get the latest day on or before ``dt`` in which ``asset`` traded. If there are no trades on or before ``dt``, returns ``pd.NaT``. Parameters ---------- asset : zipline.asset.Asset The asset for which to get the last traded day. dt : pd.Timestamp The dt at which to start searching for the last traded day. Returns ------- last_traded : pd.Timestamp The day of the last trade for the given asset, using the input dt as a vantage point. """ sid_ix = self.sids.searchsorted(asset.sid) # Used to get a slice of all dates up to and including ``dt``. dt_limit_ix = self.dates.searchsorted(dt.asm8, side='right') # Get the indices of all dates with nonzero volume. nonzero_volume_ixs = np.ravel( np.nonzero(self._country_group[DATA][VOLUME][sid_ix, :dt_limit_ix]) ) if len(nonzero_volume_ixs) == 0: return pd.NaT return pd.Timestamp(self.dates[nonzero_volume_ixs][-1], tz='UTC')
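The lookup reduces to "index of the last nonzero volume at or before dt". The same slicing on plain arrays, with made-up dates and volumes:

import numpy as np
import pandas as pd

dates = np.array(['2020-01-02', '2020-01-03', '2020-01-06', '2020-01-07'],
                 dtype='datetime64[ns]')
volume = np.array([100, 0, 250, 0])   # one asset's daily volume

dt = pd.Timestamp('2020-01-07')
# side='right' makes the slice include dt itself.
limit = dates.searchsorted(dt.asm8, side='right')

nonzero = np.flatnonzero(volume[:limit])
last_traded = pd.NaT if len(nonzero) == 0 else pd.Timestamp(dates[nonzero[-1]], tz='UTC')
print(last_traded)                    # 2020-01-06 00:00:00+00:00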
Construct from an h5py.File. Parameters ---------- h5_file : h5py.File An HDF5 daily pricing file. def from_file(cls, h5_file): """ Construct from an h5py.File. Parameters ---------- h5_file : h5py.File An HDF5 daily pricing file. """ return cls({ country: HDF5DailyBarReader.from_file(h5_file, country) for country in h5_file.keys() })
Parameters ---------- columns : list of str 'open', 'high', 'low', 'close', or 'volume' start_date: Timestamp Beginning of the window range. end_date: Timestamp End of the window range. assets : list of int The asset identifiers in the window. Returns ------- list of np.ndarray A list with an entry per field of ndarrays with shape (minutes in range, sids) with a dtype of float64, containing the values for the respective field over start and end dt range. def load_raw_arrays(self, columns, start_date, end_date, assets): """ Parameters ---------- columns : list of str 'open', 'high', 'low', 'close', or 'volume' start_date: Timestamp Beginning of the window range. end_date: Timestamp End of the window range. assets : list of int The asset identifiers in the window. Returns ------- list of np.ndarray A list with an entry per field of ndarrays with shape (minutes in range, sids) with a dtype of float64, containing the values for the respective field over start and end dt range. """ country_code = self._country_code_for_assets(assets) return self._readers[country_code].load_raw_arrays( columns, start_date, end_date, assets, )
Returns ------- sessions : DatetimeIndex All session labels (unioning the range for all assets) which the reader can provide. def sessions(self): """ Returns ------- sessions : DatetimeIndex All session labels (unioning the range for all assets) which the reader can provide. """ return pd.to_datetime( reduce( np.union1d, (reader.dates for reader in self._readers.values()), ), utc=True, )
Retrieve the value at the given coordinates. Parameters ---------- sid : int The asset identifier. dt : pd.Timestamp The timestamp for the desired data point. field : string The OHLCV name for the desired data point. Returns ------- value : float|int The value at the given coordinates, ``float`` for OHLC, ``int`` for 'volume'. Raises ------ NoDataOnDate If the given dt is not a valid market minute (in minute mode) or session (in daily mode) according to this reader's trading calendar. NoDataForSid If the given sid is not valid. def get_value(self, sid, dt, field): """ Retrieve the value at the given coordinates. Parameters ---------- sid : int The asset identifier. dt : pd.Timestamp The timestamp for the desired data point. field : string The OHLCV name for the desired data point. Returns ------- value : float|int The value at the given coordinates, ``float`` for OHLC, ``int`` for 'volume'. Raises ------ NoDataOnDate If the given dt is not a valid market minute (in minute mode) or session (in daily mode) according to this reader's trading calendar. NoDataForSid If the given sid is not valid. """ try: country_code = self._country_code_for_assets([sid]) except ValueError as exc: raise_from( NoDataForSid( 'Asset not contained in daily pricing file: {}'.format(sid) ), exc ) return self._readers[country_code].get_value(sid, dt, field)
Get the latest day on or before ``dt`` in which ``asset`` traded. If there are no trades on or before ``dt``, returns ``pd.NaT``. Parameters ---------- asset : zipline.asset.Asset The asset for which to get the last traded day. dt : pd.Timestamp The dt at which to start searching for the last traded day. Returns ------- last_traded : pd.Timestamp The day of the last trade for the given asset, using the input dt as a vantage point. def get_last_traded_dt(self, asset, dt): """ Get the latest day on or before ``dt`` in which ``asset`` traded. If there are no trades on or before ``dt``, returns ``pd.NaT``. Parameters ---------- asset : zipline.asset.Asset The asset for which to get the last traded day. dt : pd.Timestamp The dt at which to start searching for the last traded day. Returns ------- last_traded : pd.Timestamp The day of the last trade for the given asset, using the input dt as a vantage point. """ country_code = self._country_code_for_assets([asset.sid]) return self._readers[country_code].get_last_traded_dt(asset, dt)
Update dataframes in place to set identifier columns as indices. For each input frame, if the frame has a column with the same name as its associated index column, set that column as the index. Otherwise, assume the index already contains identifiers. If frames are passed as None, they're ignored. def _normalize_index_columns_in_place(equities, equity_supplementary_mappings, futures, exchanges, root_symbols): """ Update dataframes in place to set identifier columns as indices. For each input frame, if the frame has a column with the same name as its associated index column, set that column as the index. Otherwise, assume the index already contains identifiers. If frames are passed as None, they're ignored. """ for frame, column_name in ((equities, 'sid'), (equity_supplementary_mappings, 'sid'), (futures, 'sid'), (exchanges, 'exchange'), (root_symbols, 'root_symbol')): if frame is not None and column_name in frame: frame.set_index(column_name, inplace=True)
Takes in a symbol that may be delimited and splits it into a company symbol and share class symbol. Parameters ---------- symbol : str The possibly-delimited symbol to be split Returns ------- company_symbol : str The company part of the symbol. share_class_symbol : str The share class part of a symbol. def split_delimited_symbol(symbol): """ Takes in a symbol that may be delimited and splits it into a company symbol and share class symbol. Parameters ---------- symbol : str The possibly-delimited symbol to be split Returns ------- company_symbol : str The company part of the symbol. share_class_symbol : str The share class part of a symbol. """ # return blank strings for any bad fuzzy symbols, like NaN or None if symbol in _delimited_symbol_default_triggers: return '', '' symbol = symbol.upper() split_list = re.split( pattern=_delimited_symbol_delimiters_regex, string=symbol, maxsplit=1, ) # Break the list up into its two components, the company symbol and the # share class symbol company_symbol = split_list[0] if len(split_list) > 1: share_class_symbol = split_list[1] else: share_class_symbol = '' return company_symbol, share_class_symbol
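A sketch of the same split on toy input. The delimiter pattern below is an assumption standing in for _delimited_symbol_delimiters_regex, which is defined elsewhere in the module and not shown in this excerpt:

import re

# Assumed delimiters, for illustration only; the real regex may differ.
DELIMITERS = re.compile(r'[./\-_]')


def split_delimited_symbol_sketch(symbol):
    if symbol is None:
        return '', ''
    parts = DELIMITERS.split(symbol.upper(), maxsplit=1)
    company = parts[0]
    share_class = parts[1] if len(parts) > 1 else ''
    return company, share_class


print(split_delimited_symbol_sketch('BRK.A'))  # ('BRK', 'A')
print(split_delimited_symbol_sketch('AAPL'))   # ('AAPL', '')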
Generates an output dataframe from the given subset of user-provided data, the given column names, and the given default values. Parameters ---------- data_subset : DataFrame A DataFrame, usually from an AssetData object, that contains the user's input metadata for the asset type being processed defaults : dict A dict where the keys are the names of the columns of the desired output DataFrame and the values are a function from dataframe and column name to the default values to insert in the DataFrame if no user data is provided Returns ------- DataFrame A DataFrame containing all user-provided metadata, and default values wherever user-provided metadata was missing def _generate_output_dataframe(data_subset, defaults): """ Generates an output dataframe from the given subset of user-provided data, the given column names, and the given default values. Parameters ---------- data_subset : DataFrame A DataFrame, usually from an AssetData object, that contains the user's input metadata for the asset type being processed defaults : dict A dict where the keys are the names of the columns of the desired output DataFrame and the values are a function from dataframe and column name to the default values to insert in the DataFrame if no user data is provided Returns ------- DataFrame A DataFrame containing all user-provided metadata, and default values wherever user-provided metadata was missing """ # The columns provided. cols = set(data_subset.columns) desired_cols = set(defaults) # Drop columns with unrecognised headers. data_subset.drop(cols - desired_cols, axis=1, inplace=True) # Get those columns which we need but # for which no data has been supplied. for col in desired_cols - cols: # write the default value for any missing columns data_subset[col] = defaults[col](data_subset, col) return data_subset
Check that there are no cases where multiple symbols resolve to the same asset at the same time in the same country. Parameters ---------- df : pd.DataFrame The equity symbol mappings table. exchanges : pd.DataFrame The exchanges table. asset_exchange : pd.Series A series that maps sids to the exchange the asset is in. Raises ------ ValueError Raised when there are ambiguous symbol mappings. def _check_symbol_mappings(df, exchanges, asset_exchange): """Check that there are no cases where multiple symbols resolve to the same asset at the same time in the same country. Parameters ---------- df : pd.DataFrame The equity symbol mappings table. exchanges : pd.DataFrame The exchanges table. asset_exchange : pd.Series A series that maps sids to the exchange the asset is in. Raises ------ ValueError Raised when there are ambiguous symbol mappings. """ mappings = df.set_index('sid')[list(mapping_columns)].copy() mappings['country_code'] = exchanges['country_code'][ asset_exchange.loc[df['sid']] ].values ambigious = {} def check_intersections(persymbol): intersections = list(intersecting_ranges(map( from_tuple, zip(persymbol.start_date, persymbol.end_date), ))) if intersections: data = persymbol[ ['start_date', 'end_date'] ].astype('datetime64[ns]') # indent the dataframe string, also compute this early because # ``persymbol`` is a view and ``astype`` doesn't copy the index # correctly in pandas 0.22 msg_component = '\n '.join(str(data).splitlines()) ambigious[persymbol.name] = intersections, msg_component mappings.groupby(['symbol', 'country_code']).apply(check_intersections) if ambigious: raise ValueError( 'Ambiguous ownership for %d symbol%s, multiple assets held the' ' following symbols:\n%s' % ( len(ambigious), '' if len(ambigious) == 1 else 's', '\n'.join( '%s (%s):\n intersections: %s\n %s' % ( symbol, country_code, tuple(map(_format_range, intersections)), cs, ) for (symbol, country_code), (intersections, cs) in sorted( ambigious.items(), key=first, ), ), ) )
Split out the symbol: sid mappings from the raw data. Parameters ---------- df : pd.DataFrame The dataframe with multiple rows for each symbol: sid pair. exchanges : pd.DataFrame The exchanges table. Returns ------- asset_info : pd.DataFrame The asset info with one row per asset. symbol_mappings : pd.DataFrame The dataframe of just symbol: sid mappings. The index will be the sid, then there will be three columns: symbol, start_date, and end_date. def _split_symbol_mappings(df, exchanges): """Split out the symbol: sid mappings from the raw data. Parameters ---------- df : pd.DataFrame The dataframe with multiple rows for each symbol: sid pair. exchanges : pd.DataFrame The exchanges table. Returns ------- asset_info : pd.DataFrame The asset info with one row per asset. symbol_mappings : pd.DataFrame The dataframe of just symbol: sid mappings. The index will be the sid, then there will be three columns: symbol, start_date, and end_date. """ mappings = df[list(mapping_columns)] with pd.option_context('mode.chained_assignment', None): mappings['sid'] = mappings.index mappings.reset_index(drop=True, inplace=True) # take the most recent sid->exchange mapping based on end date asset_exchange = df[ ['exchange', 'end_date'] ].sort_values('end_date').groupby(level=0)['exchange'].nth(-1) _check_symbol_mappings(mappings, exchanges, asset_exchange) return ( df.groupby(level=0).apply(_check_asset_group), mappings, )
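The "most recent sid -> exchange" step is a small groupby idiom worth isolating: sort by end_date, then take the last row per sid with nth(-1). A toy sketch with made-up sids and exchange names:

import pandas as pd

df = pd.DataFrame(
    {
        'exchange': ['NYSE', 'NASDAQ', 'NYSE'],
        'end_date': pd.to_datetime(['2015-01-01', '2020-01-01', '2020-01-01']),
    },
    index=pd.Index([1, 1, 2], name='sid'),
)

# Sort by end_date so that nth(-1) picks the most recent mapping per sid.
asset_exchange = (
    df[['exchange', 'end_date']]
    .sort_values('end_date')
    .groupby(level=0)['exchange']
    .nth(-1)
)
print(asset_exchange)  # sid 1 -> NASDAQ, sid 2 -> NYSE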
Convert a timeseries into an Int64Index of nanoseconds since the epoch. Parameters ---------- dt_series : pd.Series The timeseries to convert. Returns ------- idx : pd.Int64Index The index converted to nanoseconds since the epoch. def _dt_to_epoch_ns(dt_series): """Convert a timeseries into an Int64Index of nanoseconds since the epoch. Parameters ---------- dt_series : pd.Series The timeseries to convert. Returns ------- idx : pd.Int64Index The index converted to nanoseconds since the epoch. """ index = pd.to_datetime(dt_series.values) if index.tzinfo is None: index = index.tz_localize('UTC') else: index = index.tz_convert('UTC') return index.view(np.int64)
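A quick sketch of the conversion, showing that naive timestamps are localized to UTC (aware ones would be converted) before the index is viewed as int64 nanoseconds:

import numpy as np
import pandas as pd

naive = pd.Series(pd.to_datetime(['1970-01-01', '1970-01-02']))

index = pd.to_datetime(naive.values)
index = index.tz_localize('UTC') if index.tzinfo is None else index.tz_convert('UTC')

print(index.view(np.int64))  # [0 86400000000000] nanoseconds since the epoch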
Checks for a version value in the version table. Parameters ---------- conn : sa.Connection The connection to use to perform the check. version_table : sa.Table The version table of the asset database expected_version : int The expected version of the asset database Raises ------ AssetDBVersionError If the version is in the table and not equal to ASSET_DB_VERSION. def check_version_info(conn, version_table, expected_version): """ Checks for a version value in the version table. Parameters ---------- conn : sa.Connection The connection to use to perform the check. version_table : sa.Table The version table of the asset database expected_version : int The expected version of the asset database Raises ------ AssetDBVersionError If the version is in the table and not equal to ASSET_DB_VERSION. """ # Read the version out of the table version_from_table = conn.execute( sa.select((version_table.c.version,)), ).scalar() # A db without a version is considered v0 if version_from_table is None: version_from_table = 0 # Raise an error if the versions do not match if (version_from_table != expected_version): raise AssetDBVersionError(db_version=version_from_table, expected_version=expected_version)
Inserts the version value into the version table. Parameters ---------- conn : sa.Connection The connection to use to execute the insert. version_table : sa.Table The version table of the asset database version_value : int The version to write into the database def write_version_info(conn, version_table, version_value): """ Inserts the version value into the version table. Parameters ---------- conn : sa.Connection The connection to use to execute the insert. version_table : sa.Table The version table of the asset database version_value : int The version to write into the database """ conn.execute(sa.insert(version_table, values={'version': version_value}))
Write asset metadata to a sqlite database in the format that it is stored in the assets db. Parameters ---------- equities : pd.DataFrame, optional The equity metadata. The columns for this dataframe are: symbol : str The ticker symbol for this equity. asset_name : str The full name for this asset. start_date : datetime The date when this asset was created. end_date : datetime, optional The last date we have trade data for this asset. first_traded : datetime, optional The first date we have trade data for this asset. auto_close_date : datetime, optional The date on which to close any positions in this asset. exchange : str The exchange where this asset is traded. The index of this dataframe should contain the sids. futures : pd.DataFrame, optional The future contract metadata. The columns for this dataframe are: symbol : str The ticker symbol for this futures contract. root_symbol : str The root symbol, or the symbol with the expiration stripped out. asset_name : str The full name for this asset. start_date : datetime, optional The date when this asset was created. end_date : datetime, optional The last date we have trade data for this asset. first_traded : datetime, optional The first date we have trade data for this asset. exchange : str The exchange where this asset is traded. notice_date : datetime The date when the owner of the contract may be forced to take physical delivery of the contract's asset. expiration_date : datetime The date when the contract expires. auto_close_date : datetime The date when the broker will automatically close any positions in this contract. tick_size : float The minimum price movement of the contract. multiplier: float The amount of the underlying asset represented by this contract. exchanges : pd.DataFrame, optional The exchanges where assets can be traded. The columns of this dataframe are: exchange : str The full name of the exchange. canonical_name : str The canonical name of the exchange. country_code : str The ISO 3166 alpha-2 country code of the exchange. root_symbols : pd.DataFrame, optional The root symbols for the futures contracts. The columns for this dataframe are: root_symbol : str The root symbol name. root_symbol_id : int The unique id for this root symbol. sector : string, optional The sector of this root symbol. description : string, optional A short description of this root symbol. exchange : str The exchange where this root symbol is traded. equity_supplementary_mappings : pd.DataFrame, optional Additional mappings from values of abitrary type to assets. chunk_size : int, optional The amount of rows to write to the SQLite table at once. This defaults to the default number of bind params in sqlite. If you have compiled sqlite3 with more bind or less params you may want to pass that value here. def write_direct(self, equities=None, equity_symbol_mappings=None, equity_supplementary_mappings=None, futures=None, exchanges=None, root_symbols=None, chunk_size=DEFAULT_CHUNK_SIZE): """Write asset metadata to a sqlite database in the format that it is stored in the assets db. Parameters ---------- equities : pd.DataFrame, optional The equity metadata. The columns for this dataframe are: symbol : str The ticker symbol for this equity. asset_name : str The full name for this asset. start_date : datetime The date when this asset was created. end_date : datetime, optional The last date we have trade data for this asset. first_traded : datetime, optional The first date we have trade data for this asset. 
auto_close_date : datetime, optional The date on which to close any positions in this asset. exchange : str The exchange where this asset is traded. The index of this dataframe should contain the sids. futures : pd.DataFrame, optional The future contract metadata. The columns for this dataframe are: symbol : str The ticker symbol for this futures contract. root_symbol : str The root symbol, or the symbol with the expiration stripped out. asset_name : str The full name for this asset. start_date : datetime, optional The date when this asset was created. end_date : datetime, optional The last date we have trade data for this asset. first_traded : datetime, optional The first date we have trade data for this asset. exchange : str The exchange where this asset is traded. notice_date : datetime The date when the owner of the contract may be forced to take physical delivery of the contract's asset. expiration_date : datetime The date when the contract expires. auto_close_date : datetime The date when the broker will automatically close any positions in this contract. tick_size : float The minimum price movement of the contract. multiplier: float The amount of the underlying asset represented by this contract. exchanges : pd.DataFrame, optional The exchanges where assets can be traded. The columns of this dataframe are: exchange : str The full name of the exchange. canonical_name : str The canonical name of the exchange. country_code : str The ISO 3166 alpha-2 country code of the exchange. root_symbols : pd.DataFrame, optional The root symbols for the futures contracts. The columns for this dataframe are: root_symbol : str The root symbol name. root_symbol_id : int The unique id for this root symbol. sector : string, optional The sector of this root symbol. description : string, optional A short description of this root symbol. exchange : str The exchange where this root symbol is traded. equity_supplementary_mappings : pd.DataFrame, optional Additional mappings from values of abitrary type to assets. chunk_size : int, optional The amount of rows to write to the SQLite table at once. This defaults to the default number of bind params in sqlite. If you have compiled sqlite3 with more bind or less params you may want to pass that value here. """ if equities is not None: equities = _generate_output_dataframe( equities, _direct_equities_defaults, ) if equity_symbol_mappings is None: raise ValueError( 'equities provided with no symbol mapping data', ) equity_symbol_mappings = _generate_output_dataframe( equity_symbol_mappings, _equity_symbol_mappings_defaults, ) _check_symbol_mappings( equity_symbol_mappings, exchanges, equities['exchange'], ) if equity_supplementary_mappings is not None: equity_supplementary_mappings = _generate_output_dataframe( equity_supplementary_mappings, _equity_supplementary_mappings_defaults, ) if futures is not None: futures = _generate_output_dataframe(_futures_defaults, futures) if exchanges is not None: exchanges = _generate_output_dataframe( exchanges.set_index('exchange'), _exchanges_defaults, ) if root_symbols is not None: root_symbols = _generate_output_dataframe( root_symbols, _root_symbols_defaults, ) # Set named identifier columns as indices, if provided. 
_normalize_index_columns_in_place( equities=equities, equity_supplementary_mappings=equity_supplementary_mappings, futures=futures, exchanges=exchanges, root_symbols=root_symbols, ) self._real_write( equities=equities, equity_symbol_mappings=equity_symbol_mappings, equity_supplementary_mappings=equity_supplementary_mappings, futures=futures, exchanges=exchanges, root_symbols=root_symbols, chunk_size=chunk_size, )
Write asset metadata to a sqlite database. Parameters ---------- equities : pd.DataFrame, optional The equity metadata. The columns for this dataframe are: symbol : str The ticker symbol for this equity. asset_name : str The full name for this asset. start_date : datetime The date when this asset was created. end_date : datetime, optional The last date we have trade data for this asset. first_traded : datetime, optional The first date we have trade data for this asset. auto_close_date : datetime, optional The date on which to close any positions in this asset. exchange : str The exchange where this asset is traded. The index of this dataframe should contain the sids. futures : pd.DataFrame, optional The future contract metadata. The columns for this dataframe are: symbol : str The ticker symbol for this futures contract. root_symbol : str The root symbol, or the symbol with the expiration stripped out. asset_name : str The full name for this asset. start_date : datetime, optional The date when this asset was created. end_date : datetime, optional The last date we have trade data for this asset. first_traded : datetime, optional The first date we have trade data for this asset. exchange : str The exchange where this asset is traded. notice_date : datetime The date when the owner of the contract may be forced to take physical delivery of the contract's asset. expiration_date : datetime The date when the contract expires. auto_close_date : datetime The date when the broker will automatically close any positions in this contract. tick_size : float The minimum price movement of the contract. multiplier: float The amount of the underlying asset represented by this contract. exchanges : pd.DataFrame, optional The exchanges where assets can be traded. The columns of this dataframe are: exchange : str The full name of the exchange. canonical_name : str The canonical name of the exchange. country_code : str The ISO 3166 alpha-2 country code of the exchange. root_symbols : pd.DataFrame, optional The root symbols for the futures contracts. The columns for this dataframe are: root_symbol : str The root symbol name. root_symbol_id : int The unique id for this root symbol. sector : string, optional The sector of this root symbol. description : string, optional A short description of this root symbol. exchange : str The exchange where this root symbol is traded. equity_supplementary_mappings : pd.DataFrame, optional Additional mappings from values of abitrary type to assets. chunk_size : int, optional The amount of rows to write to the SQLite table at once. This defaults to the default number of bind params in sqlite. If you have compiled sqlite3 with more bind or less params you may want to pass that value here. See Also -------- zipline.assets.asset_finder def write(self, equities=None, futures=None, exchanges=None, root_symbols=None, equity_supplementary_mappings=None, chunk_size=DEFAULT_CHUNK_SIZE): """Write asset metadata to a sqlite database. Parameters ---------- equities : pd.DataFrame, optional The equity metadata. The columns for this dataframe are: symbol : str The ticker symbol for this equity. asset_name : str The full name for this asset. start_date : datetime The date when this asset was created. end_date : datetime, optional The last date we have trade data for this asset. first_traded : datetime, optional The first date we have trade data for this asset. auto_close_date : datetime, optional The date on which to close any positions in this asset. 
exchange : str The exchange where this asset is traded. The index of this dataframe should contain the sids. futures : pd.DataFrame, optional The future contract metadata. The columns for this dataframe are: symbol : str The ticker symbol for this futures contract. root_symbol : str The root symbol, or the symbol with the expiration stripped out. asset_name : str The full name for this asset. start_date : datetime, optional The date when this asset was created. end_date : datetime, optional The last date we have trade data for this asset. first_traded : datetime, optional The first date we have trade data for this asset. exchange : str The exchange where this asset is traded. notice_date : datetime The date when the owner of the contract may be forced to take physical delivery of the contract's asset. expiration_date : datetime The date when the contract expires. auto_close_date : datetime The date when the broker will automatically close any positions in this contract. tick_size : float The minimum price movement of the contract. multiplier: float The amount of the underlying asset represented by this contract. exchanges : pd.DataFrame, optional The exchanges where assets can be traded. The columns of this dataframe are: exchange : str The full name of the exchange. canonical_name : str The canonical name of the exchange. country_code : str The ISO 3166 alpha-2 country code of the exchange. root_symbols : pd.DataFrame, optional The root symbols for the futures contracts. The columns for this dataframe are: root_symbol : str The root symbol name. root_symbol_id : int The unique id for this root symbol. sector : string, optional The sector of this root symbol. description : string, optional A short description of this root symbol. exchange : str The exchange where this root symbol is traded. equity_supplementary_mappings : pd.DataFrame, optional Additional mappings from values of abitrary type to assets. chunk_size : int, optional The amount of rows to write to the SQLite table at once. This defaults to the default number of bind params in sqlite. If you have compiled sqlite3 with more bind or less params you may want to pass that value here. See Also -------- zipline.assets.asset_finder """ if exchanges is None: exchange_names = [ df['exchange'] for df in (equities, futures, root_symbols) if df is not None ] if exchange_names: exchanges = pd.DataFrame({ 'exchange': pd.concat(exchange_names).unique(), }) data = self._load_data( equities if equities is not None else pd.DataFrame(), futures if futures is not None else pd.DataFrame(), exchanges if exchanges is not None else pd.DataFrame(), root_symbols if root_symbols is not None else pd.DataFrame(), ( equity_supplementary_mappings if equity_supplementary_mappings is not None else pd.DataFrame() ), ) self._real_write( equities=data.equities, equity_symbol_mappings=data.equities_mappings, equity_supplementary_mappings=data.equity_supplementary_mappings, futures=data.futures, root_symbols=data.root_symbols, exchanges=data.exchanges, chunk_size=chunk_size, )
Checks if any tables are present in the current assets database. Parameters ---------- txn : Transaction The open transaction to check in. Returns ------- has_tables : bool True if any tables are present, otherwise False. def _all_tables_present(self, txn): """ Checks if any tables are present in the current assets database. Parameters ---------- txn : Transaction The open transaction to check in. Returns ------- has_tables : bool True if any tables are present, otherwise False. """ conn = txn.connect() for table_name in asset_db_table_names: if txn.dialect.has_table(conn, table_name): return True return False
Connect to database and create tables. Parameters ---------- txn : sa.engine.Connection, optional The transaction to execute in. If this is not provided, a new transaction will be started with the engine provided. Returns ------- metadata : sa.MetaData The metadata that describes the new assets db. def init_db(self, txn=None): """Connect to database and create tables. Parameters ---------- txn : sa.engine.Connection, optional The transaction to execute in. If this is not provided, a new transaction will be started with the engine provided. Returns ------- metadata : sa.MetaData The metadata that describes the new assets db. """ with ExitStack() as stack: if txn is None: txn = stack.enter_context(self.engine.begin()) tables_already_exist = self._all_tables_present(txn) # Create the SQL tables if they do not already exist. metadata.create_all(txn, checkfirst=True) if tables_already_exist: check_version_info(txn, version_info, ASSET_DB_VERSION) else: write_version_info(txn, version_info, ASSET_DB_VERSION)
Returns a standard set of pandas.DataFrames: equities, futures, exchanges, root_symbols def _load_data(self, equities, futures, exchanges, root_symbols, equity_supplementary_mappings): """ Returns a standard set of pandas.DataFrames: equities, futures, exchanges, root_symbols """ # Set named identifier columns as indices, if provided. _normalize_index_columns_in_place( equities=equities, equity_supplementary_mappings=equity_supplementary_mappings, futures=futures, exchanges=exchanges, root_symbols=root_symbols, ) futures_output = self._normalize_futures(futures) equity_supplementary_mappings_output = ( self._normalize_equity_supplementary_mappings( equity_supplementary_mappings, ) ) exchanges_output = _generate_output_dataframe( data_subset=exchanges, defaults=_exchanges_defaults, ) equities_output, equities_mappings = self._normalize_equities( equities, exchanges_output, ) root_symbols_output = _generate_output_dataframe( data_subset=root_symbols, defaults=_root_symbols_defaults, ) return AssetData( equities=equities_output, equities_mappings=equities_mappings, futures=futures_output, exchanges=exchanges_output, root_symbols=root_symbols_output, equity_supplementary_mappings=equity_supplementary_mappings_output, )
Given an expression representing data to load, perform normalization and forward-filling and return the data, materialized. Only accepts data with a `sid` field. Parameters ---------- assets : pd.int64index the assets to load data for. data_query_cutoff_times : pd.DatetimeIndex The datetime when data should no longer be considered available for a session. expr : expr the expression representing the data to load. odo_kwargs : dict extra keyword arguments to pass to odo when executing the expression. checkpoints : expr, optional the expression representing the checkpointed data for `expr`. Returns ------- raw : pd.dataframe The result of computing expr and materializing the result as a dataframe. def load_raw_data(assets, data_query_cutoff_times, expr, odo_kwargs, checkpoints=None): """ Given an expression representing data to load, perform normalization and forward-filling and return the data, materialized. Only accepts data with a `sid` field. Parameters ---------- assets : pd.int64index the assets to load data for. data_query_cutoff_times : pd.DatetimeIndex The datetime when data should no longer be considered available for a session. expr : expr the expression representing the data to load. odo_kwargs : dict extra keyword arguments to pass to odo when executing the expression. checkpoints : expr, optional the expression representing the checkpointed data for `expr`. Returns ------- raw : pd.dataframe The result of computing expr and materializing the result as a dataframe. """ lower_dt, upper_dt = data_query_cutoff_times[[0, -1]] raw = ffill_query_in_range( expr, lower_dt, upper_dt, checkpoints=checkpoints, odo_kwargs=odo_kwargs, ) sids = raw[SID_FIELD_NAME] raw.drop( sids[~sids.isin(assets)].index, inplace=True ) return raw
Convert a tuple into a range with error handling. Parameters ---------- tup : tuple (len 2 or 3) The tuple to turn into a range. Returns ------- range : range The range from the tuple. Raises ------ ValueError Raised when the tuple length is not 2 or 3. def from_tuple(tup): """Convert a tuple into a range with error handling. Parameters ---------- tup : tuple (len 2 or 3) The tuple to turn into a range. Returns ------- range : range The range from the tuple. Raises ------ ValueError Raised when the tuple length is not 2 or 3. """ if len(tup) not in (2, 3): raise ValueError( 'tuple must contain 2 or 3 elements, not: %d (%r)' % ( len(tup), tup, ), ) return range(*tup)
Convert a tuple into a range but pass ranges through silently. This is useful to ensure that input is a range so that attributes may be accessed with `.start`, `.stop` or so that containment checks are constant time. Parameters ---------- tup_or_range : tuple or range A tuple to pass to from_tuple or a range to return. Returns ------- range : range The input to convert to a range. Raises ------ ValueError Raised when the input is not a tuple or a range. ValueError is also raised if the input is a tuple whose length is not 2 or 3. def maybe_from_tuple(tup_or_range): """Convert a tuple into a range but pass ranges through silently. This is useful to ensure that input is a range so that attributes may be accessed with `.start`, `.stop` or so that containment checks are constant time. Parameters ---------- tup_or_range : tuple or range A tuple to pass to from_tuple or a range to return. Returns ------- range : range The input to convert to a range. Raises ------ ValueError Raised when the input is not a tuple or a range. ValueError is also raised if the input is a tuple whose length is not 2 or 3. """ if isinstance(tup_or_range, tuple): return from_tuple(tup_or_range) elif isinstance(tup_or_range, range): return tup_or_range raise ValueError( 'maybe_from_tuple expects a tuple or range, got %r: %r' % ( type(tup_or_range).__name__, tup_or_range, ), )
Check that the steps of ``a`` and ``b`` are both 1. Parameters ---------- a : range The first range to check. b : range The second range to check. Raises ------ ValueError Raised when either step is not 1. def _check_steps(a, b): """Check that the steps of ``a`` and ``b`` are both 1. Parameters ---------- a : range The first range to check. b : range The second range to check. Raises ------ ValueError Raised when either step is not 1. """ if a.step != 1: raise ValueError('a.step must be equal to 1, got: %s' % a.step) if b.step != 1: raise ValueError('b.step must be equal to 1, got: %s' % b.step)
Check if two ranges overlap. Parameters ---------- a : range The first range. b : range The second range. Returns ------- overlaps : bool Do these ranges overlap. Notes ----- This function does not support ranges with step != 1. def overlap(a, b): """Check if two ranges overlap. Parameters ---------- a : range The first range. b : range The second range. Returns ------- overlaps : bool Do these ranges overlap. Notes ----- This function does not support ranges with step != 1. """ _check_steps(a, b) return a.stop >= b.start and b.stop >= a.start
Merge two ranges with step == 1. Parameters ---------- a : range The first range. b : range The second range. def merge(a, b): """Merge two ranges with step == 1. Parameters ---------- a : range The first range. b : range The second range. """ _check_steps(a, b) return range(min(a.start, b.start), max(a.stop, b.stop))
helper for ``_group_ranges`` def _combine(n, rs): """helper for ``_group_ranges`` """ try: r, rs = peek(rs) except StopIteration: yield n return if overlap(n, r): yield merge(n, r) next(rs) for r in rs: yield r else: yield n for r in rs: yield r
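group_ranges itself is not included in this excerpt; below is a plausible, self-contained sketch of coalescing a start-sorted sequence of step-1 ranges with the overlap/merge rules defined above (the helper names are reused purely for illustration and may not match the real implementation):

def overlap(a, b):
    # Adjacent step-1 ranges (a.stop == b.start) count as overlapping.
    return a.stop >= b.start and b.stop >= a.start


def merge(a, b):
    return range(min(a.start, b.start), max(a.stop, b.stop))


def group_ranges_sketch(ranges):
    """Coalesce a start-sorted iterable of step-1 ranges into disjoint ranges."""
    it = iter(ranges)
    current = next(it, None)
    if current is None:
        return
    for r in it:
        if overlap(current, r):
            current = merge(current, r)
        else:
            yield current
            current = r
    yield current


print(list(group_ranges_sketch([range(0, 2), range(2, 5), range(7, 9)])))
# [range(0, 5), range(7, 9)]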
Return any ranges that intersect. Parameters ---------- ranges : iterable[ranges] A sequence of ranges to check for intersections. Returns ------- intersections : iterable[ranges] A sequence of all of the ranges that intersected in ``ranges``. Examples -------- >>> ranges = [range(0, 1), range(2, 5), range(4, 7)] >>> list(intersecting_ranges(ranges)) [range(2, 5), range(4, 7)] >>> ranges = [range(0, 1), range(2, 3)] >>> list(intersecting_ranges(ranges)) [] >>> ranges = [range(0, 1), range(1, 2)] >>> list(intersecting_ranges(ranges)) [range(0, 1), range(1, 2)] def intersecting_ranges(ranges): """Return any ranges that intersect. Parameters ---------- ranges : iterable[ranges] A sequence of ranges to check for intersections. Returns ------- intersections : iterable[ranges] A sequence of all of the ranges that intersected in ``ranges``. Examples -------- >>> ranges = [range(0, 1), range(2, 5), range(4, 7)] >>> list(intersecting_ranges(ranges)) [range(2, 5), range(4, 7)] >>> ranges = [range(0, 1), range(2, 3)] >>> list(intersecting_ranges(ranges)) [] >>> ranges = [range(0, 1), range(1, 2)] >>> list(intersecting_ranges(ranges)) [range(0, 1), range(1, 2)] """ ranges = sorted(ranges, key=op.attrgetter('start')) return sorted_diff(ranges, group_ranges(ranges))
Returns a handle to data file. Creates containing directory, if needed. def get_data_filepath(name, environ=None): """ Returns a handle to data file. Creates containing directory, if needed. """ dr = data_root(environ) if not os.path.exists(dr): os.makedirs(dr) return os.path.join(dr, name)
Does `series_or_df` have data on or before first_date and on or after last_date? def has_data_for_dates(series_or_df, first_date, last_date): """ Does `series_or_df` have data on or before first_date and on or after last_date? """ dts = series_or_df.index if not isinstance(dts, pd.DatetimeIndex): raise TypeError("Expected a DatetimeIndex, but got %s." % type(dts)) first, last = dts[[0, -1]] return (first <= first_date) and (last >= last_date)
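A quick check of the same bounds test on a toy series; the window bounds are arbitrary example dates:

import pandas as pd

returns = pd.Series(
    [0.01, -0.02, 0.005],
    index=pd.to_datetime(['2020-01-02', '2020-01-03', '2020-01-06']),
)

first, last = returns.index[[0, -1]]
covers = (first <= pd.Timestamp('2020-01-02')) and (last >= pd.Timestamp('2020-01-06'))
print(covers)  # True: the series spans the requested window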
Load benchmark returns and treasury yield curves for the given calendar and benchmark symbol. Benchmarks are downloaded as a Series from IEX Trading. Treasury curves are US Treasury Bond rates and are downloaded from 'www.federalreserve.gov' by default. For Canadian exchanges, a loader for Canadian bonds from the Bank of Canada is also available. Results downloaded from the internet are cached in ~/.zipline/data. Subsequent loads will attempt to read from the cached files before falling back to redownload. Parameters ---------- trading_day : pandas.CustomBusinessDay, optional A trading_day used to determine the latest day for which we expect to have data. Defaults to an NYSE trading day. trading_days : pd.DatetimeIndex, optional A calendar of trading days. Also used for determining what cached dates we should expect to have cached. Defaults to the NYSE calendar. bm_symbol : str, optional Symbol for the benchmark index to load. Defaults to 'SPY', the ticker for the S&P 500, provided by IEX Trading. Returns ------- (benchmark_returns, treasury_curves) : (pd.Series, pd.DataFrame) Notes ----- Both return values are DatetimeIndexed with values dated to midnight in UTC of each stored date. The columns of `treasury_curves` are: '1month', '3month', '6month', '1year','2year','3year','5year','7year','10year','20year','30year' def load_market_data(trading_day=None, trading_days=None, bm_symbol='SPY', environ=None): """ Load benchmark returns and treasury yield curves for the given calendar and benchmark symbol. Benchmarks are downloaded as a Series from IEX Trading. Treasury curves are US Treasury Bond rates and are downloaded from 'www.federalreserve.gov' by default. For Canadian exchanges, a loader for Canadian bonds from the Bank of Canada is also available. Results downloaded from the internet are cached in ~/.zipline/data. Subsequent loads will attempt to read from the cached files before falling back to redownload. Parameters ---------- trading_day : pandas.CustomBusinessDay, optional A trading_day used to determine the latest day for which we expect to have data. Defaults to an NYSE trading day. trading_days : pd.DatetimeIndex, optional A calendar of trading days. Also used for determining what cached dates we should expect to have cached. Defaults to the NYSE calendar. bm_symbol : str, optional Symbol for the benchmark index to load. Defaults to 'SPY', the ticker for the S&P 500, provided by IEX Trading. Returns ------- (benchmark_returns, treasury_curves) : (pd.Series, pd.DataFrame) Notes ----- Both return values are DatetimeIndexed with values dated to midnight in UTC of each stored date. The columns of `treasury_curves` are: '1month', '3month', '6month', '1year','2year','3year','5year','7year','10year','20year','30year' """ if trading_day is None: trading_day = get_calendar('XNYS').day if trading_days is None: trading_days = get_calendar('XNYS').all_sessions first_date = trading_days[0] now = pd.Timestamp.utcnow() # we will fill missing benchmark data through latest trading date last_date = trading_days[trading_days.get_loc(now, method='ffill')] br = ensure_benchmark_data( bm_symbol, first_date, last_date, now, # We need the trading_day to figure out the close prior to the first # date so that we can compute returns for the first date. 
trading_day, environ, ) tc = ensure_treasury_data( bm_symbol, first_date, last_date, now, environ, ) # combine dt indices and reindex using ffill then bfill all_dt = br.index.union(tc.index) br = br.reindex(all_dt, method='ffill').fillna(method='bfill') tc = tc.reindex(all_dt, method='ffill').fillna(method='bfill') benchmark_returns = br[br.index.slice_indexer(first_date, last_date)] treasury_curves = tc[tc.index.slice_indexer(first_date, last_date)] return benchmark_returns, treasury_curves
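A usage sketch for ``load_market_data``; this either reads the ~/.zipline cache or hits the network, so treat it as illustrative rather than deterministic:

benchmark_returns, treasury_curves = load_market_data(bm_symbol='SPY')

# Daily benchmark returns, indexed by UTC midnight timestamps.
print(benchmark_returns.tail())

# Treasury curves, one column per tenor ('1month' ... '30year').
print(treasury_curves[['1month', '10year']].tail())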
Ensure we have benchmark data for `symbol` from `first_date` to `last_date` Parameters ---------- symbol : str The symbol for the benchmark to load. first_date : pd.Timestamp First required date for the cache. last_date : pd.Timestamp Last required date for the cache. now : pd.Timestamp The current time. This is used to prevent repeated attempts to re-download data that isn't available due to scheduling quirks or other failures. trading_day : pd.CustomBusinessDay A trading day delta. Used to find the day before first_date so we can get the close of the day prior to first_date. We attempt to download data unless we already have data stored at the data cache for `symbol` whose first entry is before or on `first_date` and whose last entry is on or after `last_date`. If we perform a download and the cache criteria are not satisfied, we wait at least one hour before attempting a redownload. This is determined by comparing the current time to the result of os.path.getmtime on the cache path. def ensure_benchmark_data(symbol, first_date, last_date, now, trading_day, environ=None): """ Ensure we have benchmark data for `symbol` from `first_date` to `last_date` Parameters ---------- symbol : str The symbol for the benchmark to load. first_date : pd.Timestamp First required date for the cache. last_date : pd.Timestamp Last required date for the cache. now : pd.Timestamp The current time. This is used to prevent repeated attempts to re-download data that isn't available due to scheduling quirks or other failures. trading_day : pd.CustomBusinessDay A trading day delta. Used to find the day before first_date so we can get the close of the day prior to first_date. We attempt to download data unless we already have data stored at the data cache for `symbol` whose first entry is before or on `first_date` and whose last entry is on or after `last_date`. If we perform a download and the cache criteria are not satisfied, we wait at least one hour before attempting a redownload. This is determined by comparing the current time to the result of os.path.getmtime on the cache path. """ filename = get_benchmark_filename(symbol) data = _load_cached_data(filename, first_date, last_date, now, 'benchmark', environ) if data is not None: return data # If no cached data was found or it was missing any dates then download the # necessary data. logger.info( ('Downloading benchmark data for {symbol!r} ' 'from {first_date} to {last_date}'), symbol=symbol, first_date=first_date - trading_day, last_date=last_date ) try: data = get_benchmark_returns(symbol) data.to_csv(get_data_filepath(filename, environ)) except (OSError, IOError, HTTPError): logger.exception('Failed to cache the new benchmark returns') raise if not has_data_for_dates(data, first_date, last_date): logger.warn( ("Still don't have expected benchmark data for {symbol!r} " "from {first_date} to {last_date} after redownload!"), symbol=symbol, first_date=first_date - trading_day, last_date=last_date ) return data
Ensure we have treasury data from treasury module associated with `symbol`. Parameters ---------- symbol : str Benchmark symbol for which we're loading associated treasury curves. first_date : pd.Timestamp First date required to be in the cache. last_date : pd.Timestamp Last date required to be in the cache. now : pd.Timestamp The current time. This is used to prevent repeated attempts to re-download data that isn't available due to scheduling quirks or other failures. We attempt to download data unless we already have data stored in the cache for `module_name` whose first entry is before or on `first_date` and whose last entry is on or after `last_date`. If we perform a download and the cache criteria are not satisfied, we wait at least one hour before attempting a redownload. This is determined by comparing the current time to the result of os.path.getmtime on the cache path. def ensure_treasury_data(symbol, first_date, last_date, now, environ=None): """ Ensure we have treasury data from treasury module associated with `symbol`. Parameters ---------- symbol : str Benchmark symbol for which we're loading associated treasury curves. first_date : pd.Timestamp First date required to be in the cache. last_date : pd.Timestamp Last date required to be in the cache. now : pd.Timestamp The current time. This is used to prevent repeated attempts to re-download data that isn't available due to scheduling quirks or other failures. We attempt to download data unless we already have data stored in the cache for `module_name` whose first entry is before or on `first_date` and whose last entry is on or after `last_date`. If we perform a download and the cache criteria are not satisfied, we wait at least one hour before attempting a redownload. This is determined by comparing the current time to the result of os.path.getmtime on the cache path. """ loader_module, filename, source = INDEX_MAPPING.get( symbol, INDEX_MAPPING['SPY'], ) first_date = max(first_date, loader_module.earliest_possible_date()) data = _load_cached_data(filename, first_date, last_date, now, 'treasury', environ) if data is not None: return data # If no cached data was found or it was missing any dates then download the # necessary data. logger.info( ('Downloading treasury data for {symbol!r} ' 'from {first_date} to {last_date}'), symbol=symbol, first_date=first_date, last_date=last_date ) try: data = loader_module.get_treasury_data(first_date, last_date) data.to_csv(get_data_filepath(filename, environ)) except (OSError, IOError, HTTPError): logger.exception('failed to cache treasury data') if not has_data_for_dates(data, first_date, last_date): logger.warn( ("Still don't have expected treasury data for {symbol!r} " "from {first_date} to {last_date} after redownload!"), symbol=symbol, first_date=first_date, last_date=last_date ) return data
Specialize a term if it's loadable. def maybe_specialize(term, domain): """Specialize a term if it's loadable. """ if isinstance(term, LoadableTerm): return term.specialize(domain) return term
Add a term and all its children to ``graph``. ``parents`` is the set of all the parents of ``term`` that we've added so far. It is only used to detect dependency cycles. def _add_to_graph(self, term, parents): """ Add a term and all its children to ``graph``. ``parents`` is the set of all the parents of ``term`` that we've added so far. It is only used to detect dependency cycles. """ if self._frozen: raise ValueError( "Can't mutate %s after construction." % type(self).__name__ ) # If we've seen this node already as a parent of the current traversal, # it means we have an unsatisfiable dependency. This should only be # possible if the term's inputs are mutated after construction. if term in parents: raise CyclicDependency(term) parents.add(term) self.graph.add_node(term) for dependency in term.dependencies: self._add_to_graph(dependency, parents) self.graph.add_edge(dependency, term) parents.remove(term)
Return a topologically-sorted iterator over the terms in ``self`` which need to be computed. def execution_order(self, refcounts): """ Return a topologically-sorted iterator over the terms in ``self`` which need to be computed. """ return iter(nx.topological_sort( self.graph.subgraph( {term for term, refcount in refcounts.items() if refcount > 0}, ), ))
Calculate initial refcounts for execution of this graph. Parameters ---------- initial_terms : iterable[Term] An iterable of terms that were pre-computed before graph execution. Each node starts with a refcount equal to its outdegree, and output nodes get one extra reference to ensure that they're still in the graph at the end of execution. def initial_refcounts(self, initial_terms): """ Calculate initial refcounts for execution of this graph. Parameters ---------- initial_terms : iterable[Term] An iterable of terms that were pre-computed before graph execution. Each node starts with a refcount equal to its outdegree, and output nodes get one extra reference to ensure that they're still in the graph at the end of execution. """ refcounts = self.graph.out_degree() for t in self.outputs.values(): refcounts[t] += 1 for t in initial_terms: self._decref_dependencies_recursive(t, refcounts, set()) return refcounts
Decrement terms recursively. Notes ----- This should only be used to build the initial workspace; after that we should use: :meth:`~zipline.pipeline.graph.TermGraph.decref_dependencies` def _decref_dependencies_recursive(self, term, refcounts, garbage): """ Decrement terms recursively. Notes ----- This should only be used to build the initial workspace; after that we should use: :meth:`~zipline.pipeline.graph.TermGraph.decref_dependencies` """ # Edges are tuples of (from, to). for parent, _ in self.graph.in_edges([term]): refcounts[parent] -= 1 # No one else depends on this term. Remove it from the # workspace to conserve memory. if refcounts[parent] == 0: garbage.add(parent) self._decref_dependencies_recursive(parent, refcounts, garbage)
Decrement in-edges for ``term`` after computation. Parameters ---------- term : zipline.pipeline.Term The term whose parents should be decref'ed. refcounts : dict[Term -> int] Dictionary of refcounts. Returns ------- garbage : set[Term] Terms whose refcounts hit zero after decrefing. def decref_dependencies(self, term, refcounts): """ Decrement in-edges for ``term`` after computation. Parameters ---------- term : zipline.pipeline.Term The term whose parents should be decref'ed. refcounts : dict[Term -> int] Dictionary of refcounts. Returns ------- garbage : set[Term] Terms whose refcounts hit zero after decrefing. """ garbage = set() # Edges are tuples of (from, to). for parent, _ in self.graph.in_edges([term]): refcounts[parent] -= 1 # No one else depends on this term. Remove it from the # workspace to conserve memory. if refcounts[parent] == 0: garbage.add(parent) return garbage
For all pairs (term, input) such that `input` is an input to `term`, compute a mapping:: (term, input) -> offset(term, input) where ``offset(term, input)`` is the number of rows that ``term`` should truncate off the raw array produced for ``input`` before using it. We compute this value as follows:: offset(term, input) = (extra_rows_computed(input) - extra_rows_computed(term) - requested_extra_rows(term, input)) Examples -------- Case 1 ~~~~~~ Factor A needs 5 extra rows of USEquityPricing.close, and Factor B needs 3 extra rows of the same. Factor A also requires 5 extra rows of USEquityPricing.high, which no other Factor uses. We don't require any extra rows of Factor A or Factor B We load 5 extra rows of both `price` and `high` to ensure we can service Factor A, and the following offsets get computed:: offset[Factor A, USEquityPricing.close] == (5 - 0) - 5 == 0 offset[Factor A, USEquityPricing.high] == (5 - 0) - 5 == 0 offset[Factor B, USEquityPricing.close] == (5 - 0) - 3 == 2 offset[Factor B, USEquityPricing.high] raises KeyError. Case 2 ~~~~~~ Factor A needs 5 extra rows of USEquityPricing.close, and Factor B needs 3 extra rows of Factor A, and Factor B needs 2 extra rows of USEquityPricing.close. We load 8 extra rows of USEquityPricing.close (enough to load 5 extra rows of Factor A), and the following offsets get computed:: offset[Factor A, USEquityPricing.close] == (8 - 3) - 5 == 0 offset[Factor B, USEquityPricing.close] == (8 - 0) - 2 == 6 offset[Factor B, Factor A] == (3 - 0) - 3 == 0 Notes ----- `offset(term, input) >= 0` for all valid pairs, since `input` must be an input to `term` if the pair appears in the mapping. This value is useful because we load enough rows of each input to serve all possible dependencies. However, for any given dependency, we only want to compute using the actual number of required extra rows for that dependency. We can do so by truncating off the first `offset` rows of the loaded data for `input`. See Also -------- :meth:`zipline.pipeline.graph.ExecutionPlan.offset` :meth:`zipline.pipeline.engine.ExecutionPlan.mask_and_dates_for_term` :meth:`zipline.pipeline.engine.SimplePipelineEngine._inputs_for_term` def offset(self): """ For all pairs (term, input) such that `input` is an input to `term`, compute a mapping:: (term, input) -> offset(term, input) where ``offset(term, input)`` is the number of rows that ``term`` should truncate off the raw array produced for ``input`` before using it. We compute this value as follows:: offset(term, input) = (extra_rows_computed(input) - extra_rows_computed(term) - requested_extra_rows(term, input)) Examples -------- Case 1 ~~~~~~ Factor A needs 5 extra rows of USEquityPricing.close, and Factor B needs 3 extra rows of the same. Factor A also requires 5 extra rows of USEquityPricing.high, which no other Factor uses. We don't require any extra rows of Factor A or Factor B We load 5 extra rows of both `price` and `high` to ensure we can service Factor A, and the following offsets get computed:: offset[Factor A, USEquityPricing.close] == (5 - 0) - 5 == 0 offset[Factor A, USEquityPricing.high] == (5 - 0) - 5 == 0 offset[Factor B, USEquityPricing.close] == (5 - 0) - 3 == 2 offset[Factor B, USEquityPricing.high] raises KeyError. Case 2 ~~~~~~ Factor A needs 5 extra rows of USEquityPricing.close, and Factor B needs 3 extra rows of Factor A, and Factor B needs 2 extra rows of USEquityPricing.close. 
We load 8 extra rows of USEquityPricing.close (enough to load 5 extra rows of Factor A), and the following offsets get computed:: offset[Factor A, USEquityPricing.close] == (8 - 3) - 5 == 0 offset[Factor B, USEquityPricing.close] == (8 - 0) - 2 == 6 offset[Factor B, Factor A] == (3 - 0) - 3 == 0 Notes ----- `offset(term, input) >= 0` for all valid pairs, since `input` must be an input to `term` if the pair appears in the mapping. This value is useful because we load enough rows of each input to serve all possible dependencies. However, for any given dependency, we only want to compute using the actual number of required extra rows for that dependency. We can do so by truncating off the first `offset` rows of the loaded data for `input`. See Also -------- :meth:`zipline.pipeline.graph.ExecutionPlan.offset` :meth:`zipline.pipeline.engine.ExecutionPlan.mask_and_dates_for_term` :meth:`zipline.pipeline.engine.SimplePipelineEngine._inputs_for_term` """ extra = self.extra_rows out = {} for term in self.graph: for dep, requested_extra_rows in term.dependencies.items(): specialized_dep = maybe_specialize(dep, self.domain) # How much bigger is the result for dep compared to term? size_difference = extra[specialized_dep] - extra[term] # Subtract the portion of that difference that was required by # term's lookback window. offset = size_difference - requested_extra_rows out[term, specialized_dep] = offset return out
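The offset arithmetic can be checked with a standalone sketch that mirrors Case 1 above; the string keys here stand in for real terms and pricing columns:

# Extra rows actually computed for each term/column (Case 1 above).
extra_rows_computed = {'close': 5, 'high': 5, 'A': 0, 'B': 0}

# Extra rows each consumer asked for from each of its inputs.
requested_extra_rows = {
    ('A', 'close'): 5,
    ('A', 'high'): 5,
    ('B', 'close'): 3,
}

offsets = {
    (term, dep): extra_rows_computed[dep] - extra_rows_computed[term] - requested
    for (term, dep), requested in requested_extra_rows.items()
}

assert offsets[('A', 'close')] == 0  # (5 - 0) - 5
assert offsets[('A', 'high')] == 0   # (5 - 0) - 5
assert offsets[('B', 'close')] == 2  # (5 - 0) - 3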
A dict mapping `term` -> `# of extra rows to load/compute of `term`. Notes ---- This value depends on the other terms in the graph that require `term` **as an input**. This is not to be confused with `term.dependencies`, which describes how many additional rows of `term`'s inputs we need to load, and which is determined entirely by `Term` itself. Examples -------- Our graph contains the following terms: A = SimpleMovingAverage([USEquityPricing.high], window_length=5) B = SimpleMovingAverage([USEquityPricing.high], window_length=10) C = SimpleMovingAverage([USEquityPricing.low], window_length=8) To compute N rows of A, we need N + 4 extra rows of `high`. To compute N rows of B, we need N + 9 extra rows of `high`. To compute N rows of C, we need N + 7 extra rows of `low`. We store the following extra_row requirements: self.extra_rows[high] = 9 # Ensures that we can service B. self.extra_rows[low] = 7 See Also -------- :meth:`zipline.pipeline.graph.ExecutionPlan.offset` :meth:`zipline.pipeline.term.Term.dependencies` def extra_rows(self): """ A dict mapping `term` -> `# of extra rows to load/compute of `term`. Notes ---- This value depends on the other terms in the graph that require `term` **as an input**. This is not to be confused with `term.dependencies`, which describes how many additional rows of `term`'s inputs we need to load, and which is determined entirely by `Term` itself. Examples -------- Our graph contains the following terms: A = SimpleMovingAverage([USEquityPricing.high], window_length=5) B = SimpleMovingAverage([USEquityPricing.high], window_length=10) C = SimpleMovingAverage([USEquityPricing.low], window_length=8) To compute N rows of A, we need N + 4 extra rows of `high`. To compute N rows of B, we need N + 9 extra rows of `high`. To compute N rows of C, we need N + 7 extra rows of `low`. We store the following extra_row requirements: self.extra_rows[high] = 9 # Ensures that we can service B. self.extra_rows[low] = 7 See Also -------- :meth:`zipline.pipeline.graph.ExecutionPlan.offset` :meth:`zipline.pipeline.term.Term.dependencies` """ return { term: attrs['extra_rows'] for term, attrs in iteritems(self.graph.node) }
Ensure that we're going to compute at least N extra rows of `term`. def _ensure_extra_rows(self, term, N): """ Ensure that we're going to compute at least N extra rows of `term`. """ attrs = self.graph.node[term] attrs['extra_rows'] = max(N, attrs.get('extra_rows', 0))
Load mask and mask row labels for term. Parameters ---------- term : Term The term to load the mask and labels for. root_mask_term : Term The term that represents the root asset exists mask. workspace : dict[Term, any] The values that have been computed for each term. all_dates : pd.DatetimeIndex All of the dates that are being computed for in the pipeline. Returns ------- mask : np.ndarray The correct mask for this term. dates : np.ndarray The slice of dates for this term. def mask_and_dates_for_term(self, term, root_mask_term, workspace, all_dates): """ Load mask and mask row labels for term. Parameters ---------- term : Term The term to load the mask and labels for. root_mask_term : Term The term that represents the root asset exists mask. workspace : dict[Term, any] The values that have been computed for each term. all_dates : pd.DatetimeIndex All of the dates that are being computed for in the pipeline. Returns ------- mask : np.ndarray The correct mask for this term. dates : np.ndarray The slice of dates for this term. """ mask = term.mask mask_offset = self.extra_rows[mask] - self.extra_rows[term] # This offset is computed against root_mask_term because that is what # determines the shape of the top-level dates array. dates_offset = ( self.extra_rows[root_mask_term] - self.extra_rows[term] ) return workspace[mask][mask_offset:], all_dates[dates_offset:]
Make sure that we've specialized all loadable terms in the graph. def _assert_all_loadable_terms_specialized_to(self, domain): """Make sure that we've specialized all loadable terms in the graph. """ for term in self.graph.node: if isinstance(term, LoadableTerm): assert term.domain is domain
Make an extension for an AdjustedArrayWindow specialization. def window_specialization(typename): """Make an extension for an AdjustedArrayWindow specialization.""" return Extension( 'zipline.lib._{name}window'.format(name=typename), ['zipline/lib/_{name}window.pyx'.format(name=typename)], depends=['zipline/lib/_windowtemplate.pxi'], )
Read a requirements.txt file, expressed as a path relative to Zipline root. Returns requirements with the pinned versions as lower bounds if `strict_bounds` is falsey. def read_requirements(path, strict_bounds, conda_format=False, filter_names=None): """ Read a requirements.txt file, expressed as a path relative to Zipline root. Returns requirements with the pinned versions as lower bounds if `strict_bounds` is falsey. """ real_path = join(dirname(abspath(__file__)), path) with open(real_path) as f: reqs = _filter_requirements(f.readlines(), filter_names=filter_names, filter_sys_version=not conda_format) if not strict_bounds: reqs = map(_with_bounds, reqs) if conda_format: reqs = map(_conda_format, reqs) return list(reqs)
Normalize a time. If the time is tz-naive, assume it is UTC. def ensure_utc(time, tz='UTC'): """ Normalize a time. If the time is tz-naive, assume it is UTC. """ if not time.tzinfo: time = time.replace(tzinfo=pytz.timezone(tz)) return time.replace(tzinfo=pytz.utc)
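A quick usage sketch of ``ensure_utc``, assuming the function above is in scope:

import datetime
import pytz

naive = datetime.time(9, 31)
aware = ensure_utc(naive)
assert aware.tzinfo is pytz.utc
assert (aware.hour, aware.minute) == (9, 31)

# For an already tz-aware time, the tzinfo is simply replaced with UTC
# (no wall-clock conversion is performed).
eastern = datetime.time(9, 31, tzinfo=pytz.timezone('US/Eastern'))
assert ensure_utc(eastern).tzinfo is pytz.utc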
Builds the offset argument for event rules. def _build_offset(offset, kwargs, default): """ Builds the offset argument for event rules. """ if offset is None: if not kwargs: return default # use the default. else: return _td_check(datetime.timedelta(**kwargs)) elif kwargs: raise ValueError('Cannot pass kwargs and an offset') elif isinstance(offset, datetime.timedelta): return _td_check(offset) else: raise TypeError("Must pass 'hours' and/or 'minutes' as keywords")
Builds the date argument for event rules. def _build_date(date, kwargs): """ Builds the date argument for event rules. """ if date is None: if not kwargs: raise ValueError('Must pass a date or kwargs') else: return datetime.date(**kwargs) elif kwargs: raise ValueError('Cannot pass kwargs and a date') else: return date
Builds the time argument for event rules. def _build_time(time, kwargs): """ Builds the time argument for event rules. """ tz = kwargs.pop('tz', 'UTC') if time: if kwargs: raise ValueError('Cannot pass kwargs and a time') else: return ensure_utc(time, tz) elif not kwargs: raise ValueError('Must pass a time or kwargs') else: return datetime.time(**kwargs)
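A sketch of how the three ``_build_*`` helpers above behave when called directly; it assumes ``_td_check`` (not shown here) accepts a 15-minute delta, which is how the scheduling rules normally use it:

import datetime
import pytz

# Offsets: pass an explicit timedelta or keyword components, never both.
assert _build_offset(None, {'minutes': 15}, None) == datetime.timedelta(minutes=15)

# Dates and times are built from keywords when not given directly.
assert _build_date(None, {'year': 2014, 'month': 1, 'day': 6}) == datetime.date(2014, 1, 6)

# An explicit time is normalized to UTC.
assert _build_time(datetime.time(9, 31), {}).tzinfo is pytz.utc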
A preprocessor that coerces integral floats to ints. Receipt of non-integral floats raises a TypeError. def lossless_float_to_int(funcname, func, argname, arg): """ A preprocessor that coerces integral floats to ints. Receipt of non-integral floats raises a TypeError. """ if not isinstance(arg, float): return arg arg_as_int = int(arg) if arg == arg_as_int: warnings.warn( "{f} expected an int for argument {name!r}, but got float {arg}." " Coercing to int.".format( f=funcname, name=argname, arg=arg, ), ) return arg_as_int raise TypeError(arg)
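Called directly (outside the preprocessing machinery it is normally wired into), the coercion behaves like this, assuming the function above is in scope:

import warnings

# Integral floats are coerced (with a warning); ints pass through untouched.
with warnings.catch_warnings():
    warnings.simplefilter('ignore')
    assert lossless_float_to_int('order', None, 'amount', 10.0) == 10
assert lossless_float_to_int('order', None, 'amount', 10) == 10

# Non-integral floats are rejected.
try:
    lossless_float_to_int('order', None, 'amount', 10.5)
except TypeError:
    pass  # expected: the value cannot be losslessly converted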
Constructs an event rule from the factory api. def make_eventrule(date_rule, time_rule, cal, half_days=True): """ Constructs an event rule from the factory api. """ _check_if_not_called(date_rule) _check_if_not_called(time_rule) if half_days: inner_rule = date_rule & time_rule else: inner_rule = date_rule & time_rule & NotHalfDay() opd = OncePerDay(rule=inner_rule) # This is where a scheduled function's rule is associated with a calendar. opd.cal = cal return opd
Adds an event to the manager. def add_event(self, event, prepend=False): """ Adds an event to the manager. """ if prepend: self._events.insert(0, event) else: self._events.append(event)
Calls the callable only when the rule is triggered. def handle_data(self, context, data, dt): """ Calls the callable only when the rule is triggered. """ if self.rule.should_trigger(dt): self.callback(context, data)
Composes the two rules with a lazy composer. def should_trigger(self, dt): """ Composes the two rules with a lazy composer. """ return self.composer( self.first.should_trigger, self.second.should_trigger, dt )
Given a date, find that day's open and period end (open + offset). def calculate_dates(self, dt): """ Given a date, find that day's open and period end (open + offset). """ period_start, period_close = self.cal.open_and_close_for_session( self.cal.minute_to_session_label(dt), ) # Align the market open and close times here with the execution times # used by the simulation clock. This ensures that scheduled functions # trigger at the correct times. self._period_start = self.cal.execution_time_from_open(period_start) self._period_close = self.cal.execution_time_from_close(period_close) self._period_end = self._period_start + self.offset - self._one_minute
Given a dt, find that day's close and period start (close - offset). def calculate_dates(self, dt): """ Given a dt, find that day's close and period start (close - offset). """ period_end = self.cal.open_and_close_for_session( self.cal.minute_to_session_label(dt), )[1] # Align the market close time here with the execution time used by the # simulation clock. This ensures that scheduled functions trigger at # the correct times. self._period_end = self.cal.execution_time_from_close(period_end) self._period_start = self._period_end - self.offset self._period_close = self._period_end
Zeroes out any values that would not fit into a uint32. Parameters ---------- df : pd.DataFrame The dataframe to winsorise. invalid_data_behavior : {'warn', 'raise', 'ignore'} What to do when data is outside the bounds of a uint32. *columns : iterable[str] The names of the columns to check. Returns ------- truncated : pd.DataFrame ``df`` with values that do not fit into a uint32 zeroed out. def winsorise_uint32(df, invalid_data_behavior, column, *columns): """Zeroes out any values that would not fit into a uint32. Parameters ---------- df : pd.DataFrame The dataframe to winsorise. invalid_data_behavior : {'warn', 'raise', 'ignore'} What to do when data is outside the bounds of a uint32. *columns : iterable[str] The names of the columns to check. Returns ------- truncated : pd.DataFrame ``df`` with values that do not fit into a uint32 zeroed out. """ columns = list((column,) + columns) mask = df[columns] > UINT32_MAX if invalid_data_behavior != 'ignore': mask |= df[columns].isnull() else: # we are not going to generate a warning or error for this so just use # nan_to_num df[columns] = np.nan_to_num(df[columns]) mv = mask.values if mv.any(): if invalid_data_behavior == 'raise': raise ValueError( '%d values out of bounds for uint32: %r' % ( mv.sum(), df[mask.any(axis=1)], ), ) if invalid_data_behavior == 'warn': warnings.warn( 'Ignoring %d values because they are out of bounds for' ' uint32: %r' % ( mv.sum(), df[mask.any(axis=1)], ), stacklevel=3, # one extra frame for `expect_element` ) df[mask] = 0 return df
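A sketch of the winsorisation above on a toy frame; ``UINT32_MAX`` is assumed to be 2 ** 32 - 1, as in the module this function comes from:

import numpy as np
import pandas as pd

df = pd.DataFrame({
    'open': [10.0, 2.0],
    'volume': [100.0, 2.0 ** 33],   # second row overflows a uint32
})

cleaned = winsorise_uint32(df, 'ignore', 'open', 'volume')
assert cleaned.loc[1, 'volume'] == 0      # out-of-bounds value zeroed out
assert cleaned.loc[0, 'volume'] == 100.0  # in-range values untouched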
Parameters ---------- data : iterable[tuple[int, pandas.DataFrame or bcolz.ctable]] The data chunks to write. Each chunk should be a tuple of sid and the data for that asset. assets : set[int], optional The assets that should be in ``data``. If this is provided we will check ``data`` against the assets and provide better progress information. show_progress : bool, optional Whether or not to show a progress bar while writing. invalid_data_behavior : {'warn', 'raise', 'ignore'}, optional What to do when data is encountered that is outside the range of a uint32. Returns ------- table : bcolz.ctable The newly-written table. def write(self, data, assets=None, show_progress=False, invalid_data_behavior='warn'): """ Parameters ---------- data : iterable[tuple[int, pandas.DataFrame or bcolz.ctable]] The data chunks to write. Each chunk should be a tuple of sid and the data for that asset. assets : set[int], optional The assets that should be in ``data``. If this is provided we will check ``data`` against the assets and provide better progress information. show_progress : bool, optional Whether or not to show a progress bar while writing. invalid_data_behavior : {'warn', 'raise', 'ignore'}, optional What to do when data is encountered that is outside the range of a uint32. Returns ------- table : bcolz.ctable The newly-written table. """ ctx = maybe_show_progress( ( (sid, self.to_ctable(df, invalid_data_behavior)) for sid, df in data ), show_progress=show_progress, item_show_func=self.progress_bar_item_show_func, label=self.progress_bar_message, length=len(assets) if assets is not None else None, ) with ctx as it: return self._write_internal(it, assets)
Read CSVs as DataFrames from our asset map. Parameters ---------- asset_map : dict[int -> str] A mapping from asset id to file path with the CSV data for that asset show_progress : bool Whether or not to show a progress bar while writing. invalid_data_behavior : {'warn', 'raise', 'ignore'} What to do when data is encountered that is outside the range of a uint32. def write_csvs(self, asset_map, show_progress=False, invalid_data_behavior='warn'): """Read CSVs as DataFrames from our asset map. Parameters ---------- asset_map : dict[int -> str] A mapping from asset id to file path with the CSV data for that asset show_progress : bool Whether or not to show a progress bar while writing. invalid_data_behavior : {'warn', 'raise', 'ignore'} What to do when data is encountered that is outside the range of a uint32. """ read = partial( read_csv, parse_dates=['day'], index_col='day', dtype=self._csv_dtypes, ) return self.write( ((asset, read(path)) for asset, path in iteritems(asset_map)), assets=viewkeys(asset_map), show_progress=show_progress, invalid_data_behavior=invalid_data_behavior, )
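A sketch of driving the CSV path end to end; the import locations, output directory, and per-sid CSV paths are assumptions here (they vary by zipline version and setup), not facts from the snippet above:

import pandas as pd
from trading_calendars import get_calendar                       # location varies by version
from zipline.data.us_equity_pricing import BcolzDailyBarWriter   # location varies by version

calendar = get_calendar('XNYS')
writer = BcolzDailyBarWriter(
    '/tmp/daily_bars.bcolz',               # hypothetical output root
    calendar,
    pd.Timestamp('2015-01-02', tz='UTC'),  # start_session
    pd.Timestamp('2015-12-31', tz='UTC'),  # end_session
)

# Each CSV is expected to have a 'day' column plus the OHLCV columns.
asset_map = {
    1: '/tmp/csvs/equity_1.csv',           # hypothetical per-sid CSVs
    2: '/tmp/csvs/equity_2.csv',
}
writer.write_csvs(asset_map, show_progress=True)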
Internal implementation of write. `iterator` should be an iterator yielding pairs of (asset, ctable). def _write_internal(self, iterator, assets): """ Internal implementation of write. `iterator` should be an iterator yielding pairs of (asset, ctable). """ total_rows = 0 first_row = {} last_row = {} calendar_offset = {} # Maps column name -> output carray. columns = { k: carray(array([], dtype=uint32_dtype)) for k in US_EQUITY_PRICING_BCOLZ_COLUMNS } earliest_date = None sessions = self._calendar.sessions_in_range( self._start_session, self._end_session ) if assets is not None: @apply def iterator(iterator=iterator, assets=set(assets)): for asset_id, table in iterator: if asset_id not in assets: raise ValueError('unknown asset id %r' % asset_id) yield asset_id, table for asset_id, table in iterator: nrows = len(table) for column_name in columns: if column_name == 'id': # We know what the content of this column is, so don't # bother reading it. columns['id'].append( full((nrows,), asset_id, dtype='uint32'), ) continue columns[column_name].append(table[column_name]) if earliest_date is None: earliest_date = table["day"][0] else: earliest_date = min(earliest_date, table["day"][0]) # Bcolz doesn't support ints as keys in `attrs`, so convert # assets to strings for use as attr keys. asset_key = str(asset_id) # Calculate the index into the array of the first and last row # for this asset. This allows us to efficiently load single # assets when querying the data back out of the table. first_row[asset_key] = total_rows last_row[asset_key] = total_rows + nrows - 1 total_rows += nrows table_day_to_session = compose( self._calendar.minute_to_session_label, partial(Timestamp, unit='s', tz='UTC'), ) asset_first_day = table_day_to_session(table['day'][0]) asset_last_day = table_day_to_session(table['day'][-1]) asset_sessions = sessions[ sessions.slice_indexer(asset_first_day, asset_last_day) ] assert len(table) == len(asset_sessions), ( 'Got {} rows for daily bars table with first day={}, last ' 'day={}, expected {} rows.\n' 'Missing sessions: {}\n' 'Extra sessions: {}'.format( len(table), asset_first_day.date(), asset_last_day.date(), len(asset_sessions), asset_sessions.difference( to_datetime( np.array(table['day']), unit='s', utc=True, ) ).tolist(), to_datetime( np.array(table['day']), unit='s', utc=True, ).difference(asset_sessions).tolist(), ) ) # Calculate the number of trading days between the first date # in the stored data and the first date of **this** asset. This # offset used for output alignment by the reader. calendar_offset[asset_key] = sessions.get_loc(asset_first_day) # This writes the table to disk. full_table = ctable( columns=[ columns[colname] for colname in US_EQUITY_PRICING_BCOLZ_COLUMNS ], names=US_EQUITY_PRICING_BCOLZ_COLUMNS, rootdir=self._filename, mode='w', ) full_table.attrs['first_trading_day'] = ( earliest_date if earliest_date is not None else iNaT ) full_table.attrs['first_row'] = first_row full_table.attrs['last_row'] = last_row full_table.attrs['calendar_offset'] = calendar_offset full_table.attrs['calendar_name'] = self._calendar.name full_table.attrs['start_session_ns'] = self._start_session.value full_table.attrs['end_session_ns'] = self._end_session.value full_table.flush() return full_table
Compute the raw row indices to load for each asset on a query for the given dates after applying a shift. Parameters ---------- start_idx : int Index of first date for which we want data. end_idx : int Index of last date for which we want data. assets : pandas.Int64Index Assets for which we want to compute row indices. Returns ------- A 3-tuple of (first_rows, last_rows, offsets): first_rows : np.array[intp] Array with length == len(assets) containing the index of the first row to load for each asset in `assets`. last_rows : np.array[intp] Array with length == len(assets) containing the index of the last row to load for each asset in `assets`. offset : np.array[intp] Array with length == len(assets) containing the index in a buffer of length `dates` corresponding to the first row of each asset. The value of offset[i] will be 0 if asset[i] existed at the start of a query. Otherwise, offset[i] will be equal to the number of entries in `dates` for which the asset did not yet exist. def _compute_slices(self, start_idx, end_idx, assets): """ Compute the raw row indices to load for each asset on a query for the given dates after applying a shift. Parameters ---------- start_idx : int Index of first date for which we want data. end_idx : int Index of last date for which we want data. assets : pandas.Int64Index Assets for which we want to compute row indices. Returns ------- A 3-tuple of (first_rows, last_rows, offsets): first_rows : np.array[intp] Array with length == len(assets) containing the index of the first row to load for each asset in `assets`. last_rows : np.array[intp] Array with length == len(assets) containing the index of the last row to load for each asset in `assets`. offset : np.array[intp] Array with length == len(assets) containing the index in a buffer of length `dates` corresponding to the first row of each asset. The value of offset[i] will be 0 if asset[i] existed at the start of a query. Otherwise, offset[i] will be equal to the number of entries in `dates` for which the asset did not yet exist. """ # The core implementation of the logic here is implemented in Cython # for efficiency. return _compute_row_slices( self._first_rows, self._last_rows, self._calendar_offsets, start_idx, end_idx, assets, )
Get the colname from daily_bar_table and read all of it into memory, caching the result. Parameters ---------- colname : string The name of an OHLCV carray in the daily_bar_table Returns ------- array (uint32) Full read array of the carray in the daily_bar_table with the given colname. def _spot_col(self, colname): """ Get the colname from daily_bar_table and read all of it into memory, caching the result. Parameters ---------- colname : string The name of an OHLCV carray in the daily_bar_table Returns ------- array (uint32) Full read array of the carray in the daily_bar_table with the given colname. """ try: col = self._spot_cols[colname] except KeyError: col = self._spot_cols[colname] = self._table[colname] return col
Parameters ---------- sid : int The asset identifier. day : datetime64-like Midnight of the day for which data is requested. Returns ------- int Index into the data tape for the given sid and day. Raises a NoDataOnDate exception if the given day and sid is before or after the date range of the equity. def sid_day_index(self, sid, day): """ Parameters ---------- sid : int The asset identifier. day : datetime64-like Midnight of the day for which data is requested. Returns ------- int Index into the data tape for the given sid and day. Raises a NoDataOnDate exception if the given day and sid is before or after the date range of the equity. """ try: day_loc = self.sessions.get_loc(day) except Exception: raise NoDataOnDate("day={0} is outside of calendar={1}".format( day, self.sessions)) offset = day_loc - self._calendar_offsets[sid] if offset < 0: raise NoDataBeforeDate( "No data on or before day={0} for sid={1}".format( day, sid)) ix = self._first_rows[sid] + offset if ix > self._last_rows[sid]: raise NoDataAfterDate( "No data on or after day={0} for sid={1}".format( day, sid)) return ix
Parameters ---------- sid : int The asset identifier. dt : datetime64-like Midnight of the day for which data is requested. field : string The price field. e.g. ('open', 'high', 'low', 'close', 'volume') Returns ------- float The spot price for field of the given sid on the given day. Raises a NoDataOnDate exception if the given day is before or after the date range of the equity for the given sid. Returns NaN if the day is within the date range, but the price is 0. def get_value(self, sid, dt, field): """ Parameters ---------- sid : int The asset identifier. dt : datetime64-like Midnight of the day for which data is requested. field : string The price field. e.g. ('open', 'high', 'low', 'close', 'volume') Returns ------- float The spot price for field of the given sid on the given day. Raises a NoDataOnDate exception if the given day is before or after the date range of the equity for the given sid. Returns NaN if the day is within the date range, but the price is 0. """ ix = self.sid_day_index(sid, dt) price = self._spot_col(field)[ix] if field != 'volume': if price == 0: return nan else: return price * 0.001 else: return price
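Reading a single bar back out of a table written as above; the reader class location and the on-disk path are assumptions here:

import pandas as pd
from zipline.data.us_equity_pricing import BcolzDailyBarReader  # location varies by version

reader = BcolzDailyBarReader('/tmp/daily_bars.bcolz')
day = pd.Timestamp('2015-06-01', tz='UTC')

close = reader.get_value(1, day, 'close')    # float price; NaN if the stored value was 0
volume = reader.get_value(1, day, 'volume')  # raw volume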
Construct and store a PipelineEngine from loader. If get_loader is None, constructs an ExplodingPipelineEngine def init_engine(self, get_loader): """ Construct and store a PipelineEngine from loader. If get_loader is None, constructs an ExplodingPipelineEngine """ if get_loader is not None: self.engine = SimplePipelineEngine( get_loader, self.asset_finder, self.default_pipeline_domain(self.trading_calendar), ) else: self.engine = ExplodingPipelineEngine()
Call self._initialize with `self` made available to Zipline API functions. def initialize(self, *args, **kwargs): """ Call self._initialize with `self` made available to Zipline API functions. """ with ZiplineAPI(self): self._initialize(self, *args, **kwargs)
If the clock property is not set, then create one based on frequency. def _create_clock(self): """ If the clock property is not set, then create one based on frequency. """ trading_o_and_c = self.trading_calendar.schedule.ix[ self.sim_params.sessions] market_closes = trading_o_and_c['market_close'] minutely_emission = False if self.sim_params.data_frequency == 'minute': market_opens = trading_o_and_c['market_open'] minutely_emission = self.sim_params.emission_rate == "minute" # The calendar's execution times are the minutes over which we # actually want to run the clock. Typically the execution times # simply adhere to the market open and close times. In the case of # the futures calendar, for example, we only want to simulate over # a subset of the full 24 hour calendar, so the execution times # dictate a market open time of 6:31am US/Eastern and a close of # 5:00pm US/Eastern. execution_opens = \ self.trading_calendar.execution_time_from_open(market_opens) execution_closes = \ self.trading_calendar.execution_time_from_close(market_closes) else: # in daily mode, we want to have one bar per session, timestamped # as the last minute of the session. execution_closes = \ self.trading_calendar.execution_time_from_close(market_closes) execution_opens = execution_closes # FIXME generalize these values before_trading_start_minutes = days_at_time( self.sim_params.sessions, time(8, 45), "US/Eastern" ) return MinuteSimulationClock( self.sim_params.sessions, execution_opens, execution_closes, before_trading_start_minutes, minute_emission=minutely_emission, )
Compute any pipelines attached with eager=True. def compute_eager_pipelines(self): """ Compute any pipelines attached with eager=True. """ for name, pipe in self._pipelines.items(): if pipe.eager: self.pipeline_output(name)
Run the algorithm. def run(self, data_portal=None): """Run the algorithm. """ # HACK: I don't think we really want to support passing a data portal # this late in the long term, but this is needed for now for backwards # compat downstream. if data_portal is not None: self.data_portal = data_portal self.asset_finder = data_portal.asset_finder elif self.data_portal is None: raise RuntimeError( "No data portal in TradingAlgorithm.run().\n" "Either pass a DataPortal to TradingAlgorithm() or to run()." ) else: assert self.asset_finder is not None, \ "Have data portal without asset_finder." # Create zipline and loop through simulated_trading. # Each iteration returns a perf dictionary try: perfs = [] for perf in self.get_generator(): perfs.append(perf) # convert perf dict to pandas dataframe daily_stats = self._create_daily_stats(perfs) self.analyze(daily_stats) finally: self.data_portal = None self.metrics_tracker = None return daily_stats
If there is a capital change for a given dt, this means the change occurs before `handle_data` on the given dt. In the case of the change being a target value, the change will be computed on the portfolio value according to prices at the given dt. `portfolio_value_adjustment`, if specified, will be removed from the portfolio_value of the cumulative performance when calculating deltas from target capital changes. def calculate_capital_changes(self, dt, emission_rate, is_interday, portfolio_value_adjustment=0.0): """ If there is a capital change for a given dt, this means the change occurs before `handle_data` on the given dt. In the case of the change being a target value, the change will be computed on the portfolio value according to prices at the given dt. `portfolio_value_adjustment`, if specified, will be removed from the portfolio_value of the cumulative performance when calculating deltas from target capital changes. """ try: capital_change = self.capital_changes[dt] except KeyError: return self._sync_last_sale_prices() if capital_change['type'] == 'target': target = capital_change['value'] capital_change_amount = ( target - ( self.portfolio.portfolio_value - portfolio_value_adjustment ) ) log.info('Processing capital change to target %s at %s. Capital ' 'change delta is %s' % (target, dt, capital_change_amount)) elif capital_change['type'] == 'delta': target = None capital_change_amount = capital_change['value'] log.info('Processing capital change of delta %s at %s' % (capital_change_amount, dt)) else: log.error("Capital change %s does not indicate a valid type " "('target' or 'delta')" % capital_change) return self.capital_change_deltas.update({dt: capital_change_amount}) self.metrics_tracker.capital_change(capital_change_amount) yield { 'capital_change': {'date': dt, 'type': 'cash', 'target': target, 'delta': capital_change_amount} }
Query the execution environment. Parameters ---------- field : {'platform', 'arena', 'data_frequency', 'start', 'end', 'capital_base', '*'} The field to query. The options have the following meanings: arena : str The arena from the simulation parameters. This will normally be ``'backtest'`` but some systems may use this to distinguish live trading from backtesting. data_frequency : {'daily', 'minute'} data_frequency tells the algorithm if it is running with daily data or minute data. start : datetime The start date for the simulation. end : datetime The end date for the simulation. capital_base : float The starting capital for the simulation. platform : str The platform that the code is running on. By default this will be the string 'zipline'. This can allow algorithms to know if they are running on the Quantopian platform instead. * : dict[str -> any] Returns all of the fields in a dictionary. Returns ------- val : any The value for the field queried. See above for more information. Raises ------ ValueError Raised when ``field`` is not a valid option. def get_environment(self, field='platform'): """Query the execution environment. Parameters ---------- field : {'platform', 'arena', 'data_frequency', 'start', 'end', 'capital_base', '*'} The field to query. The options have the following meanings: arena : str The arena from the simulation parameters. This will normally be ``'backtest'`` but some systems may use this to distinguish live trading from backtesting. data_frequency : {'daily', 'minute'} data_frequency tells the algorithm if it is running with daily data or minute data. start : datetime The start date for the simulation. end : datetime The end date for the simulation. capital_base : float The starting capital for the simulation. platform : str The platform that the code is running on. By default this will be the string 'zipline'. This can allow algorithms to know if they are running on the Quantopian platform instead. * : dict[str -> any] Returns all of the fields in a dictionary. Returns ------- val : any The value for the field queried. See above for more information. Raises ------ ValueError Raised when ``field`` is not a valid option. """ env = { 'arena': self.sim_params.arena, 'data_frequency': self.sim_params.data_frequency, 'start': self.sim_params.first_open, 'end': self.sim_params.last_close, 'capital_base': self.sim_params.capital_base, 'platform': self._platform } if field == '*': return env else: try: return env[field] except KeyError: raise ValueError( '%r is not a valid field for get_environment' % field, )
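Inside an algorithm the same lookup is exposed through ``zipline.api.get_environment``; a small usage sketch:

from zipline.api import get_environment

def initialize(context):
    # 'backtest' under the simulator; other platforms may report something else.
    context.arena = get_environment('arena')
    # Grab every field at once with the '*' form.
    context.env = get_environment('*')

def handle_data(context, data):
    if context.env['data_frequency'] == 'minute':
        pass  # minute-bar specific logic would go here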
Fetch a csv from a remote url and register the data so that it is queryable from the ``data`` object. Parameters ---------- url : str The url of the csv file to load. pre_func : callable[pd.DataFrame -> pd.DataFrame], optional A callback to allow preprocessing the raw data returned from fetch_csv before dates are paresed or symbols are mapped. post_func : callable[pd.DataFrame -> pd.DataFrame], optional A callback to allow postprocessing of the data after dates and symbols have been mapped. date_column : str, optional The name of the column in the preprocessed dataframe containing datetime information to map the data. date_format : str, optional The format of the dates in the ``date_column``. If not provided ``fetch_csv`` will attempt to infer the format. For information about the format of this string, see :func:`pandas.read_csv`. timezone : tzinfo or str, optional The timezone for the datetime in the ``date_column``. symbol : str, optional If the data is about a new asset or index then this string will be the name used to identify the values in ``data``. For example, one may use ``fetch_csv`` to load data for VIX, then this field could be the string ``'VIX'``. mask : bool, optional Drop any rows which cannot be symbol mapped. symbol_column : str If the data is attaching some new attribute to each asset then this argument is the name of the column in the preprocessed dataframe containing the symbols. This will be used along with the date information to map the sids in the asset finder. country_code : str, optional Country code to use to disambiguate symbol lookups. **kwargs Forwarded to :func:`pandas.read_csv`. Returns ------- csv_data_source : zipline.sources.requests_csv.PandasRequestsCSV A requests source that will pull data from the url specified. def fetch_csv(self, url, pre_func=None, post_func=None, date_column='date', date_format=None, timezone=pytz.utc.zone, symbol=None, mask=True, symbol_column=None, special_params_checker=None, country_code=None, **kwargs): """Fetch a csv from a remote url and register the data so that it is queryable from the ``data`` object. Parameters ---------- url : str The url of the csv file to load. pre_func : callable[pd.DataFrame -> pd.DataFrame], optional A callback to allow preprocessing the raw data returned from fetch_csv before dates are paresed or symbols are mapped. post_func : callable[pd.DataFrame -> pd.DataFrame], optional A callback to allow postprocessing of the data after dates and symbols have been mapped. date_column : str, optional The name of the column in the preprocessed dataframe containing datetime information to map the data. date_format : str, optional The format of the dates in the ``date_column``. If not provided ``fetch_csv`` will attempt to infer the format. For information about the format of this string, see :func:`pandas.read_csv`. timezone : tzinfo or str, optional The timezone for the datetime in the ``date_column``. symbol : str, optional If the data is about a new asset or index then this string will be the name used to identify the values in ``data``. For example, one may use ``fetch_csv`` to load data for VIX, then this field could be the string ``'VIX'``. mask : bool, optional Drop any rows which cannot be symbol mapped. symbol_column : str If the data is attaching some new attribute to each asset then this argument is the name of the column in the preprocessed dataframe containing the symbols. This will be used along with the date information to map the sids in the asset finder. 
country_code : str, optional Country code to use to disambiguate symbol lookups. **kwargs Forwarded to :func:`pandas.read_csv`. Returns ------- csv_data_source : zipline.sources.requests_csv.PandasRequestsCSV A requests source that will pull data from the url specified. """ if country_code is None: country_code = self.default_fetch_csv_country_code( self.trading_calendar, ) # Show all the logs every time fetcher is used. csv_data_source = PandasRequestsCSV( url, pre_func, post_func, self.asset_finder, self.trading_calendar.day, self.sim_params.start_session, self.sim_params.end_session, date_column, date_format, timezone, symbol, mask, symbol_column, data_frequency=self.data_frequency, country_code=country_code, special_params_checker=special_params_checker, **kwargs ) # ingest this into dataportal self.data_portal.handle_extra_source(csv_data_source.df, self.sim_params) return csv_data_source
Adds an event to the algorithm's EventManager. Parameters ---------- rule : EventRule The rule for when the callback should be triggered. callback : callable[(context, data) -> None] The function to execute when the rule is triggered. def add_event(self, rule, callback): """Adds an event to the algorithm's EventManager. Parameters ---------- rule : EventRule The rule for when the callback should be triggered. callback : callable[(context, data) -> None] The function to execute when the rule is triggered. """ self.event_manager.add_event( zipline.utils.events.Event(rule, callback), )
Schedules a function to be called according to some timed rules. Parameters ---------- func : callable[(context, data) -> None] The function to execute when the rule is triggered. date_rule : EventRule, optional The rule for the dates to execute this function. time_rule : EventRule, optional The rule for the times to execute this function. half_days : bool, optional Should this rule fire on half days? calendar : Sentinel, optional Calendar used to reconcile date and time rules. See Also -------- :class:`zipline.api.date_rules` :class:`zipline.api.time_rules` def schedule_function(self, func, date_rule=None, time_rule=None, half_days=True, calendar=None): """Schedules a function to be called according to some timed rules. Parameters ---------- func : callable[(context, data) -> None] The function to execute when the rule is triggered. date_rule : EventRule, optional The rule for the dates to execute this function. time_rule : EventRule, optional The rule for the times to execute this function. half_days : bool, optional Should this rule fire on half days? calendar : Sentinel, optional Calendar used to reconcile date and time rules. See Also -------- :class:`zipline.api.date_rules` :class:`zipline.api.time_rules` """ # When the user calls schedule_function(func, <time_rule>), assume that # the user meant to specify a time rule but no date rule, instead of # a date rule and no time rule as the signature suggests if isinstance(date_rule, (AfterOpen, BeforeClose)) and not time_rule: warnings.warn('Got a time rule for the second positional argument ' 'date_rule. You should use keyword argument ' 'time_rule= when calling schedule_function without ' 'specifying a date_rule', stacklevel=3) date_rule = date_rule or date_rules.every_day() time_rule = ((time_rule or time_rules.every_minute()) if self.sim_params.data_frequency == 'minute' else # If we are in daily mode the time_rule is ignored. time_rules.every_minute()) # Check the type of the algorithm's schedule before pulling calendar # Note that the ExchangeTradingSchedule is currently the only # TradingSchedule class, so this is unlikely to be hit if calendar is None: cal = self.trading_calendar elif calendar is calendars.US_EQUITIES: cal = get_calendar('XNYS') elif calendar is calendars.US_FUTURES: cal = get_calendar('us_futures') else: raise ScheduleFunctionInvalidCalendar( given_calendar=calendar, allowed_calendars=( '[calendars.US_EQUITIES, calendars.US_FUTURES]' ), ) self.add_event( make_eventrule(date_rule, time_rule, cal, half_days), func, )
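The canonical wiring of ``schedule_function`` with the rule factories looks like this; 'AAPL' is a placeholder symbol that must exist in the data bundle being used:

from zipline.api import (
    date_rules,
    order_target_percent,
    schedule_function,
    symbol,
    time_rules,
)

def initialize(context):
    # Rebalance every Monday, 30 minutes after the open, and skip half days.
    schedule_function(
        rebalance,
        date_rule=date_rules.week_start(),
        time_rule=time_rules.market_open(minutes=30),
        half_days=False,
    )

def rebalance(context, data):
    order_target_percent(symbol('AAPL'), 0.5)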
Create a specifier for a continuous contract. Parameters ---------- root_symbol_str : str The root symbol for the future chain. offset : int, optional The distance from the primary contract. Default is 0. roll : str, optional How rolls are determined. Default is 'volume'. adjustment : str, optional Method for adjusting lookback prices between rolls. Options are 'mul', 'add', and None. Default is 'mul'. Returns ------- continuous_future : ContinuousFuture The continuous future specifier. def continuous_future(self, root_symbol_str, offset=0, roll='volume', adjustment='mul'): """Create a specifier for a continuous contract. Parameters ---------- root_symbol_str : str The root symbol for the future chain. offset : int, optional The distance from the primary contract. Default is 0. roll : str, optional How rolls are determined. Default is 'volume'. adjustment : str, optional Method for adjusting lookback prices between rolls. Options are 'mul', 'add', and None. Default is 'mul'. Returns ------- continuous_future : ContinuousFuture The continuous future specifier. """ return self.asset_finder.create_continuous_future( root_symbol_str, offset, roll, adjustment, )
Lookup an Equity by its ticker symbol. Parameters ---------- symbol_str : str The ticker symbol for the equity to look up. country_code : str or None, optional A country to limit symbol searches to. Returns ------- equity : Equity The equity that held the ticker symbol on the current symbol lookup date. Raises ------ SymbolNotFound Raised when the symbol was not held on the current lookup date. See Also -------- :func:`zipline.api.set_symbol_lookup_date` def symbol(self, symbol_str, country_code=None): """Lookup an Equity by its ticker symbol. Parameters ---------- symbol_str : str The ticker symbol for the equity to look up. country_code : str or None, optional A country to limit symbol searches to. Returns ------- equity : Equity The equity that held the ticker symbol on the current symbol lookup date. Raises ------ SymbolNotFound Raised when the symbol was not held on the current lookup date. See Also -------- :func:`zipline.api.set_symbol_lookup_date` """ # If the user has not set the symbol lookup date, # use the end_session as the date for symbol->sid resolution. _lookup_date = self._symbol_lookup_date \ if self._symbol_lookup_date is not None \ else self.sim_params.end_session return self.asset_finder.lookup_symbol( symbol_str, as_of_date=_lookup_date, country_code=country_code, )
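A typical use pins the lookup date first so that reused tickers resolve deterministically. A sketch, assuming 'AAPL' exists in the loaded data bundle:

from zipline.api import set_symbol_lookup_date, symbol

def initialize(context):
    # Resolve the ticker as of a fixed date to disambiguate reused symbols.
    set_symbol_lookup_date('2015-01-02')
    context.asset = symbol('AAPL')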
Lookup multiple Equities as a list. Parameters ---------- *args : iterable[str] The ticker symbols to look up. country_code : str or None, optional A country to limit symbol searches to. Returns ------- equities : list[Equity] The equities that held the given ticker symbols on the current symbol lookup date. Raises ------ SymbolNotFound Raised when one of the symbols was not held on the current lookup date. See Also -------- :func:`zipline.api.set_symbol_lookup_date` def symbols(self, *args, **kwargs): """Lookup multiple Equities as a list. Parameters ---------- *args : iterable[str] The ticker symbols to look up. country_code : str or None, optional A country to limit symbol searches to. Returns ------- equities : list[Equity] The equities that held the given ticker symbols on the current symbol lookup date. Raises ------ SymbolNotFound Raised when one of the symbols was not held on the current lookup date. See Also -------- :func:`zipline.api.set_symbol_lookup_date` """ return [self.symbol(identifier, **kwargs) for identifier in args]
Calculates how many shares/contracts to order based on the type of asset being ordered. def _calculate_order_value_amount(self, asset, value): """ Calculates how many shares/contracts to order based on the type of asset being ordered. """ # Make sure the asset exists, and that there is a last price for it. # FIXME: we should use BarData's can_trade logic here, but I haven't # yet found a good way to do that. normalized_date = normalize_date(self.datetime) if normalized_date < asset.start_date: raise CannotOrderDelistedAsset( msg="Cannot order {0}, as it started trading on" " {1}.".format(asset.symbol, asset.start_date) ) elif normalized_date > asset.end_date: raise CannotOrderDelistedAsset( msg="Cannot order {0}, as it stopped trading on" " {1}.".format(asset.symbol, asset.end_date) ) else: last_price = \ self.trading_client.current_data.current(asset, "price") if np.isnan(last_price): raise CannotOrderDelistedAsset( msg="Cannot order {0} on {1} as there is no last " "price for the security.".format(asset.symbol, self.datetime) ) if tolerant_equals(last_price, 0): zero_message = "Price of 0 for {psid}; can't infer value".format( psid=asset ) if self.logger: self.logger.debug(zero_message) # Don't place any order return 0 value_multiplier = asset.price_multiplier return value / (last_price * value_multiplier)
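The final line is the whole calculation: the implied order size is the target value divided by the last price scaled by the asset's price multiplier (1 for equities, the contract multiplier for futures). A worked sketch with hypothetical numbers:

# Hypothetical future: last price of $25.00, price multiplier of 1000.
value = 50000.0            # target exposure in dollars
last_price = 25.0
price_multiplier = 1000.0

amount = value / (last_price * price_multiplier)   # 2.0 contracts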
Place an order. Parameters ---------- asset : Asset The asset that this order is for. amount : int The amount of shares to order. If ``amount`` is positive, this is the number of shares to buy or cover. If ``amount`` is negative, this is the number of shares to sell or short. limit_price : float, optional The limit price for the order. stop_price : float, optional The stop price for the order. style : ExecutionStyle, optional The execution style for the order. Returns ------- order_id : str or None The unique identifier for this order, or None if no order was placed. Notes ----- The ``limit_price`` and ``stop_price`` arguments provide shorthands for passing common execution styles. Passing ``limit_price=N`` is equivalent to ``style=LimitOrder(N)``. Similarly, passing ``stop_price=M`` is equivalent to ``style=StopOrder(M)``, and passing ``limit_price=N`` and ``stop_price=M`` is equivalent to ``style=StopLimitOrder(N, M)``. It is an error to pass both a ``style`` and ``limit_price`` or ``stop_price``. See Also -------- :class:`zipline.finance.execution.ExecutionStyle` :func:`zipline.api.order_value` :func:`zipline.api.order_percent` def order(self, asset, amount, limit_price=None, stop_price=None, style=None): """Place an order. Parameters ---------- asset : Asset The asset that this order is for. amount : int The amount of shares to order. If ``amount`` is positive, this is the number of shares to buy or cover. If ``amount`` is negative, this is the number of shares to sell or short. limit_price : float, optional The limit price for the order. stop_price : float, optional The stop price for the order. style : ExecutionStyle, optional The execution style for the order. Returns ------- order_id : str or None The unique identifier for this order, or None if no order was placed. Notes ----- The ``limit_price`` and ``stop_price`` arguments provide shorthands for passing common execution styles. Passing ``limit_price=N`` is equivalent to ``style=LimitOrder(N)``. Similarly, passing ``stop_price=M`` is equivalent to ``style=StopOrder(M)``, and passing ``limit_price=N`` and ``stop_price=M`` is equivalent to ``style=StopLimitOrder(N, M)``. It is an error to pass both a ``style`` and ``limit_price`` or ``stop_price``. See Also -------- :class:`zipline.finance.execution.ExecutionStyle` :func:`zipline.api.order_value` :func:`zipline.api.order_percent` """ if not self._can_order_asset(asset): return None amount, style = self._calculate_order(asset, amount, limit_price, stop_price, style) return self.blotter.order(asset, amount, style)
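To make the shorthand equivalences concrete, either of the calls below places a limit order for 100 shares at $10.00 (a sketch assuming context.asset was resolved in initialize):

from zipline.api import order
from zipline.finance.execution import LimitOrder

def handle_data(context, data):
    # Shorthand form: limit_price is translated into a LimitOrder style.
    order(context.asset, 100, limit_price=10.0)
    # Explicit form: pass the execution style directly.
    order(context.asset, 100, style=LimitOrder(10.0))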
Helper method for validating parameters to the order API function. Raises an UnsupportedOrderParameters if invalid arguments are found. def validate_order_params(self, asset, amount, limit_price, stop_price, style): """ Helper method for validating parameters to the order API function. Raises an UnsupportedOrderParameters if invalid arguments are found. """ if not self.initialized: raise OrderDuringInitialize( msg="order() can only be called from within handle_data()" ) if style: if limit_price: raise UnsupportedOrderParameters( msg="Passing both limit_price and style is not supported." ) if stop_price: raise UnsupportedOrderParameters( msg="Passing both stop_price and style is not supported." ) for control in self.trading_controls: control.validate(asset, amount, self.portfolio, self.get_datetime(), self.trading_client.current_data)
Helper method for converting deprecated limit_price and stop_price arguments into ExecutionStyle instances. This function assumes that either style == None or (limit_price, stop_price) == (None, None). def __convert_order_params_for_blotter(asset, limit_price, stop_price, style): """ Helper method for converting deprecated limit_price and stop_price arguments into ExecutionStyle instances. This function assumes that either style == None or (limit_price, stop_price) == (None, None). """ if style: assert (limit_price, stop_price) == (None, None) return style if limit_price and stop_price: return StopLimitOrder(limit_price, stop_price, asset=asset) if limit_price: return LimitOrder(limit_price, asset=asset) if stop_price: return StopOrder(stop_price, asset=asset) else: return MarketOrder()
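In effect, the conversion maps each combination of the deprecated price arguments to one of the public execution styles. The helper below is only an illustration that mirrors that mapping, not the method's literal implementation:

from zipline.finance.execution import (
    LimitOrder, MarketOrder, StopLimitOrder, StopOrder,
)

def equivalent_style(limit_price=None, stop_price=None):
    # limit + stop  -> StopLimitOrder; limit only -> LimitOrder;
    # stop only     -> StopOrder;      neither    -> MarketOrder.
    if limit_price and stop_price:
        return StopLimitOrder(limit_price, stop_price)
    if limit_price:
        return LimitOrder(limit_price)
    if stop_price:
        return StopOrder(stop_price)
    return MarketOrder()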
Place an order by desired value rather than desired number of shares. Parameters ---------- asset : Asset The asset that this order is for. value : float If the requested asset exists, the requested value is divided by its price to imply the number of shares to transact. If the Asset being ordered is a Future, the 'value' calculated is actually the exposure, as Futures have no 'value'. value > 0 :: Buy/Cover value < 0 :: Sell/Short limit_price : float, optional The limit price for the order. stop_price : float, optional The stop price for the order. style : ExecutionStyle The execution style for the order. Returns ------- order_id : str The unique identifier for this order. Notes ----- See :func:`zipline.api.order` for more information about ``limit_price``, ``stop_price``, and ``style`` See Also -------- :class:`zipline.finance.execution.ExecutionStyle` :func:`zipline.api.order` :func:`zipline.api.order_percent` def order_value(self, asset, value, limit_price=None, stop_price=None, style=None): """Place an order by desired value rather than desired number of shares. Parameters ---------- asset : Asset The asset that this order is for. value : float If the requested asset exists, the requested value is divided by its price to imply the number of shares to transact. If the Asset being ordered is a Future, the 'value' calculated is actually the exposure, as Futures have no 'value'. value > 0 :: Buy/Cover value < 0 :: Sell/Short limit_price : float, optional The limit price for the order. stop_price : float, optional The stop price for the order. style : ExecutionStyle The execution style for the order. Returns ------- order_id : str The unique identifier for this order. Notes ----- See :func:`zipline.api.order` for more information about ``limit_price``, ``stop_price``, and ``style`` See Also -------- :class:`zipline.finance.execution.ExecutionStyle` :func:`zipline.api.order` :func:`zipline.api.order_percent` """ if not self._can_order_asset(asset): return None amount = self._calculate_order_value_amount(asset, value) return self.order(asset, amount, limit_price=limit_price, stop_price=stop_price, style=style)
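For instance, to target roughly $10,000 of exposure in a previously resolved asset (a sketch; the share count is implied by the current price as described above):

from zipline.api import order_value

def handle_data(context, data):
    # A positive value buys/covers; a negative value would sell/short.
    order_value(context.asset, 10000)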
Sync the last sale prices on the metrics tracker to a given datetime. Parameters ---------- dt : datetime, optional The time to sync the prices to. If not provided, the algorithm's current datetime is used. Notes ----- This call is cached by the datetime. Repeated calls in the same bar are cheap. def _sync_last_sale_prices(self, dt=None): """Sync the last sale prices on the metrics tracker to a given datetime. Parameters ---------- dt : datetime, optional The time to sync the prices to. If not provided, the algorithm's current datetime is used. Notes ----- This call is cached by the datetime. Repeated calls in the same bar are cheap. """ if dt is None: dt = self.datetime if dt != self._last_sync_time: self.metrics_tracker.sync_last_sale_prices( dt, self.data_portal, ) self._last_sync_time = dt
Callback triggered by the simulation loop whenever the current dt changes. Any logic that should happen exactly once at the start of each datetime group should happen here. def on_dt_changed(self, dt): """ Callback triggered by the simulation loop whenever the current dt changes. Any logic that should happen exactly once at the start of each datetime group should happen here. """ self.datetime = dt self.blotter.set_date(dt)
Returns the current simulation datetime. Parameters ---------- tz : tzinfo or str, optional The timezone to return the datetime in. This defaults to utc. Returns ------- dt : datetime The current simulation datetime converted to ``tz``. def get_datetime(self, tz=None): """ Returns the current simulation datetime. Parameters ---------- tz : tzinfo or str, optional The timezone to return the datetime in. This defaults to utc. Returns ------- dt : datetime The current simulation datetime converted to ``tz``. """ dt = self.datetime assert dt.tzinfo == pytz.utc, "Algorithm should have a utc datetime" if tz is not None: dt = dt.astimezone(tz) return dt
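For example, an algorithm can read the simulation clock in UTC or convert it on the fly (a sketch using the module-level API function):

from zipline.api import get_datetime

def handle_data(context, data):
    now_utc = get_datetime()                     # tz-aware UTC timestamp
    now_eastern = get_datetime(tz='US/Eastern')  # converted copy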
Set the slippage models for the simulation. Parameters ---------- us_equities : EquitySlippageModel The slippage model to use for trading US equities. us_futures : FutureSlippageModel The slippage model to use for trading US futures. See Also -------- :class:`zipline.finance.slippage.SlippageModel` def set_slippage(self, us_equities=None, us_futures=None): """Set the slippage models for the simulation. Parameters ---------- us_equities : EquitySlippageModel The slippage model to use for trading US equities. us_futures : FutureSlippageModel The slippage model to use for trading US futures. See Also -------- :class:`zipline.finance.slippage.SlippageModel` """ if self.initialized: raise SetSlippagePostInit() if us_equities is not None: if Equity not in us_equities.allowed_asset_types: raise IncompatibleSlippageModel( asset_type='equities', given_model=us_equities, supported_asset_types=us_equities.allowed_asset_types, ) self.blotter.slippage_models[Equity] = us_equities if us_futures is not None: if Future not in us_futures.allowed_asset_types: raise IncompatibleSlippageModel( asset_type='futures', given_model=us_futures, supported_asset_types=us_futures.allowed_asset_types, ) self.blotter.slippage_models[Future] = us_futures
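A minimal sketch of overriding the equity slippage model from initialize (the parameter values are arbitrary examples):

from zipline.api import set_slippage
from zipline.finance import slippage

def initialize(context):
    # Cap fills at 2.5% of bar volume with a 0.1 price-impact constant.
    set_slippage(
        us_equities=slippage.VolumeShareSlippage(volume_limit=0.025,
                                                 price_impact=0.1),
    )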
Sets the commission models for the simulation. Parameters ---------- us_equities : EquityCommissionModel The commission model to use for trading US equities. us_futures : FutureCommissionModel The commission model to use for trading US futures. See Also -------- :class:`zipline.finance.commission.PerShare` :class:`zipline.finance.commission.PerTrade` :class:`zipline.finance.commission.PerDollar` def set_commission(self, us_equities=None, us_futures=None): """Sets the commission models for the simulation. Parameters ---------- us_equities : EquityCommissionModel The commission model to use for trading US equities. us_futures : FutureCommissionModel The commission model to use for trading US futures. See Also -------- :class:`zipline.finance.commission.PerShare` :class:`zipline.finance.commission.PerTrade` :class:`zipline.finance.commission.PerDollar` """ if self.initialized: raise SetCommissionPostInit() if us_equities is not None: if Equity not in us_equities.allowed_asset_types: raise IncompatibleCommissionModel( asset_type='equities', given_model=us_equities, supported_asset_types=us_equities.allowed_asset_types, ) self.blotter.commission_models[Equity] = us_equities if us_futures is not None: if Future not in us_futures.allowed_asset_types: raise IncompatibleCommissionModel( asset_type='futures', given_model=us_futures, supported_asset_types=us_futures.allowed_asset_types, ) self.blotter.commission_models[Future] = us_futures
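A minimal sketch, again from initialize, with illustrative costs:

from zipline.api import set_commission
from zipline.finance import commission

def initialize(context):
    # Charge $0.001 per share, with a $1.00 minimum per trade.
    set_commission(
        us_equities=commission.PerShare(cost=0.001, min_trade_cost=1.00),
    )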
Sets the order cancellation policy for the simulation. Parameters ---------- cancel_policy : CancelPolicy The cancellation policy to use. See Also -------- :class:`zipline.api.EODCancel` :class:`zipline.api.NeverCancel` def set_cancel_policy(self, cancel_policy): """Sets the order cancellation policy for the simulation. Parameters ---------- cancel_policy : CancelPolicy The cancellation policy to use. See Also -------- :class:`zipline.api.EODCancel` :class:`zipline.api.NeverCancel` """ if not isinstance(cancel_policy, CancelPolicy): raise UnsupportedCancelPolicy() if self.initialized: raise SetCancelPolicyPostInit() self.blotter.cancel_policy = cancel_policy
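For example, to cancel any orders still open at the end of each trading day (a sketch using the EODCancel policy exposed in the API):

from zipline.api import EODCancel, set_cancel_policy

def initialize(context):
    # Orders that remain open at the close are cancelled automatically.
    set_cancel_policy(EODCancel())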