| text_prompt (string, 168-30.3k chars) | code_prompt (string, 67-124k chars) |
|---|---|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Setup Base MODFLOW Grid
Step2: Create the Gridgen Object
Step3: Add an Optional Active Domain
Step4: Refine the Grid
Step5: Plot the Gridgen Input
Step6: Build the Grid
Step7: Plot the Grid
Step8: Create a Flopy ModflowDisu Object
Step9: Intersect Features with the Grid
Step10: Plot Intersected Features
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import os
import sys
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import flopy
from flopy.utils.gridgen import Gridgen
print(sys.version)
print('numpy version: {}'.format(np.__version__))
print('matplotlib version: {}'.format(mpl.__version__))
print('flopy version: {}'.format(flopy.__version__))
Lx = 100.
Ly = 100.
nlay = 2
nrow = 51
ncol = 51
delr = Lx / ncol
delc = Ly / nrow
h0 = 10
h1 = 5
top = h0
botm = np.zeros((nlay, nrow, ncol), dtype=np.float32)
botm[1, :, :] = -10.
ms = flopy.modflow.Modflow(rotation=-20.)
dis = flopy.modflow.ModflowDis(ms, nlay=nlay, nrow=nrow, ncol=ncol, delr=delr,
delc=delc, top=top, botm=botm)
model_ws = os.path.join('.', 'data')
g = Gridgen(dis, model_ws=model_ws)
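# Note: flopy's Gridgen utility wraps the external USGS GRIDGEN program; it
# writes its input files into model_ws and typically expects a gridgen
# executable to be available (via an exe_name argument) when build() is called.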
# setup the active domain
adshp = os.path.join(model_ws, 'ad0')
adpoly = [[[(0, 0), (0, 60), (40, 80), (60, 0), (0, 0)]]]
# g.add_active_domain(adpoly, range(nlay))
x = Lx * np.random.random(10)
y = Ly * np.random.random(10)
wells = list(zip(x, y))
g.add_refinement_features(wells, 'point', 3, range(nlay))
rf0shp = os.path.join(model_ws, 'rf0')
river = [[[(-20, 10), (60, 60)]]]
g.add_refinement_features(river, 'line', 3, range(nlay))
rf1shp = os.path.join(model_ws, 'rf1')
g.add_refinement_features(adpoly, 'polygon', 1, range(nlay))
rf2shp = os.path.join(model_ws, 'rf2')
fig = plt.figure(figsize=(15, 15))
ax = fig.add_subplot(1, 1, 1, aspect='equal')
mm = flopy.plot.ModelMap(model=ms)
mm.plot_grid()
flopy.plot.plot_shapefile(rf2shp, ax=ax, facecolor='yellow', edgecolor='none')
flopy.plot.plot_shapefile(rf1shp, ax=ax, linewidth=10)
flopy.plot.plot_shapefile(rf0shp, ax=ax, facecolor='red', radius=1)
g.build(verbose=False)
fig = plt.figure(figsize=(15, 15))
ax = fig.add_subplot(1, 1, 1, aspect='equal')
g.plot(ax, linewidth=0.5)
flopy.plot.plot_shapefile(rf2shp, ax=ax, facecolor='yellow', edgecolor='none', alpha=0.2)
flopy.plot.plot_shapefile(rf1shp, ax=ax, linewidth=10, alpha=0.2)
flopy.plot.plot_shapefile(rf0shp, ax=ax, facecolor='red', radius=1, alpha=0.2)
mu = flopy.modflow.Modflow(model_ws=model_ws, modelname='mfusg')
disu = g.get_disu(mu)
disu.write_file()
# print(disu)
adpoly_intersect = g.intersect(adpoly, 'polygon', 0)
print(adpoly_intersect.dtype.names)
print(adpoly_intersect)
print(adpoly_intersect.nodenumber)
well_intersect = g.intersect(wells, 'point', 0)
print(well_intersect.dtype.names)
print(well_intersect)
print(well_intersect.nodenumber)
river_intersect = g.intersect(river, 'line', 0)
print(river_intersect.dtype.names)
# print(river_intersect)
# print(river_intersect.nodenumber)
a = np.zeros((g.nodes), dtype=np.int)
a[adpoly_intersect.nodenumber] = 1
a[well_intersect.nodenumber] = 2
a[river_intersect.nodenumber] = 3
fig = plt.figure(figsize=(15, 15))
ax = fig.add_subplot(1, 1, 1, aspect='equal')
g.plot(ax, a=a, masked_values=[0], edgecolor='none', cmap='jet')
flopy.plot.plot_shapefile(rf2shp, ax=ax, facecolor='yellow', alpha=0.25)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import scipy.stats
amean = -0.0896
avar = 0.954
anobs = 40
bmean = 0.719
bvar = 11.87
bnobs = 50
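# Welch's two-sample t-test (equal_var=False), computed from summary statistics:
# the means, standard deviations (sqrt of the variances) and sample sizes.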
_, p_value = scipy.stats.ttest_ind_from_stats(amean, np.sqrt(avar), anobs, bmean, np.sqrt(bvar), bnobs, equal_var=False)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This notebook is quite big and complex,
Step2: We'll print some of the search configuration options along the way to keep track of them.
Step3: We already created an OWSLib.fes filter before.
Step4: In the cell below we ask the catalog for all the returns that match the filter and have an OPeNDAP endpoint.
Step5: We found some models, and observations from NERACOOS there.
Step6: Now we can use pyoos collectors for NdbcSos,
Step7: and CoopsSos.
Step8: We will join all the observations into a uniform series, interpolated to a 1-hour interval, for the model-data comparison.
Step9: In this next cell we will save the data for quicker access later.
Step10: Taking a quick look at the observations
Step11: Now it is time to loop the models we found above,
Step12: Next, we will match them with the nearest observed time-series. The max_dist=0.08 is in degrees, which is roughly 8 kilometers.
Step13: Now it is possible to compute some simple comparison metrics. First we'll calculate the model mean bias
Step14: And the root mean squared error of the deviations from the mean
Step15: The next 2 cells make the scores "pretty" for plotting.
Step16: The cells from [20] to [25] create a folium map with bokeh for the time-series at the observed points.
Step17: Here we use a dictionary with some models we expect to find so we can create a better legend for the plots. If any new models are found, we will use its filename in the legend as a default until we can go back and add a short name to our library.
|
<ASSISTANT_TASK:>
Python Code:
import warnings
# Suppressing warnings for a "pretty output."
warnings.simplefilter("ignore")
%%writefile config.yaml
# Specify a YYYY-MM-DD hh:mm:ss date or integer day offset.
# If both start and stop are offsets they will be computed relative to datetime.today() at midnight.
# Use the dates commented below to reproduce the last Boston Light Swim event forecast.
date:
start: -5 # 2016-8-16 00:00:00
stop: +4 # 2016-8-29 00:00:00
run_name: 'latest'
# Boston harbor.
region:
bbox: [-71.3, 42.03, -70.57, 42.63]
# Try the bounding box below to see how the notebook will behave for a different region.
#bbox: [-74.5, 40, -72., 41.5]
crs: 'urn:ogc:def:crs:OGC:1.3:CRS84'
sos_name: 'sea_water_temperature'
cf_names:
- sea_water_temperature
- sea_surface_temperature
- sea_water_potential_temperature
- equivalent_potential_temperature
- sea_water_conservative_temperature
- pseudo_equivalent_potential_temperature
units: 'celsius'
catalogs:
- https://data.ioos.us/csw
import os
import shutil
from datetime import datetime
from ioos_tools.ioos import parse_config
config = parse_config("config.yaml")
# Saves downloaded data into a temporary directory.
save_dir = os.path.abspath(config["run_name"])
if os.path.exists(save_dir):
shutil.rmtree(save_dir)
os.makedirs(save_dir)
fmt = "{:*^64}".format
print(fmt("Saving data inside directory {}".format(save_dir)))
print(fmt(" Run information "))
print("Run date: {:%Y-%m-%d %H:%M:%S}".format(datetime.utcnow()))
print("Start: {:%Y-%m-%d %H:%M:%S}".format(config["date"]["start"]))
print("Stop: {:%Y-%m-%d %H:%M:%S}".format(config["date"]["stop"]))
print(
"Bounding box: {0:3.2f}, {1:3.2f},"
"{2:3.2f}, {3:3.2f}".format(*config["region"]["bbox"])
)
def make_filter(config):
from owslib import fes
from ioos_tools.ioos import fes_date_filter
kw = dict(
wildCard="*", escapeChar="\\", singleChar="?", propertyname="apiso:AnyText"
)
or_filt = fes.Or(
[fes.PropertyIsLike(literal=("*%s*" % val), **kw) for val in config["cf_names"]]
)
not_filt = fes.Not([fes.PropertyIsLike(literal="GRIB-2", **kw)])
begin, end = fes_date_filter(config["date"]["start"], config["date"]["stop"])
bbox_crs = fes.BBox(config["region"]["bbox"], crs=config["region"]["crs"])
filter_list = [fes.And([bbox_crs, begin, end, or_filt, not_filt])]
return filter_list
filter_list = make_filter(config)
from ioos_tools.ioos import get_csw_records, service_urls
from owslib.csw import CatalogueServiceWeb
dap_urls = []
print(fmt(" Catalog information "))
for endpoint in config["catalogs"]:
print("URL: {}".format(endpoint))
try:
csw = CatalogueServiceWeb(endpoint, timeout=120)
except Exception as e:
print("{}".format(e))
continue
csw = get_csw_records(csw, filter_list, esn="full")
OPeNDAP = service_urls(csw.records, identifier="OPeNDAP:OPeNDAP")
odp = service_urls(
csw.records, identifier="urn:x-esri:specification:ServiceType:odp:url"
)
dap = OPeNDAP + odp
dap_urls.extend(dap)
print("Number of datasets available: {}".format(len(csw.records.keys())))
for rec, item in csw.records.items():
print("{}".format(item.title))
if dap:
print(fmt(" DAP "))
for url in dap:
print("{}.html".format(url))
print("\n")
# Get only unique endpoints.
dap_urls = list(set(dap_urls))
from ioos_tools.ioos import is_station
from timeout_decorator import TimeoutError
# Filter out some station endpoints.
non_stations = []
for url in dap_urls:
url = f"{url}#fillmismatch"
try:
if not is_station(url):
non_stations.append(url)
except (IOError, OSError, RuntimeError, TimeoutError) as e:
print("Could not access URL {}.html\n{!r}".format(url, e))
dap_urls = non_stations
print(fmt(" Filtered DAP "))
for url in dap_urls:
print("{}.html".format(url))
from pyoos.collectors.ndbc.ndbc_sos import NdbcSos
collector_ndbc = NdbcSos()
collector_ndbc.set_bbox(config["region"]["bbox"])
collector_ndbc.end_time = config["date"]["stop"]
collector_ndbc.start_time = config["date"]["start"]
collector_ndbc.variables = [config["sos_name"]]
ofrs = collector_ndbc.server.offerings
title = collector_ndbc.server.identification.title
print(fmt(" NDBC Collector offerings "))
print("{}: {} offerings".format(title, len(ofrs)))
import pandas as pd
from ioos_tools.ioos import collector2table
ndbc = collector2table(
collector=collector_ndbc, config=config, col="sea_water_temperature (C)"
)
if ndbc:
data = dict(
station_name=[s._metadata.get("station_name") for s in ndbc],
station_code=[s._metadata.get("station_code") for s in ndbc],
sensor=[s._metadata.get("sensor") for s in ndbc],
lon=[s._metadata.get("lon") for s in ndbc],
lat=[s._metadata.get("lat") for s in ndbc],
depth=[s._metadata.get("depth") for s in ndbc],
)
table = pd.DataFrame(data).set_index("station_code")
table
from pyoos.collectors.coops.coops_sos import CoopsSos
collector_coops = CoopsSos()
collector_coops.set_bbox(config["region"]["bbox"])
collector_coops.end_time = config["date"]["stop"]
collector_coops.start_time = config["date"]["start"]
collector_coops.variables = [config["sos_name"]]
ofrs = collector_coops.server.offerings
title = collector_coops.server.identification.title
print(fmt(" Collector offerings "))
print("{}: {} offerings".format(title, len(ofrs)))
coops = collector2table(
collector=collector_coops, config=config, col="sea_water_temperature (C)"
)
if coops:
data = dict(
station_name=[s._metadata.get("station_name") for s in coops],
station_code=[s._metadata.get("station_code") for s in coops],
sensor=[s._metadata.get("sensor") for s in coops],
lon=[s._metadata.get("lon") for s in coops],
lat=[s._metadata.get("lat") for s in coops],
depth=[s._metadata.get("depth") for s in coops],
)
table = pd.DataFrame(data).set_index("station_code")
table
data = ndbc + coops
index = pd.date_range(
start=config["date"]["start"].replace(tzinfo=None),
end=config["date"]["stop"].replace(tzinfo=None),
freq="1H",
)
# Preserve metadata with `reindex`.
observations = []
for series in data:
_metadata = series._metadata
series.index = series.index.tz_localize(None)
obs = series.reindex(index=index, limit=1, method="nearest")
obs._metadata = _metadata
observations.append(obs)
import iris
from ioos_tools.tardis import series2cube
attr = dict(
featureType="timeSeries",
Conventions="CF-1.6",
standard_name_vocabulary="CF-1.6",
cdm_data_type="Station",
comment="Data from http://opendap.co-ops.nos.noaa.gov",
)
cubes = iris.cube.CubeList([series2cube(obs, attr=attr) for obs in observations])
outfile = os.path.join(save_dir, "OBS_DATA.nc")
iris.save(cubes, outfile)
%matplotlib inline
ax = pd.concat(data).plot(figsize=(11, 2.25))
from ioos_tools.ioos import get_model_name
from ioos_tools.tardis import get_surface, is_model, proc_cube, quick_load_cubes
from iris.exceptions import ConstraintMismatchError, CoordinateNotFoundError, MergeError
print(fmt(" Models "))
cubes = dict()
for k, url in enumerate(dap_urls):
print("\n[Reading url {}/{}]: {}".format(k + 1, len(dap_urls), url))
try:
cube = quick_load_cubes(url, config["cf_names"], callback=None, strict=True)
if is_model(cube):
cube = proc_cube(
cube,
bbox=config["region"]["bbox"],
time=(config["date"]["start"], config["date"]["stop"]),
units=config["units"],
)
else:
print("[Not model data]: {}".format(url))
continue
cube = get_surface(cube)
mod_name = get_model_name(url)
cubes.update({mod_name: cube})
except (
RuntimeError,
ValueError,
ConstraintMismatchError,
CoordinateNotFoundError,
IndexError,
) as e:
print("Cannot get cube for: {}\n{}".format(url, e))
import iris
from ioos_tools.tardis import (
add_station,
ensure_timeseries,
get_nearest_water,
make_tree,
remove_ssh,
)
from iris.pandas import as_series
for mod_name, cube in cubes.items():
fname = "{}.nc".format(mod_name)
fname = os.path.join(save_dir, fname)
print(fmt(" Downloading to file {} ".format(fname)))
try:
tree, lon, lat = make_tree(cube)
except CoordinateNotFoundError:
print("Cannot make KDTree for: {}".format(mod_name))
continue
# Get model series at observed locations.
raw_series = dict()
for obs in observations:
obs = obs._metadata
station = obs["station_code"]
try:
kw = dict(k=10, max_dist=0.08, min_var=0.01)
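            # max_dist is in degrees of longitude/latitude; 0.08 degrees is
            # roughly 8-9 km (one degree of latitude is about 111 km).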
args = cube, tree, obs["lon"], obs["lat"]
try:
series, dist, idx = get_nearest_water(*args, **kw)
except RuntimeError as e:
print("Cannot download {!r}.\n{}".format(cube, e))
series = None
except ValueError:
status = "No Data"
print("[{}] {}".format(status, obs["station_name"]))
continue
if not series:
status = "Land "
else:
raw_series.update({station: series})
series = as_series(series)
status = "Water "
print("[{}] {}".format(status, obs["station_name"]))
if raw_series: # Save cube.
for station, cube in raw_series.items():
cube = add_station(cube, station)
cube = remove_ssh(cube)
try:
cube = iris.cube.CubeList(raw_series.values()).merge_cube()
except MergeError as e:
print(e)
ensure_timeseries(cube)
try:
iris.save(cube, fname)
except AttributeError:
# FIXME: we should patch the bad attribute instead of removing everything.
cube.attributes = {}
iris.save(cube, fname)
del cube
print("Finished processing [{}]".format(mod_name))
from ioos_tools.ioos import stations_keys
def rename_cols(df, config):
cols = stations_keys(config, key="station_name")
return df.rename(columns=cols)
from ioos_tools.ioos import load_ncs
from ioos_tools.skill_score import apply_skill, mean_bias
dfs = load_ncs(config)
df = apply_skill(dfs, mean_bias, remove_mean=False, filter_tides=False)
skill_score = dict(mean_bias=df.to_dict())
# Filter out stations with no valid comparison.
df.dropna(how="all", axis=1, inplace=True)
df = df.applymap("{:.2f}".format).replace("nan", "--")
from ioos_tools.skill_score import rmse
dfs = load_ncs(config)
df = apply_skill(dfs, rmse, remove_mean=True, filter_tides=False)
skill_score["rmse"] = df.to_dict()
# Filter out stations with no valid comparison.
df.dropna(how="all", axis=1, inplace=True)
df = df.applymap("{:.2f}".format).replace("nan", "--")
import pandas as pd
# Stringify keys.
for key in skill_score.keys():
skill_score[key] = {str(k): v for k, v in skill_score[key].items()}
mean_bias = pd.DataFrame.from_dict(skill_score["mean_bias"])
mean_bias = mean_bias.applymap("{:.2f}".format).replace("nan", "--")
skill_score = pd.DataFrame.from_dict(skill_score["rmse"])
skill_score = skill_score.applymap("{:.2f}".format).replace("nan", "--")
import folium
from ioos_tools.ioos import get_coordinates
def make_map(bbox, **kw):
line = kw.pop("line", True)
layers = kw.pop("layers", True)
zoom_start = kw.pop("zoom_start", 5)
lon = (bbox[0] + bbox[2]) / 2
lat = (bbox[1] + bbox[3]) / 2
m = folium.Map(
width="100%", height="100%", location=[lat, lon], zoom_start=zoom_start
)
if layers:
url = "http://oos.soest.hawaii.edu/thredds/wms/hioos/satellite/dhw_5km"
w = folium.WmsTileLayer(
url,
name="Sea Surface Temperature",
fmt="image/png",
layers="CRW_SST",
attr="PacIOOS TDS",
overlay=True,
transparent=True,
)
w.add_to(m)
if line:
p = folium.PolyLine(
get_coordinates(bbox), color="#FF0000", weight=2, opacity=0.9,
)
p.add_to(m)
return m
bbox = config["region"]["bbox"]
m = make_map(bbox, zoom_start=11, line=True, layers=True)
all_obs = stations_keys(config)
from glob import glob
from operator import itemgetter
import iris
from folium.plugins import MarkerCluster
iris.FUTURE.netcdf_promote = True
big_list = []
for fname in glob(os.path.join(save_dir, "*.nc")):
if "OBS_DATA" in fname:
continue
cube = iris.load_cube(fname)
model = os.path.split(fname)[1].split("-")[-1].split(".")[0]
lons = cube.coord(axis="X").points
lats = cube.coord(axis="Y").points
stations = cube.coord("station_code").points
models = [model] * lons.size
lista = zip(models, lons.tolist(), lats.tolist(), stations.tolist())
big_list.extend(lista)
big_list.sort(key=itemgetter(3))
df = pd.DataFrame(big_list, columns=["name", "lon", "lat", "station"])
df.set_index("station", drop=True, inplace=True)
groups = df.groupby(df.index)
locations, popups = [], []
for station, info in groups:
sta_name = all_obs[station]
for lat, lon, name in zip(info.lat, info.lon, info.name):
locations.append([lat, lon])
popups.append(
"[{}]: {}".format(name.rstrip("fillmismatch").rstrip("#"), sta_name)
)
MarkerCluster(locations=locations, popups=popups, name="Cluster").add_to(m)
titles = {
"coawst_4_use_best": "COAWST_4",
"global": "HYCOM",
"NECOFS_GOM3_FORECAST": "NECOFS_GOM3",
"NECOFS_FVCOM_OCEAN_MASSBAY_FORECAST": "NECOFS_MassBay",
"OBS_DATA": "Observations",
}
from itertools import cycle
from bokeh.embed import file_html
from bokeh.models import HoverTool
from bokeh.palettes import Category20
from bokeh.plotting import figure
from bokeh.resources import CDN
from folium import IFrame
# Plot defaults.
colors = Category20[20]
colorcycler = cycle(colors)
tools = "pan,box_zoom,reset"
width, height = 750, 250
def make_plot(df, station):
p = figure(
toolbar_location="above",
x_axis_type="datetime",
width=width,
height=height,
tools=tools,
title=str(station),
)
for column, series in df.iteritems():
series.dropna(inplace=True)
if not series.empty:
if "OBS_DATA" not in column:
bias = mean_bias[str(station)][column]
skill = skill_score[str(station)][column]
line_color = next(colorcycler)
kw = dict(alpha=0.65, line_color=line_color)
else:
skill = bias = "NA"
kw = dict(alpha=1, color="crimson")
legend = f"{titles.get(column, column)}"
legend = legend.rstrip("fillmismatch").rstrip("#")
line = p.line(
x=series.index,
y=series.values,
legend=legend,
line_width=5,
line_cap="round",
line_join="round",
**kw,
)
p.add_tools(
HoverTool(
tooltips=[
("Name", "{}".format(titles.get(column, column))),
("Bias", bias),
("Skill", skill),
],
renderers=[line],
)
)
return p
def make_marker(p, station):
lons = stations_keys(config, key="lon")
lats = stations_keys(config, key="lat")
lon, lat = lons[station], lats[station]
html = file_html(p, CDN, station)
iframe = IFrame(html, width=width + 40, height=height + 80)
popup = folium.Popup(iframe, max_width=2650)
icon = folium.Icon(color="green", icon="stats")
marker = folium.Marker(location=[lat, lon], popup=popup, icon=icon)
return marker
dfs = load_ncs(config)
for station in dfs:
sta_name = all_obs[station]
df = dfs[station]
if df.empty:
continue
p = make_plot(df, station)
marker = make_marker(p, station)
marker.add_to(m)
folium.LayerControl().add_to(m)
m
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Functions inside other functions
Step2: Function references
Step3: With getTalk() we return the shout function and then with ("HellO"), we pass the argument to the shout function
Step4: Decorator
Step5: null_decorator is a callable (it’s a function), it takes another callable as its input, and it returns the same input callable without modifying it.
Step6: More complex decorator
Step7: Decorating functions that accept arguments
Step8: Example Make a sandwich
Step9: Example HTML
Step10: Other Example
|
<ASSISTANT_TASK:>
Python Code:
def shout(word="yes"):
return word.capitalize()+"!"
shout()
# As an object, you can assign the function to a variable like any other object
scream = shout
# Notice we don't use parentheses: we are not calling the function,
# we are putting the function "shout" into the variable "scream".
# It means you can then call "shout" from "scream"
scream
scream()
#You can also remove the old name 'shout', and the function
# will still be accessible from 'scream'
del shout
print(shout())
scream()
def talk():
def whisper(word="yes"):
return word.lower() + '...'
print(whisper())
talk()
# But "whisper" DOES NOT EXIST outside "talk":
whisper()
def getTalk(kind="shout"):
# We define functions on the fly
def shout(word="yes"):
return word.capitalize()+"!"
def whisper(word="yes") :
return word.lower()+"...";
# Then we return one of them
if kind == "shout":
# We don't use "()", we are not calling the function,
# we are returning the function object
return shout
else:
return whisper
getTalk()
talk = getTalk()
talk
talk()
getTalk()("HellO")
def doSomethingBefore(func):
print("I do something before then I call the function you gave me")
print(func())
doSomethingBefore(scream)
def null_decorator(func):
print("I'm decorating")
return func
def greet():
return 'Hello!'
greeting= null_decorator(greet)
greeting()
@null_decorator
def greet():
return 'Hello!'
greet()
def uppercase(func):
def wrapper():
original_result = func()
modified_result = original_result.upper()
return modified_result
return wrapper
@uppercase
def greet():
return 'Hello!'
greet()
# Template
def proxy(func):
def wrapper(*args, **kwargs):
return func(*args, **kwargs)
return wrapper
def trace(func):
def wrapper(*args, **kwargs):
print(f'TRACE: calling {func.__name__}() '
f'with {args}, {kwargs}')
original_result = func(*args, **kwargs)
print(f'TRACE: {func.__name__}() '
f'returned {original_result!r}')
return original_result
return wrapper
@trace
def say(name, line):
return f'{name}: {line}'
say('Jane', 'Hello, World')
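# Aside (a minimal sketch, not part of the original guide): wrappers like the
# ones above replace the decorated function's metadata. functools.wraps copies
# the original __name__ and __doc__ onto the wrapper, which helps debugging.
import functools

def traced(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper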
def bread(func):
def wrapper():
print("</''''''\>")
func()
print("<\______/>")
return wrapper
def ingredients(func):
def wrapper():
print("#tomatoes#")
func()
print("~salad~")
return wrapper
@bread
@ingredients
def sandwich(food="--ham--"):
print(food)
sandwich()
# The decorator to make it bold
def makebold(fn):
# The new function the decorator returns
def wrapper():
# Insertion of some code before and after
return "<b>" + fn() + "</b>"
return wrapper
# The decorator to make it italic
def makeitalic(fn):
# The new function the decorator returns
def wrapper():
# Insertion of some code before and after
return "<i>" + fn() + "</i>"
return wrapper
@makebold
@makeitalic
def say():
return "hello"
say()
### Hard coding
# A decorator is a function that expects ANOTHER function as parameter
def my_shiny_new_decorator(a_function_to_decorate):
# Inside, the decorator defines a function on the fly: the wrapper.
# This function is going to be wrapped around the original function
# so it can execute code before and after it.
def the_wrapper_around_the_original_function():
# Put here the code you want to be executed BEFORE the original function is called
print("Before the function runs")
# Call the function here (using parentheses)
a_function_to_decorate()
# Put here the code you want to be executed AFTER the original function is called
print("After the function runs")
# At this point, "a_function_to_decorate" HAS NEVER BEEN EXECUTED.
# We return the wrapper function we have just created.
# The wrapper contains the function and the code to execute before and after. It’s ready to use!
return the_wrapper_around_the_original_function
# Now imagine you create a function you don't want to ever touch again.
def a_stand_alone_function():
print("I am a stand alone function, don't you dare modify me")
a_stand_alone_function()
#outputs: I am a stand alone function, don't you dare modify me
# Well, you can decorate it to extend its behavior.
# Just pass it to the decorator, it will wrap it dynamically in
# any code you want and return you a new function ready to be used:
a_stand_alone_function_decorated = my_shiny_new_decorator(a_stand_alone_function)
a_stand_alone_function_decorated()
#outputs:
#Before the function runs
#I am a stand alone function, don't you dare modify me
#After the function runs
@my_shiny_new_decorator
def another_stand_alone_function():
print("Leave me alone")
another_stand_alone_function()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set parameters
Step2: Compute inverse solution
Step3: Decoding in sensor space using a logistic regression
Step4: To investigate weights, we need to retrieve the patterns of a fitted model
|
<ASSISTANT_TASK:>
Python Code:
# Author: Denis A. Engemann <denis.engemann@gmail.com>
# Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
# Jean-Remi King <jeanremi.king@gmail.com>
#
# License: BSD (3-clause)
import os
import numpy as np
import matplotlib.pyplot as plt
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
import mne
from mne import io
from mne.datasets import sample
from mne.minimum_norm import apply_inverse_epochs, read_inverse_operator
from mne.decoding import (cross_val_multiscore, LinearModel, SlidingEstimator,
get_coef)
print(__doc__)
data_path = sample.data_path()
fname_fwd = data_path + '/MEG/sample/sample_audvis-meg-oct-6-fwd.fif'
fname_evoked = data_path + '/MEG/sample/sample_audvis-ave.fif'
subjects_dir = data_path + '/subjects'
subject = os.environ['SUBJECT'] = subjects_dir + '/sample'
os.environ['SUBJECTS_DIR'] = subjects_dir
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
fname_cov = data_path + '/MEG/sample/sample_audvis-cov.fif'
label_names = 'Aud-rh', 'Vis-rh'
fname_inv = data_path + '/MEG/sample/sample_audvis-meg-oct-6-meg-inv.fif'
tmin, tmax = -0.2, 0.5
event_id = dict(aud_r=2, vis_r=4) # load contra-lateral conditions
# Setup for reading the raw data
raw = io.read_raw_fif(raw_fname, preload=True)
raw.filter(0.1, None, fir_design='firwin')
events = mne.read_events(event_fname)
# Set up pick list: MEG - bad channels (modify to your needs)
raw.info['bads'] += ['MEG 2443'] # mark bads
picks = mne.pick_types(raw.info, meg=True, eeg=False, stim=True, eog=True,
exclude='bads')
# Read epochs
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True,
picks=picks, baseline=None, preload=True,
reject=dict(grad=4000e-13, eog=150e-6),
decim=5) # decimate to save memory and increase speed
snr = 3.0
noise_cov = mne.read_cov(fname_cov)
inverse_operator = read_inverse_operator(fname_inv)
stcs = apply_inverse_epochs(epochs, inverse_operator,
lambda2=1.0 / snr ** 2, verbose=False,
method="dSPM", pick_ori="normal")
# Retrieve source space data into an array
X = np.array([stc.lh_data for stc in stcs]) # only keep left hemisphere
y = epochs.events[:, 2]
# prepare a series of classifier applied at each time sample
clf = make_pipeline(StandardScaler(), # z-score normalization
SelectKBest(f_classif, k=500), # select features for speed
LinearModel(LogisticRegression(C=1)))
time_decod = SlidingEstimator(clf, scoring='roc_auc')
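# SlidingEstimator fits one copy of the pipeline per time sample, so the
# cross-validation below yields a ROC-AUC score at every time point.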
# Run cross-validated decoding analyses:
scores = cross_val_multiscore(time_decod, X, y, cv=5, n_jobs=1)
# Plot average decoding scores of 5 splits
fig, ax = plt.subplots(1)
ax.plot(epochs.times, scores.mean(0), label='score')
ax.axhline(.5, color='k', linestyle='--', label='chance')
ax.axvline(0, color='k')
plt.legend()
# The fitting needs not be cross validated because the weights are based on
# the training sets
time_decod.fit(X, y)
# Retrieve patterns after inversing the z-score normalization step:
patterns = get_coef(time_decod, 'patterns_', inverse_transform=True)
stc = stcs[0] # for convenience, lookup parameters from first stc
vertices = [stc.lh_vertno, np.array([], int)] # empty array for right hemi
stc_feat = mne.SourceEstimate(np.abs(patterns), vertices=vertices,
tmin=stc.tmin, tstep=stc.tstep, subject='sample')
brain = stc_feat.plot(views=['lat'], transparent=True,
initial_time=0.1, time_unit='s')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Writing a description to document our functions right below their declaration allows us to extract information from them using the help() command.
Step2: Integrating systems of differential equations
Step3: We set the parameters r and k as default arguments of our g_x function so that these values do not need to be supplied when we invoke the function. For example
Step4: But we can invoke our g_x function with up to 3 arguments if we want to change the values of $r$ and $K$
Step5: Now let's test our differential-equation integration code with the logistic growth equation
Step6: The commands below plot a graph of our system.
Step7: Lotka-Volterra predator-prey model
Step8: Important note. In the second-to-last line above we take the list vn that we want to return and convert it into a Numpy array. We do this so that, during the integration step performed by the solveE function, we can multiply this vector by the number dt. For example, if you take a Python list and try to multiply it by a number, the following happens
Step9: We need to convert this list into a Numpy array if we don't want our code to raise a strange error
Step10: Below we integrate the system using our function. Note that as the initial condition we pass a list with 2 values, one being the initial condition for the prey $x(t_0)$ and the other the initial condition for the predators $y(t_0)$.
Step11: Next we plot the result. Because the v_xy function returns a list, extracting the data is a bit more involved, and we will omit the explanation of this syntax.
|
<ASSISTANT_TASK:>
Python Code:
# Before anything else, we import the numerical package numpy,
# which lets us manipulate matrices and vectors.
import numpy as np
# We declare a function that will hold all the code for the Euler
# integration, so that we can easily invoke it whenever we want to
# numerically integrate a differential equation.
def solveE(f, x_t0, time, dt):
    'Numerically integrates a system of differential equations\
    using the Euler method. It receives as arguments a function "f",\
    an initial value "x_t0", an integration interval "time" and a\
    step size "dt". It returns two vectors: "t", the time axis, and\
    "x", the trajectory of the system.'
    # The description above is a way of documenting our function.
    # Below we determine the number of steps. In Python 3 the
    # division operator "/" returns a float, but our number of
    # steps needs to be of type int.
    steps = int(time/dt)
    # Below we create an array that goes from zero to the end of the
    # integration interval ("time") in steps of size dt.
    # It represents the time axis "t".
    t = np.arange(0,time,dt)
    x = [0]*(steps)
    # This is a way of declaring and initializing a list of zeros
    # large enough to store all the steps.
    # The list "x" will store all the values x(t).
    # Below we store the initial condition.
    x[0] = x_t0
    for t_n in range(1,steps):
        # Our loop has "steps" steps and "t_n" varies from 1 to
        # "steps". The value of "t_n" is used to access and store
        # values inside the list "x".
        x[t_n] = x[t_n - 1] + f(x[t_n - 1])*dt
    # Our function returns the time axis "t" and the estimated
    # trajectory "x" over that interval.
    # To be able to plot graphs, we need to convert "x" from a
    # Python list into a Numpy array.
    return t, x
help(solveE)
def g_x(x, r = .5, k = 10):
    'Determines the growth rate of a population "x"\
    according to the logistic model.'
return -r*(x**2)/k + r*x
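# g_x above implements the logistic growth equation dx/dt = r*x*(1 - x/k),
# written in the expanded form -r*x**2/k + r*x.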
g_x(10)
g_x(10,.5,20)
x_0 = 1
time = 15
dt = .01
# since the function returns 2 results, we need to store
# them in 2 variables, or alternatively in a list
t1,x1 = solveE(g_x,x_0,time,dt)
# display plots inside this document
%matplotlib inline
# import the plotting package pylab (Matplotlib)
import pylab as py
# plot the graph
py.plot(t1,x1)
def v_xy(v, a = 1.5, b = 1, c = 3, d = 1):
    'Determines the evolution of a Lotka-Volterra predator-prey\
    system at time "t". Returns a vector "v_n" with "v_n[0]"\
    being the prey and "v_n[1]" the predators.'
x = v[0]
y = v[1]
dx = a*x - b*x*y
dy = -c*y + d*x*y
vn = np.array([dx,dy])
return vn
[1,2]*3
np.array([1,2])*3
v_0 = [10,1]
time = 20
dt = .01
t2,v2 = solveE(v_xy,v_0,time,dt)
py.plot(t2,np.array(v2).T[0])
py.plot(t2,np.array(v2).T[1])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Any one point inside the unit square would represent an image. For example the image associated with the point $(0.25,0.85)$ is shown below.
Step2: Now consider the case where there is some
Step3: We will refer to the structure suggested by
|
<ASSISTANT_TASK:>
Python Code:
x1 = np.random.uniform(size=500)
x2 = np.random.uniform(size=500)
fig = plt.figure();
ax = fig.add_subplot(1,1,1);
ax.scatter(x1,x2, edgecolor='black', s=80);
ax.grid();
ax.set_axisbelow(True);
ax.set_xlim(-0.25,1.25); ax.set_ylim(-0.25,1.25)
ax.set_xlabel('Pixel 2'); ax.set_ylabel('Pixel 1'); plt.savefig('images_in_2dspace.pdf')
im = [(0.25, 0.85)]
plt.imshow(im, cmap='gray',vmin=0,vmax=1)
plt.tick_params(
axis='both', # changes apply to the x-axis
which='both', # both major and minor ticks are affected
bottom='off', # ticks along the bottom edge are off
top='off', # ticks along the top edge are off
left='off',
right='off'
)
plt.xticks([])
plt.yticks([])
plt.xlabel('Pixel 1 = 0.25 Pixel 2 = 0.85')
plt.savefig('sample_2dspace_image.pdf')
x1 = lambda x2: 0.5*np.cos(2*np.pi*x2)+0.5
x2 = np.linspace(0,1,200)
eps = np.random.normal(scale=0.1, size=200)
fig = plt.figure();
ax = fig.add_subplot(1,1,1);
ax.scatter(x2,x1(x2)+eps, edgecolor='black', s=80);
ax.grid();
ax.set_axisbelow(True);
ax.set_xlim(-0.25,1.25); ax.set_ylim(-0.25,1.25); plt.axes().set_aspect('equal')
ax.set_xlabel('Pixel 2'); ax.set_ylabel('Pixel 1'); plt.savefig('structured_images_in_2dspace.pdf')
from matplotlib.colors import LogNorm
x2 = np.random.uniform(size=100000)
eps = np.random.normal(scale=0.1, size=100000)
hist2d = plt.hist2d(x2,x1(x2)+eps, bins=50, norm=LogNorm())
plt.xlim(0.0,1.0); plt.ylim(-0.3,1.3); plt.axes().set_aspect('equal')
plt.xlabel('Pixel 2'); plt.ylabel('Pixel 1')
plt.colorbar();
plt.savefig('histogram_of_structured_images.pdf')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This is a particularly small example of a corpus for illustration purposes. Another example could be a list of all the plays written by Shakespeare, list of all wikipedia articles, or all tweets by a particular person of interest.
Step2: Before proceeding, we want to associate each word in the corpus with a unique integer ID. We can do this using the gensim.corpora.Dictionary class. This dictionary defines the vocabulary of all words that our processing knows about.
Step3: Because our corpus is small, there are only 12 different tokens in this Dictionary. For larger corpora, dictionaries containing hundreds of thousands of tokens are quite common.
Step4: For example, suppose we wanted to vectorize the phrase "Human computer interaction" (note that this phrase was not in our original corpus). We can create the bag-of-word representation for a document using the doc2bow method of the dictionary, which returns a sparse representation of the word counts
Step5: The first entry in each tuple corresponds to the ID of the token in the dictionary, the second corresponds to the count of this token.
Step6: Note that while this list lives entirely in memory, in most applications you will want a more scalable solution. Luckily, gensim allows you to use any iterator that returns a single document vector at a time. See the documentation for more details.
|
<ASSISTANT_TASK:>
Python Code:
raw_corpus = ["Human machine interface for lab abc computer applications",
"A survey of user opinion of computer system response time",
"The EPS user interface management system",
"System and human system engineering testing of EPS",
"Relation of user perceived response time to error measurement",
"The generation of random binary unordered trees",
"The intersection graph of paths in trees",
"Graph minors IV Widths of trees and well quasi ordering",
"Graph minors A survey"]
# Create a set of frequent words
stoplist = set('for a of the and to in'.split(' '))
# Lowercase each document, split it by white space and filter out stopwords
texts = [[word for word in document.lower().split() if word not in stoplist]
for document in raw_corpus]
# Count word frequencies
from collections import defaultdict
frequency = defaultdict(int)
for text in texts:
for token in text:
frequency[token] += 1
# Only keep words that appear more than once
processed_corpus = [[token for token in text if frequency[token] > 1] for text in texts]
processed_corpus
from gensim import corpora
dictionary = corpora.Dictionary(processed_corpus)
print(dictionary)
print(dictionary.token2id)
new_doc = "Human computer interaction"
new_vec = dictionary.doc2bow(new_doc.lower().split())
new_vec
bow_corpus = [dictionary.doc2bow(text) for text in processed_corpus]
bow_corpus
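# Sketch (not part of the original tutorial): for corpora too large to hold in
# memory, gensim accepts any iterable that yields one bag-of-words vector at a
# time. 'mycorpus.txt' is a hypothetical file with one document per line.
class MyStreamedCorpus:
    def __iter__(self):
        with open('mycorpus.txt') as fin:
            for line in fin:
                yield dictionary.doc2bow(line.lower().split())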
from gensim import models
# train the model
tfidf = models.TfidfModel(bow_corpus)
# transform the "system minors" sting
tfidf[dictionary.doc2bow("system minors".lower().split())]
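# The fitted model can also be applied to the whole corpus at once, which
# returns a lazily evaluated stream of TF-IDF vectors:
corpus_tfidf = tfidf[bow_corpus]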
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The basics of creating a NetworkX Graph
Step2: For testing and diagnostics it's useful to generate a random Graph. NetworkX comes with several graph models including
Step3: Accessing Nodes and Edges
Step4: Serialization of Graphs
Step5: NetworkX has a ton of Graph serialization methods, and most have methods in the following format for serialization format, format
Step6: Computing Key Players
Step7: Betweenness Centrality
Step8: Closeness Centrality
Step9: Eigenvector Centrality
Step10: Clustering and Cohesion
Step11: Graphs can also be analyzed in terms of distance (the shortest path between two nodes). The longest distance in a graph is called the diameter of the social graph, and represents the longest information flow along the graph. Typically less dense (sparse) social networks will have a larger diameter than more dense networks. Additionally, the average distance is an interesting metric as it can give you information about how close nodes are to each other.
Step12: Let's actually get into some clustering. The python-louvain library uses NetworkX to perform community detection with the louvain method. Here is a simple example of cluster partitioning on a small, built-in social network.
Step13: Visualizing Graphs
Step14: There is, however, a rich drawing library underneath that lets you customize how the Graph looks and is laid out with many different layout algorithms. Let's take a look at an example using one of the built-in Social Graphs
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import os
import random
import community
import numpy as np
import networkx as nx
import matplotlib.pyplot as plt
from tribe.utils import *
from tribe.stats import *
from operator import itemgetter
## Some Helper constants
FIXTURES = os.path.join(os.getcwd(), "fixtures")
GRAPHML = os.path.join(FIXTURES, "emails.graphml")
H = nx.Graph(name="Hello World Graph")
# Also nx.DiGraph, nx.MultiGraph, etc
# Add nodes manually, label can be anything hashable
H.add_node(1, name="Ben", email="benjamin@bengfort.com")
H.add_node(2, name="Tony", email="ojedatony1616@gmail.com")
# Can also add an iterable of nodes: H.add_nodes_from
H.add_edge(1,2, label="friends", weight=0.832)
# Can also add an iterable of edges: H.add_edges_from
print nx.info(H)
# Clearing a graph is easy
H.remove_node(1)
H.clear()
H = nx.erdos_renyi_graph(100, 0.20)
print H.nodes()[1:10]
print H.edges()[1:5]
print H.neighbors(3)
# For fast, memory safe iteration, use the `_iter` methods
edges, nodes = 0,0
for e in H.edges_iter(): edges += 1
for n in H.nodes_iter(): nodes += 1
print "%i edges, %i nodes" % (edges, nodes)
# Accessing the properties of a graph
print H.graph['name']
H.graph['created'] = strfnow()
print H.graph
# Accessing the properties of nodes and edges
H.node[1]['color'] = 'red'
H.node[43]['color'] = 'blue'
print H.node[43]
print H.nodes(data=True)[:3]
# The weight property is special and should be numeric
H.edge[0][40]['weight'] = 0.432
H.edge[0][39]['weight'] = 0.123
print H.edge[40][0]
# Accessing the highest degree node
center, degree = sorted(H.degree().items(), key=itemgetter(1), reverse=True)[0]
# A special type of subgraph
ego = nx.ego_graph(H, center)
pos = nx.spring_layout(H)
nx.draw(H, pos, node_color='#0080C9', edge_color='#cccccc', node_size=50)
nx.draw_networkx_nodes(H, pos, nodelist=[center], node_size=100, node_color="r")
plt.show()
# Other subgraphs can be extracted with nx.subgraph
# Finding the shortest path
H = nx.star_graph(100)
print nx.shortest_path(H, random.choice(H.nodes()), random.choice(H.nodes()))
pos = nx.spring_layout(H)
nx.draw(H, pos)
plt.show()
# Preparing for Data Science Analysis
print nx.to_numpy_matrix(H)
# print nx.to_scipy_sparse_matrix(G)
G = nx.read_graphml(GRAPHML) # opposite of nx.write_graphml
print nx.info(G)
# Generate a list of connected components
# See also nx.strongly_connected_components
for component in nx.connected_components(G):
print len(component)
len([c for c in nx.connected_components(G)])
# Get a list of the degree frequencies
dist = FreqDist(nx.degree(G).values())
dist.plot()
# Compute Power log sequence
degree_sequence=sorted(nx.degree(G).values(),reverse=True) # degree sequence
plt.loglog(degree_sequence,'b-',marker='.')
plt.title("Degree rank plot")
plt.ylabel("degree")
plt.xlabel("rank")
# Graph Properties
print "Order: %i" % G.number_of_nodes()
print "Size: %i" % G.number_of_edges()
print "Clustering: %0.5f" % nx.average_clustering(G)
print "Transitivity: %0.5f" % nx.transitivity(G)
hairball = nx.subgraph(G, [x for x in nx.connected_components(G)][0])
print "Average shortest path: %0.4f" % nx.average_shortest_path_length(hairball)
# Node Properties
node = 'benjamin@bengfort.com' # Change to an email in your graph
print "Degree of node: %i" % nx.degree(G, node)
print "Local clustering: %0.4f" % nx.clustering(G, node)
def nbest_centrality(graph, metric, n=10, attribute="centrality", **kwargs):
centrality = metric(graph, **kwargs)
nx.set_node_attributes(graph, attribute, centrality)
degrees = sorted(centrality.items(), key=itemgetter(1), reverse=True)
for idx, item in enumerate(degrees[0:n]):
item = (idx+1,) + item
print "%i. %s: %0.4f" % item
return degrees
degrees = nbest_centrality(G, nx.degree_centrality, n=15)
# centrality = nx.betweenness_centrality(G)
# normalized = nx.betweenness_centrality(G, normalized=True)
# weighted = nx.betweenness_centrality(G, weight="weight")
degrees = nbest_centrality(G, nx.betweenness_centrality, n=15)
# centrality = nx.closeness_centrality(graph)
# normalied = nx.closeness_centrality(graph, normalized=True)
# weighted = nx.closeness_centrality(graph, distance="weight")
degrees = nbest_centrality(G, nx.closeness_centrality, n=15)
# centrality = nx.eigenvector_centality(graph)
# centrality = nx.eigenvector_centrality_numpy(graph)
degrees = nbest_centrality(G, nx.eigenvector_centrality_numpy, n=15)
print nx.density(G)
for subgraph in nx.connected_component_subgraphs(G):
print nx.diameter(subgraph)
print nx.average_shortest_path_length(subgraph)
partition = community.best_partition(G)
print "%i partitions" % len(set(partition.values()))
nx.set_node_attributes(G, 'partition', partition)
pos = nx.spring_layout(G)
plt.figure(figsize=(12,12))
plt.axis('off')
nx.draw_networkx_nodes(G, pos, node_size=200, cmap=plt.cm.RdYlBu, node_color=partition.values())
nx.draw_networkx_edges(G,pos, alpha=0.5)
nx.draw(nx.erdos_renyi_graph(20, 0.20))
plt.show()
# Generate the Graph
G=nx.davis_southern_women_graph()
# Create a Spring Layout
pos=nx.spring_layout(G)
# Find the center Node
dmin=1
ncenter=0
for n in pos:
x,y=pos[n]
d=(x-0.5)**2+(y-0.5)**2
if d<dmin:
ncenter=n
dmin=d
# color by path length from node near center
p=nx.single_source_shortest_path_length(G,ncenter)
# Draw the graph
plt.figure(figsize=(8,8))
nx.draw_networkx_edges(G,pos,nodelist=[ncenter],alpha=0.4)
nx.draw_networkx_nodes(G,pos,nodelist=p.keys(),
node_size=90,
node_color=p.values(),
cmap=plt.cm.Reds_r)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Q
Step2: The probability of being blocked after making a personal attack increases as a function of how many times the user has been blocked before. This could indicate heightened scrutiny by administrators. The pattern could also occur if users who continue to attack after being blocked make more frequent or more toxic attacks and are hence more likely to be discovered.
Step3: Most attacking comments do not lead to the user being warned/blocked within the next 7 days.
Step4: The more attacks a user makes, the more likely it is that they will have been blocked at least once.
|
<ASSISTANT_TASK:>
Python Code:
# Load scored diffs and moderation event data
d = load_diffs()
df_block_events, df_blocked_user_text = load_block_events_and_users()
df_warn_events, df_warned_user_text = load_warn_events_and_users()
moderated_users = [('warned', df_warned_user_text),
('blocked', df_blocked_user_text),
('either', pd.concat([df_warned_user_text, df_blocked_user_text]))
]
moderation_events = [('warned', df_warn_events),
('blocked', df_block_events),
('either', pd.concat([df_warn_events, df_block_events]))
]
moderation_events_2015 = [('warned', df_warn_events.query('year == 2015')),
('blocked', df_block_events.query('year == 2015')),
('either', pd.concat([df_warn_events.query('year == 2015'), df_block_events.query('year == 2015')]))
]
moderated_users_2015 = [('warn', df_warn_events.query('year == 2015')[['user_text']].assign(blocked = 1)),
('block', df_block_events.query('year == 2015')[['user_text']].assign(blocked = 1)),
('either', pd.concat([df_warn_events.query('year == 2015')[['user_text']].assign(blocked = 1), df_block_events.query('year == 2015')[['user_text']].assign(blocked = 1)]))
]
K = 6
sample = 'blocked'
er_t = 0.425
events = {}
# null events set
e = d[sample][['user_text']].drop_duplicates()
e['timestamp'] = pd.to_datetime('1900')
events[0] = e
# rank block events
ranked_events = df_block_events.copy()
ranks = df_block_events\
.groupby('user_text')['timestamp']\
.rank()
ranked_events['rank'] = ranks
for k in range(1,K):
e = ranked_events.query("rank==%d" % k)[['user_text', 'timestamp']]
events[k] = e
attacks = {}
for k in range(0, K-1):
c = d[sample].merge(events[k], how = 'inner', on='user_text')
c = c.query('timestamp < rev_timestamp')
del c['timestamp']
c = c.merge(events[k+1], how = 'left', on = 'user_text')
c['timestamp'] = c['timestamp'].fillna(pd.to_datetime('2100'))
c = c.query('rev_timestamp < timestamp')
c = c.query('pred_recipient_score_uncalibrated > %f' % er_t)
attacks[k] = c
blocked_users = {i:set(events[i]['user_text']) for i in events.keys()}
attackers = {i:set(attacks[i]['user_text']) for i in attacks.keys()}
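# attacks[k] holds attacking comments a user made after their k-th block and
# before their (k+1)-th block (if any); attackers[k] is the set of such users
# and blocked_users[k] the set of users blocked at least k times.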
dfs_sns = []
for k in range(1, K-1):
u_a = attackers[k]
u_b = blocked_users[k+1]
u_ab = u_a.intersection(u_b)
n_a = len(u_a)
n_ab = len(u_ab)
print('k:',k, n_ab/n_a)
dfs_sns.append(pd.DataFrame({'blocked': [1]*n_ab, 'k': [k]*n_ab}))
dfs_sns.append(pd.DataFrame({'blocked': [0]*(n_a- n_ab), 'k': [k]*(n_a- n_ab)}))
sns.set(font_scale=1.5)
sns.pointplot(x = 'k', y = 'blocked', data = pd.concat(dfs_sns), capsize=.1)
plt.xlabel('k')
plt.ylabel('P(blocked | new attack and blocked k times already)')
plt.savefig('../../paper/figs/p_of_blocked_given_new_attack_and_blocked_already.png')
dfs = []
ts = np.arange(0.325, 0.96, 0.1)
def get_delta(x):
if x['timestamp'] is not None and x['rev_timestamp'] is not None:
return x['timestamp'] - x['rev_timestamp']
else:
return pd.Timedelta('0 seconds')
for t in ts:
for (event_type, events) in moderation_events:
dfs.append(
d['2015'].query('pred_recipient_score_uncalibrated >= %f' % t)\
.loc[:, ['user_text', 'rev_id', 'rev_timestamp']]\
.merge(events, how = 'left', on = 'user_text')\
.assign(delta = lambda x: get_delta(x))\
.assign(blocked= lambda x: 100 * ((x['delta'] < pd.Timedelta('7 days')) & (x['delta'] > pd.Timedelta('0 seconds'))))\
.drop_duplicates(subset = ['rev_id'])\
.assign(threshold = t, event=event_type)
)
ax = sns.pointplot(x='threshold', y='blocked', hue='event', data = pd.concat(dfs), dodge=0.15, capsize=.1, linestyles=[" ", "", " "])
plt.xlabel('Threshold')
#ax.set_ylabels('% of attacks followed bymoderation')
pd.concat(dfs).groupby(['threshold','event'])['blocked'].mean()
def remap(x):
if x < 5:
return str(int(x))
else:
return '5+'
sns.set(font_scale=2.5)
dfs = []
for event_type, users in moderated_users_2015:
dfs.append(\
d['2015'].assign(attack = lambda x: x.pred_recipient_score_uncalibrated >= 0.425)\
.groupby('user_text', as_index = False)['attack'].sum()\
.rename(columns={'attack':'num_attacks'})\
.merge(users, how = 'left', on = 'user_text')\
.assign(
blocked = lambda x: x.blocked.fillna(0,),
num_attacks = lambda x: x.num_attacks.apply(remap),
event = event_type)
)
df = pd.concat(dfs)
g = sns.factorplot(x = 'num_attacks',
y = 'blocked',
col = 'event', data = df, order = ('0', '1', '2', '3','4', '5+'), capsize=.1)
g.set_ylabels('P(event)')
g.set_xlabels('Number of attacks')
plt.savefig('../../paper/figs/fraction_blocked_given_num_attacks.png')
attacks = d['blocked'].query("pred_recipient_score_uncalibrated >= 0.425").query("not author_anon").query("not own_page")
results = []
for i , r in attacks.iterrows():
ts = r['rev_timestamp']
user = r['user_text']
user_blocks = df_block_events[df_block_events['user_text'] == user]
prior_blocks = user_blocks[user_blocks['timestamp'] < ts ]
max_ts = prior_blocks['timestamp'].max()
#if ts < (max_ts + pd.Timedelta('60 days')):
# continue
post_blocks = user_blocks[user_blocks['timestamp'] > ts]
n_blocks_prior = prior_blocks.shape[0]
blocked_again = post_blocks.shape[0] > 0
for days in [7, 14, 30, 60, 90, 180]:
within_x = post_blocks[post_blocks['timestamp'] < (ts + pd.Timedelta('%d days' % days)) ].shape[0] > 0
results.append({'n_blocks_prior': n_blocks_prior,
'blocked_again': within_x,
'within_x': days
})
df = pd.DataFrame(results)
def remap(x):
if x < 5:
return str(int(x))
else:
return '5+'
df['n_blocks_prior'] = df['n_blocks_prior'].apply(remap)
sns.set(font_scale=1.5)
g = sns.factorplot(x = 'n_blocks_prior',
y = 'blocked_again',
col = 'within_x',
data = df,
capsize=.1,
order = [ '1', '2', '3', '4', '5+']
)
#g.set_ylabels('P(attack followed by block | # prior blocks)')
#g.set_xlabels('Number of prior blocks')
plt.savefig('../../paper/figs/8.png')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Our query this time is going to extract both the hashtag and the tweets associated with the hashtag. We are going to create documents full of tweets that are defined by their hashtags, so we need to be able to reference the hashtags per tweet.
Step3: We can now use pandas to count how many times each hashtag was used. We can turn this into a data frame.
Step4: ...and the most popular hashtags for Chicago.
Step5: Twitter is unique from other types of natural language given the constraints on size. This often makes it difficult to find coherent topics from tweets. Therefore, we want to create documents of tweets. Each document is a list of tweets that contain a particular hashtag. So what we want to do is create a list of tweets per hashtag.
Step6: Above, we are grouping by hashtag and then concatenating the tweets per group into a list. So this is going to be a data frame where the first attribute is the hashtag and the second is a list of tweets with that hashtag. Let's take a look...
Step7: We now need to use a helper function to remove some patterns from the tweets that we don't want. First, we don't want '@' signs or '#'s. We also want to remove urls. We will create a regular expression to do that.
Step8: The function above takes in a string and replaces each of the patterns in that string with the replacement. Notice that we use *pats. This is a way to accept an unspecified number of arguments. Let's look at an example.
Step9: This took the string s and replaced @ and # with a blank ''.
Step10: In natural language processing, you often have to tokenize text, which is to break it up into components. The text is often split on words so that each word is a unit called a token. Below we are going to simultaneously remove the patterns we don't want and tokenize each tweet, saving the result to a list of lists called tokenized_docs.
Step11: We can now look at the first item of tokenized_docs to see what it looks like. Notice that it contains a list of lists.
Step12: We then remove the stop words and return it to a list of lists object.
Step13: After tokenization, there is also stemming. This is the process of getting words to their base version. We are going to do a similar process here where we save it to a list of lists called texts.
Step14: And let's look at the first item...
|
<ASSISTANT_TASK:>
Python Code:
# BE SURE TO RUN THIS CELL BEFORE ANY OF THE OTHER CELLS
import psycopg2
import pandas as pd
import re
# pull in our stopwords
from nltk.corpus import stopwords
stops = stopwords.words('english')
# define our query
statement = """
SELECT lower(t.text) as tweet, lower(h.text) as hashtag
FROM twitter.tweet t, twitter.hashtag h
WHERE t.job_id = 273 AND t.text NOT LIKE 'RT%' AND t.iso_language = 'en' AND t.tweet_id_str = h.tweet_id
LIMIT 100000;
"""
try:
connect_str = "dbname='twitter' user='dsa_ro_user' host='dbase.dsa.missouri.edu' password='readonly'"
# use our connection values to establish a connection
conn = psycopg2.connect(connect_str)
cursor = conn.cursor()
# execute the statement from above
cursor.execute(statement)
column_names = [desc[0] for desc in cursor.description]
# fetch all of the rows associated with the query
rows = cursor.fetchall()
except Exception as e:
print("Uh oh, can't connect. Invalid dbname, user or password?")
print(e)
tweet_dict = {}
for i in list(range(len(column_names))):
tweet_dict['{}'.format(column_names[i])] = [x[i] for x in rows]
tweets = pd.DataFrame(tweet_dict)
tweets.head()
hashtag_groups = tweets.groupby('hashtag').size().sort_values().reset_index()
hashtag_groups.tail()
docs = tweets.groupby('hashtag')['tweet'].apply(list).reset_index()
docs.head()
def removePatterns(string, replacement, *pats):
for pattern in pats:
string = re.sub(pattern,replacement,string)
return string
s = "I have @3 friends named #Arnold"
removePatterns(s,'', '#','@')
url = r'http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+'
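# Quick illustration: re.sub(url, '', 'more at https://t.co/abc123 today')
# strips the link, leaving 'more at  today'.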
from nltk.tokenize import RegexpTokenizer
tokenizer = RegexpTokenizer(r'\w+')
tokenized_docs = []
for i in docs['tweet']:
document = []
for text in i:
document.append(tokenizer.tokenize(removePatterns(text,'','@','#',url).lower()))
tokenized_docs.append(document)
tokenized_docs[0]
stops_removed = []
for doc in tokenized_docs:
phrases = []
for phrase in doc:
p = [i for i in phrase if i not in stops]
phrases.append(p)
stops_removed.append(phrases)
from nltk.stem.porter import PorterStemmer
p_stemmer = PorterStemmer()
texts = []
for doc in stops_removed:
stemmed = []
for phrase in doc:
try:
stemmed.append([p_stemmer.stem(i) for i in phrase])
except:
pass
texts.append(stemmed)
texts[0]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Visualize the constellation
Step2: Transmit through the channel
Step3: Let's see the obtained error value. However, note that in order to get the proper error value we need to repeat this simulation many times and average the result.
Step4: Delete previous variables
Step6: Move code to a function in order to average results
Step8: Simulate for different values of noise power
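For reference, the only conversion this sweep relies on is from the SNR in dB to a linear ratio; with unit-energy symbols the noise power is simply its inverse. A minimal sketch of the conversion (stated here as an assumption about what the dB2Linear helper computes, for illustration only):
snr_db = 5
snr_linear = 10 ** (snr_db / 10.0)   # dB to linear ratio
noise_power = 1.0 / snr_linear       # assumes unit average symbol energy
print(noise_power)                   # ~0.316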
Step9: Now we can run the simulation. Let's create an AwgnSimulator object and check its parameters.
Step10: We have only added the SNR_db parameter. The * in the name of the SNR_db parameter indicates that this parameter will be unpacked. This means that the _run_simulation method will receive one element of it at a time instead of the whole array.
Step11: We can get the simulation results (a SimulationResults object) with the results attribute of our simulator object. We can call the get_result_values_list method to get the actual values. Let's first check what is stored there.
Step12: We can see that we have our "symbol_error_rate" result, which is a combination of all the times the _run_simulation method was called. Other stored information are the "elapsed_time" and "num_skipped_reps" (more on this one later).
Step13: Now we can finally see the plot and compare the simulated results with theoretical values.
Step14: The simulated values match the theoretical values up to some point, where we can see that we need to simulate more symbols to get the proper symbol error rate. The easy way is to increase the rep_max attribute in our simulator, but this will simulate more symbols also for lower SNR values, which will unnecessarily make the simulation slower. Ideally, we want to simulate a large number of symbols for high SNR values and a low number of symbols for low SNR values.
Step15: Now we can run the simulation. We will also call the set_results_filename so that the results are saved to the disk. The results will be saved using pickle. Furthermore, partial results are also saved (inside a "partial_results" folder) and you can interrupt the simulation and continue from where it was interrupted.
Step16: Calling again will simply load the results from the file and it should be very fast.
Step17: As before we have the "symbol_error_rate", "elapsed_time" and "num_skipped_reps" results.
Step18: The "num_skipped_reps" results indicate how many iterations were skipped due to a SkipThisOne exception being raised in the _run_simulation method for some reason. It is not related to the simulation being stopped earlier due to _keep_going returning False. If you need to know how many iterations were run just add a result in _run_simulation to track that.
Step19: Notice the elapsed times for the low SNR values. The simulation was very fast for low SNR values, while for high SNR values it took longer. You can compare this to the case where the _keep_going method was not implemented and the elapsed time was approximately the same for all SNR values.
Step20: Now the simulated symbol error rate match the theoretical value also in the high SNR regime.
Step21: We can index this object with the name of the desired result.
Step22: This yields a list of Result objects (see the help of the Result class for more), but if you just want the values then call the get_result_values_list method of the SimulationResults class instead.
Step23: You can also get the confidence interval of some result with the get_result_values_confidence_intervals method. You can even use the confidence interval in the implementation of the _keep_going method, if you want.
Step24: To save the results to a file you can use save_to_file method of SimulationResults. That was already done for us in the end of the simulation, since we have called the set_results_filename method of our simulator.
Step25: Another way to use the data in a SimulationResults object is calling its to_dataframe method, which will create a pandas DataFrame from the results.
Step26: Notice that besides the simulation results, the dataframe also includes the simulation parameters (SNR_db in our case) as well as the number of run repetitions, skipped repetitions, and the elapsed time.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import math
import numpy as np
from matplotlib import pyplot as plt
from pyphysim.modulators.fundamental import BPSK, QAM, QPSK, Modulator
from pyphysim.simulations import Result, SimulationResults, SimulationRunner
from pyphysim.util.conversion import dB2Linear
from pyphysim.util.misc import pretty_time, randn_c
np.set_printoptions(precision=2, linewidth=120)
qpsk = QPSK()
bpsk = BPSK()
qam16 = QAM(16)
qam64 = QAM(64)
fig, [[ax11, ax12], [ax21, ax22]] = plt.subplots(figsize=(10, 10),
nrows=2,
ncols=2)
ax11.set_title("BPSK")
ax11.plot(bpsk.symbols.real, bpsk.symbols.imag, "*r", label="BPSK")
ax11.axis("equal")
ax12.set_title("QPSK")
ax12.plot(qpsk.symbols.real, qpsk.symbols.imag, "*r", label="QPSK")
ax12.axis("equal")
ax21.set_title("16-QAM")
ax21.plot(qam16.symbols.real, qam16.symbols.imag, "*r", label="16-QAM")
ax21.axis("equal")
ax22.set_title("64-QAM")
ax22.plot(qam64.symbols.real, qam64.symbols.imag, "*r", label="64-QAM")
ax22.axis("equal");
qpsk = QPSK()
num_symbols = int(1e3)
# We need the data to be in the interval [0, M), where M is the
# number of symbols in the constellation
data_qpsk = np.random.randint(0, qpsk.M, size=num_symbols)
modulated_data_qpsk = qpsk.modulate(data_qpsk)
SNR_dB = 20
snr_linear = dB2Linear(SNR_dB)
noise_power = 1 / snr_linear
# Noise vector
n = math.sqrt(noise_power) * randn_c(num_symbols)
# received_data_qam = su_channel.corrupt_data(modulated_data_qam)
received_data_qpsk = modulated_data_qpsk + n
# Received data
fig, ax = plt.subplots(figsize=(5, 5))
ax.plot(received_data_qpsk.real, received_data_qpsk.imag, "*")
ax.axis("equal")
# fig.show()
demodulated_data_qpsk = qpsk.demodulate(received_data_qpsk)
symbol_error_rate_qpsk = 1 - sum(
demodulated_data_qpsk == data_qpsk) / demodulated_data_qpsk.size
print("Error QPSK:", symbol_error_rate_qpsk)
del SNR_dB, ax, ax11, ax12, ax21, ax22, bpsk, data_qpsk, demodulated_data_qpsk, fig, modulated_data_qpsk, n, noise_power, num_symbols, qam16, qam64, qpsk, received_data_qpsk, snr_linear, symbol_error_rate_qpsk
def simulate_awgn(modulator: Modulator, num_symbols: int, noise_power: float,
num_reps: int):
    """Return the symbol error rate."""
symbol_error_rate = 0.0
for rep in range(num_reps):
data = np.random.randint(0, modulator.M, size=num_symbols)
modulated_data = modulator.modulate(data)
# Noise vector
n = math.sqrt(noise_power) * randn_c(num_symbols)
# received_data_qam = su_channel.corrupt_data(modulated_data_qam)
received_data = modulated_data + n
demodulated_data = modulator.demodulate(received_data)
symbol_error_rate += 1 - sum(
demodulated_data == data) / demodulated_data.size
return symbol_error_rate / num_reps
qpsk = QPSK()
num_symbols = int(1e3)
SNR_dB = 5
snr_linear = dB2Linear(SNR_dB)
noise_power = 1 / snr_linear
# We run twice just to check that the number of simulated symbols and repetitions is enouth to get a proper value
symbol_error1 = simulate_awgn(qpsk, num_symbols, noise_power, num_reps=5000)
symbol_error2 = simulate_awgn(qpsk, num_symbols, noise_power, num_reps=5000)
print(f"Obtained symbol error for SNR {SNR_dB}: {symbol_error1}")
print(f"Obtained symbol error for SNR {SNR_dB}: {symbol_error2}")
# Let's print the theoretical value
print(
f"\nTheoretical symbol error for SNR {SNR_dB}: {qpsk.calcTheoreticalSER(SNR_dB)}"
)
class AwgnSimulator(SimulationRunner):
def __init__(self, SINR_dB_values):
        """
        Parameters
        ----------
        SINR_dB_values : np.ndarray
            An array with the several SNR values to simulate
        """
super().__init__()
# Add the simulation parameters to the `params` attribute.
self.params.add('SNR_db', SINR_dB_values)
# Here we indicate that the SNR_db parameter should be "unpacked".
# What that means is that the `current_parameters` argument passed to `_run_simulation` will only have one value
# and this value will change in subsequent calls to `_run_simulation`
self.params.set_unpack_parameter('SNR_db')
# Number of times the `_run_simulation` method will run when `simulate` method is called
self.rep_max = 500
# We can save anything that does not change in `_run_simulation` as attributes
# We could also add the information to `self.params` and buind the modulator inside `_run_simulation`
self.modulator = QPSK()
def _run_simulation(self, current_parameters):
# Since SNR_db is an "unpacked parameter" a single value is passed to `_run_simulation`.
# We can get the current value as below
sinr_dB = current_parameters['SNR_db']
# Number of symbols generated for this realization
num_symbols = 1000
# Find the noise power from the SNR value (in dB)
snr_linear = dB2Linear(sinr_dB)
noise_power = 1 / snr_linear
# Generate random transmit data and modulate it
data = np.random.randint(0, self.modulator.M, size=num_symbols)
modulated_data = self.modulator.modulate(data)
# Noise vector
n = math.sqrt(noise_power) * randn_c(num_symbols)
# Receive the corrupted data
received_data = modulated_data + n
# Demodulate the received data and compute the number of symbol errors
demodulated_data = self.modulator.demodulate(received_data)
symbol_errors = sum(demodulated_data != data)
# Create a SimulationResults object and save the symbol error rate.
# Note that the symbol error rate is given by the number of symbol errors divided by the number of
# transmited symbols. We want to combine the symbol error rate for the many calls of `_run_simulation`.
# Thus, we choose `Result.RATIOTYPE` as the "update_type". See the documentation of the `Result` class
# for more about it.
simResults = SimulationResults()
simResults.add_new_result(
"symbol_error_rate",
Result.RATIOTYPE,
value=symbol_errors,
total=num_symbols) # Add one each result you want
return simResults
SNR_db = np.linspace(-5, 15, 9)
runner = AwgnSimulator(SNR_db)
# We can see the simulation parameters using the `params` attribute of the `SimulationRunner`
print(runner.params)
runner.simulate()
runner.results
print("Symbol Errors:\n",
np.array(runner.results.get_result_values_list("symbol_error_rate")))
elapsed_times = runner.results.get_result_values_list("elapsed_time")
print("\nElapsed times:\n", np.array(elapsed_times))
# pretty_time receives a float number corresponding to an elapsed time in seconds and returns a nice string
print(f"\nTotal elapsed time: {pretty_time(sum(elapsed_times))}")
# Now let's plot the results
fig, ax = plt.subplots(figsize=(5, 5))
ax.semilogy(SNR_db, qpsk.calcTheoreticalSER(SNR_db), "--", label="Theoretical")
ax.semilogy(SNR_db,
runner.results.get_result_values_list("symbol_error_rate"),
label="Simulated")
ax.set_title("QPSK Symbol Error Rate (AWGN channel)")
ax.set_ylabel("Symbol Error Rate")
ax.set_xlabel("SNR (dB)")
ax.legend();
class AwgnSimulator2(SimulationRunner):
def __init__(self, SINR_dB_values):
super().__init__()
# Add the simulation parameters to the `params` attribute.
self.params.add('SNR_db', SINR_dB_values)
# Here we indicate that the SNR_db parameter should be "unpacked".
# What that means is that the `current_parameters` argument passed to `_run_simulation` will only have one value
# and this value will change in subsequent calls to `_run_simulation`
self.params.set_unpack_parameter('SNR_db')
# Number of times the `_run_simulation` method will run when `simulate` method is called.
# We are using a value 100 times larger than before, but the simulation will not take
# 100 times the previous elapsed time to finish thanks to the implementation of the
# `_keep_going` method that will allow us to skip many of these iterations for low SNR values
self.rep_max = 50000
# Number of symbols generated for this realization
self.num_symbols = 1000
# Used in the implementation of `_keep_going` method. This is the maximum numbers of symbol
# errors we allow before `_run_simulation` is stoped for a given configuration
self.max_symbol_errors = 1. / 1000. * self.num_symbols * self.rep_max
# We can save anything that does not change in `_run_simulation` as attributes
self.modulator = QPSK()
# Set a nice message for the progressbar
self.progressbar_message = "Simulating for SNR {SNR_db}"
# Change the progressbar "style" to something nicer for the notebook
# Possible values are 'text1', 'text2' and 'ipython'
self.update_progress_function_style = "ipython"
def _keep_going(self, current_params, current_sim_results, current_rep):
# Note that we have added a "symbol_errors" result in `_run_simulation` to use here
# Get the last value in the "symbol_errors" results list, which corresponds to the current configuration
cumulated_symbol_errors \
= current_sim_results['symbol_errors'][-1].get_result()
return cumulated_symbol_errors < self.max_symbol_errors
def _run_simulation(self, current_parameters):
# Since SNR_db is an "unpacked parameter" a single value is passed to `_run_simulation`.
# We can get the current value as below
sinr_dB = current_parameters['SNR_db']
# Find the noise power from the SNR value (in dB)
snr_linear = dB2Linear(sinr_dB)
noise_power = 1 / snr_linear
# Generate random transmit data and modulate it
data = np.random.randint(0, self.modulator.M, size=self.num_symbols)
modulated_data = self.modulator.modulate(data)
# Noise vector
n = math.sqrt(noise_power) * randn_c(self.num_symbols)
# Receive the corrupted data
received_data = modulated_data + n
# Demodulate the received data and compute the number of symbol errors
demodulated_data = self.modulator.demodulate(received_data)
symbol_errors = sum(demodulated_data != data)
# Create a SimulationResults object and save the symbol error rate.
# Note that the symbol error rate is given by the number of symbol errors divided by the number of
# transmited symbols. We want to combine the symbol error rate for the many calls of `_run_simulation`.
# Thus, we choose `Result.RATIOTYPE` as the "update_type". See the documentation of the `Result` class
# for more about it.
simResults = SimulationResults()
simResults.add_new_result("symbol_error_rate",
Result.RATIOTYPE,
value=symbol_errors,
total=self.num_symbols)
simResults.add_new_result("symbol_errors",
Result.SUMTYPE,
value=symbol_errors)
return simResults
runner2 = AwgnSimulator2(SNR_db)
# Set the name name of the results file
# If the file extension is not provided, then pickle will be used to save the results
# If an extension is provided, it can be either 'pickle' or 'json'
runner2.set_results_filename("results_qpsk_awgn")
runner2.simulate()
runner2.simulate()
# However, if you increase the value of rep_max and run the `simulate` method again
# it will run only the remaining iterations
runner2.rep_max += 2000
runner2.simulate()
runner2.results
print(runner2.results.get_result_values_list("num_skipped_reps"))
print("Symbol Errors:\n",
np.array(runner2.results.get_result_values_list("symbol_error_rate")))
elapsed_times2 = runner2.results.get_result_values_list("elapsed_time")
print("\nElapsed times:\n", np.array(elapsed_times2))
# pretty_time receives a float number corresponding to an elapsed time in seconds and returns a nice string
print(f"\nTotal elapsed time: {pretty_time(sum(elapsed_times2))}")
SNR_db
# Now let's plot the results
fig, ax = plt.subplots(figsize=(5, 5))
ax.semilogy(SNR_db, qpsk.calcTheoreticalSER(SNR_db), "--", label="Theoretical")
ax.semilogy(SNR_db,
runner2.results.get_result_values_list("symbol_error_rate"),
label="Simulated")
ax.set_title("QPSK Symbol Error Rate (AWGN channel)")
ax.set_ylabel("Symbol Error Rate")
ax.set_xlabel("SNR (dB)")
ax.legend();
runner2.results.get_result_names()
runner2.results["elapsed_time"]
# Get the elapsed time of each configuration (each SNR in our case)
runner2.results.get_result_values_list("elapsed_time")
runner2.results.get_result_values_confidence_intervals("symbol_error_rate")
# If the filename is provided without extension, then 'pickle' is assumed as the file extension
results = SimulationResults.load_from_file("results_qpsk_awgn")
results
results.to_dataframe()
results.params
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Line plot of sunspot data
Step2: Use np.loadtxt to read the data into a NumPy array called data. Then create two new 1d NumPy arrays named years and ssc that have the sequence of year and sunspot counts.
Step3: Make a line plot showing the sunspot count as a function of year.
Step4: Describe the choices you have made in building this visualization and how they make it effective.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import os
assert os.path.isfile('yearssn.dat')
data=np.loadtxt('yearssn.dat')
year = data[:,0]
ssc = data[:,1]
assert len(year)==315
assert year.dtype==np.dtype(float)
assert len(ssc)==315
assert ssc.dtype==np.dtype(float)
f = plt.figure(figsize=(10,5))
plt.plot(year,ssc)
plt.xlabel('Year')
plt.ylabel('Sunspot Count')
plt.xlim(1700, 2015)
plt.ylim(0, 200)
plt.tick_params(axis='y', direction='out', length=5)
plt.box(False)
assert True # leave for grading
x = plt.figure(figsize=(15,15))
plt.subplot(4,1,1)
plt.plot(year[0:100], ssc[0:100])
plt.ylabel("Sunspot Count")
plt.box(False)
plt.subplot(4,1,2)
plt.plot(year[100:200], ssc[100:200])
plt.box(False)
plt.subplot(4,1,3)
plt.plot(year[200:300], ssc[200:300])
plt.box(False)
plt.subplot(4,1,4)
plt.plot(year[300:], ssc[300:])
plt.xlabel("Year")
plt.box(False)
plt.tight_layout()
assert True # leave for grading
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now let's simulate different voting outcomes
Step2: With a two-standard-deviation (sigma) margin we have about 98% certainty (looking at only one side of the bell curve) that the true sentiment is yes. So, if we have about 100 respondents we can be pretty confident that the majority of Sandbox is in favor whenever more than 60% of the vote responses are yes.
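As a quick sanity check on the two-sigma figure, the one-sided coverage of a normal distribution at +2 standard deviations is about 97.7%, which is what the rounded 98% above refers to:
from scipy import stats
print(round(stats.norm.cdf(2), 4))   # 0.9772 -> roughly 98% one-sided confidence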
Step3: Here we can see that the required majority goes steadily down as the number of respondents goes up. At 25 responses a 70% majority is required, while with 300 respondents only a 55% majority is required.
|
<ASSISTANT_TASK:>
Python Code:
N_members = 1254
N_respondents = 100
p = 0.6
N_yes = int(N_members*p)
N_no = int(N_members*(1-p))
runs = 10000 # sufficient to get good statistics
votes_samples = np.vstack(([
np.random.choice(
np.hstack((np.ones(N_yes), np.zeros(N_no))),
size=N_respondents, replace=False, p=None)
for i in range(runs)]))
results = np.mean(votes_samples, axis=1)
print('mean: %.3f Sigma: %.3f' % (np.mean(results), np.std(results)))
print('mean-2*sigma.: %.3f' % (np.mean(results)- 2*np.std(results)))
plt.hist(results, bins=59*2+1, range=[0.2, 0.8])
plt.show()
runs = 50000
certain_result_two_percentile = []
certain_result_two_std = []
N_respondants_list = list(range(10, 20, 5)) + list(range(20, 100, 10)) + list(range(100, 700, 25))
for N_respondents in N_respondants_list:
p = 0.5
N_yes = int(N_members*p)
N_no = int(N_members*(1-p))
votes_samples = np.vstack(([
np.random.choice(
np.hstack((np.ones(N_yes), np.zeros(N_no))),
size=N_respondents, replace=False, p=None)
for i in range(runs)]))
results = np.mean(votes_samples, axis=1)
certain_result_two_percentile.append(np.sort(results)[-int(runs*0.02)])
certain_result_two_std.append(0.5 + 2*np.std(results))
fig, ax = plt.subplots(figsize=(12, 6))
x = N_respondants_list
plt.plot(x, certain_result_two_percentile, '-', label='2-percentile')
plt.plot(x, certain_result_two_std, '-', label='Two standard deviations')
ax.xaxis.set_major_locator(MultipleLocator(25))
ax.yaxis.set_major_locator(MultipleLocator(0.05))
plt.ylabel('Needed sentiment incl. margin')
plt.xlabel('Number of responses')
plt.legend()
plt.show()
runs = int(1e6)
N_respondents = 468
p = 0.5
N_yes = int(N_members*p)
N_no = int(N_members*(1-p))
votes_samples = np.vstack(([
np.random.choice(
np.hstack((np.ones(N_yes), np.zeros(N_no))),
size=N_respondents,
replace=False,
p=None)
for i in range(runs)]))
results = np.mean(votes_samples, axis=1)
print('98percentile: %.2f%%' % (100 * np.sort(results)[-int(runs * 0.02)]))
print('2 sigma value: %.2f%%' % (100 * (0.5 + 2*np.std(results))))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The resulting dataset is a Bunch object
Step2: The features of each sample flower are stored in the data attribute of the dataset
Step3: The information about the class of each sample is stored in the target attribute of the dataset
Step4: The names of the classes are stored in the last attribute, namely target_names
Step5: This data is four dimensional, but we can visualize two of the dimensions
Step6: Quick Exercise
Step7: The data downloaded using the fetch_ scripts are stored locally, in the directory reported by get_data_home().
Step8: Be warned
Step9: The target here is just the digit represented by the data. The data is an array of 64 pixel intensities per sample, one for each cell of an 8x8 image.
Step10: We can see that they're related by a simple reshaping
Step11: Aside... numpy and memory efficiency
Step12: The long integer here is a memory address
Step13: We see now what the features mean. Each feature is a real-valued quantity representing the darkness of one pixel in an 8x8 image of a handwritten digit.
Step14: This example is typically used with an unsupervised learning method called Locally Linear Embedding (LLE), which unrolls the curved three-dimensional data into two dimensions.
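A minimal, hedged sketch of applying scikit-learn's LocallyLinearEmbedding to the data and colors arrays produced by make_s_curve in the code below (the parameter values are illustrative, not tuned):
from sklearn.manifold import LocallyLinearEmbedding
lle = LocallyLinearEmbedding(n_neighbors=12, n_components=2)
data_2d = lle.fit_transform(data)              # data is the (1000, 3) S-curve array
plt.scatter(data_2d[:, 0], data_2d[:, 1], c=colors)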
Step15: Solution
|
<ASSISTANT_TASK:>
Python Code:
from sklearn.datasets import load_iris
iris = load_iris()
iris.keys()
n_samples, n_features = iris.data.shape
print(n_samples)
print(n_features)
# the sepal length, sepal width, petal length and petal width of the first sample (first flower)
print(iris.data[0])
print(iris.data.shape)
print(iris.target.shape)
print(iris.target)
print(iris.target_names)
%matplotlib inline
import matplotlib.pyplot as plt
x_index = 0
y_index = 1
# this formatter will label the colorbar with the correct target names
formatter = plt.FuncFormatter(lambda i, *args: iris.target_names[int(i)])
plt.scatter(iris.data[:, x_index], iris.data[:, y_index], c=iris.target)
plt.colorbar(ticks=[0, 1, 2], format=formatter)
plt.xlabel(iris.feature_names[x_index])
plt.ylabel(iris.feature_names[y_index])
from sklearn import datasets
from sklearn.datasets import get_data_home
get_data_home()
!ls $HOME/scikit_learn_data/
from sklearn.datasets import load_digits
digits = load_digits()
digits.keys()
n_samples, n_features = digits.data.shape
print((n_samples, n_features))
print(digits.data[0])
print(digits.target)
print(digits.data.shape)
print(digits.images.shape)
import numpy as np
print(np.all(digits.images.reshape((1797, 64)) == digits.data))
print(digits.data.__array_interface__['data'])
print(digits.images.__array_interface__['data'])
# set up the figure
fig = plt.figure(figsize=(6, 6)) # figure size in inches
fig.subplots_adjust(left=0, right=1, bottom=0, top=1, hspace=0.05, wspace=0.05)
# plot the digits: each image is 8x8 pixels
for i in range(64):
ax = fig.add_subplot(8, 8, i + 1, xticks=[], yticks=[])
ax.imshow(digits.images[i], cmap=plt.cm.binary, interpolation='nearest')
# label the image with the target value
ax.text(0, 7, str(digits.target[i]))
from sklearn.datasets import make_s_curve
data, colors = make_s_curve(n_samples=1000)
print(data.shape)
print(colors.shape)
from mpl_toolkits.mplot3d import Axes3D
ax = plt.axes(projection='3d')
ax.scatter(data[:, 0], data[:, 1], data[:, 2], c=colors)
ax.view_init(10, -60)
from sklearn.datasets import fetch_olivetti_faces
# fetch the faces data
# Use a script like above to plot the faces image data.
# hint: plt.cm.bone is a good colormap for this data
# %load solutions/02A_faces_plot.py
faces = fetch_olivetti_faces()
# set up the figure
fig = plt.figure(figsize=(6, 6)) # figure size in inches
fig.subplots_adjust(left=0, right=1, bottom=0, top=1, hspace=0.05, wspace=0.05)
# plot the faces:
for i in range(64):
ax = fig.add_subplot(8, 8, i + 1, xticks=[], yticks=[])
ax.imshow(faces.images[i], cmap=plt.cm.bone, interpolation='nearest')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Some global data
Step2: Run Strategy
Step3: View logs
Step4: Generate strategy stats - display all available stats
Step5: Run Benchmark, Retrieve benchmark logs, and Generate benchmark stats
Step6: Plot Equity Curves
Step7: Bar Graph
|
<ASSISTANT_TASK:>
Python Code:
import datetime
import matplotlib.pyplot as plt
import pandas as pd
import pinkfish as pf
import strategy
# Format price data.
pd.options.display.float_format = '{:0.2f}'.format
%matplotlib inline
# Set size of inline plots.
'''note: rcParams can't be in same cell as import matplotlib
or %matplotlib inline
%matplotlib notebook: will lead to interactive plots embedded within
the notebook, you can zoom and resize the figure
%matplotlib inline: only draw static images in the notebook
'''
plt.rcParams["figure.figsize"] = (10, 7)
SP500_Sectors = ['SPY', 'XLB', 'XLE', 'XLF', 'XLI', 'XLK', 'XLP', 'XLU', 'XLV', 'XLY']
Other_Sectors = ['RSP', 'DIA', 'IWM', 'QQQ', 'DAX', 'EEM', 'TLT', 'GLD', 'XHB']
Diversified_Assets = ['SPY', 'TLT', 'NLY', 'GLD']
Diversified_Assets_Reddit = ['IWB', 'IEV', 'EWJ', 'EPP', 'IEF', 'SHY', 'GLD']
Robot_Dual_Momentum_Equities = ['SPY', 'CWI']
Robot_Dual_Momentum_Bonds = ['CSJ', 'HYG']
Robot_Dual_Momentum_Equities_Bonds = ['SPY', 'AGG']
Robot_Wealth = ['IWM', 'SPY', 'VGK', 'IEV', 'EWJ', 'EPP', 'IEF', 'SHY', 'GLD']
# Pick one of the above
symbols = SP500_Sectors
capital = 10000
start = datetime.datetime(2007, 1, 1)
#start = datetime.datetime(*pf.SP500_BEGIN)
end = datetime.datetime.now()
#end = datetime.datetime(2019, 12, 1)
options = {
'use_adj' : True,
'use_cache' : True,
'lookback': 6,
'margin': 1,
'use_absolute_mom': False,
'use_regime_filter': False,
'top_tier': 2
#'top_tier': int(len(symbols)/2)
}
options
s = strategy.Strategy(symbols, capital, start, end, options)
s.run()
s.rlog.head()
s.tlog.tail()
s.dbal.tail()
pf.print_full(s.stats)
benchmark = pf.Benchmark('SPY', s.capital, s.start, s.end, use_adj=True)
benchmark.run()
pf.plot_equity_curve(s.dbal, benchmark=benchmark.dbal)
df = pf.plot_bar_graph(s.stats, benchmark.stats)
df
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Uncomment to reproject
Step2: The area is very big -> 35000 points.
Step3: Residuals
Step4: Residuals of $ Biomass ~ SppRich + Z(x,y) + \epsilon $
Step5: Fitting the empirical variogram to a theoretical model
Step6: Valid parametric empirical variogram
Step7: Exponential Model
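For reference, the exponential variogram model used below can be written as $\gamma(h) = (s - n)\,(1 - e^{-h/a}) + n\,\mathbf{1}_{h \geq 0}$, where $s$ is the sill, $a$ the range and $n$ the nugget.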
Step8: Spherical Variogram
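The spherical model, for comparison, is $\gamma(h) = (s - n)\left[\left(\frac{3h}{2a} - \frac{h^3}{2a^3}\right)\mathbf{1}_{0 \leq h \leq a} + \mathbf{1}_{h > a}\right] + n\,\mathbf{1}_{h \geq 0}$, with the same sill, range and nugget parameters.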
|
<ASSISTANT_TASK:>
Python Code:
new_data.crs = {'init':'epsg:4326'}
new_data = new_data.to_crs("+proj=aea +lat_1=29.5 +lat_2=45.5 +lat_0=37.5 +lon_0=-96 +x_0=0 +y_0=0 +ellps=GRS80 +datum=NAD83 +units=m +no_defs ")
new_data['newLon'] = new_data.apply(lambda c : c.geometry.x, axis=1)
new_data['newLat'] = new_data.apply(lambda c : c.geometry.y, axis=1)
new_data['logBiomass'] = np.log(new_data.plotBiomass)
new_data['logSppN'] = np.log(new_data.SppN)
## Let´s make a simple linear trend here.
import statsmodels.api as sm
import statsmodels.formula.api as smf
## All data
### Now with statsmodels.api
#xx = X.SppN.values.reshape(-1,1)
#xx = sm.add_constant(xx)
#model = sm.OLS(Y.values.reshape(-1,1),xx)
model = smf.ols(formula='logBiomass ~ logSppN',data=new_data)
results = model.fit()
param_model = results.params
results.summary()
new_data['residuals1'] = results.resid
# COnsider the the following subregion
section = new_data[lambda x: (x.LON > -100) & (x.LON < -85) & (x.LAT > 30) & (x.LAT < 35) ]
section.plot(column='SppN')
section.plot(column='plotBiomass')
section.shape
Y_hat = results.predict(section)
ress = (section.logBiomass - Y_hat)
param_model.Intercept
conf_int = results.conf_int(alpha=0.05)
plt.scatter(section.logSppN,section.plotBiomass)
#plt.plot(section.SppN,param_model.Intercept + param_model.SppN * section.SppN)
plt.plot(section.logSppN,Y_hat)
#plt.fill_between(Y_hat,Y_hat + conf_int , Y_hat - conf_int)
conf_int
plt.scatter(section.logSppN,section.residuals1)
plt.scatter(section.newLon,section.newLat,c=section.residuals1)
plt.colorbar()
# Import GPFlow
import GPflow as gf
k = gf.kernels.Matern12(2, lengthscales=0.2, active_dims = [0,1] ) + gf.kernels.Constant(2,active_dims=[0,1])
results.resid.plot.hist()
model = gf.gpr.GPR(section[['newLon','newLat']].as_matrix(),section.residuals1.values.reshape(-1,1),k)
%time model.optimize()
k.get_parameter_dict()
model.get_parameter_dict()
import numpy as np
Nn = 300
dsc = section
predicted_x = np.linspace(min(dsc.newLon),max(dsc.newLon),Nn)
predicted_y = np.linspace(min(dsc.newLat),max(dsc.newLat),Nn)
Xx, Yy = np.meshgrid(predicted_x,predicted_y)
## Fake richness
fake_sp_rich = np.ones(len(Xx.ravel()))
predicted_coordinates = np.vstack([ Xx.ravel(), Yy.ravel()]).transpose()
#predicted_coordinates = np.vstack([section.SppN, section.newLon,section.newLat]).transpose()
predicted_coordinates.shape
means,variances = model.predict_y(predicted_coordinates)
sum(means)
fig = plt.figure(figsize=(16,10), dpi= 80, facecolor='w', edgecolor='w')
#plt.pcolor(Xx,Yy,np.sqrt(variances.reshape(Nn,Nn))) #,cmap=plt.cm.Greens)
plt.pcolormesh(Xx,Yy,np.sqrt(variances.reshape(Nn,Nn)))
plt.colorbar()
plt.scatter(dsc.newLon,dsc.newLat,c=dsc.SppN,edgecolors='')
plt.title("VAriance Biomass")
plt.colorbar()
import cartopy
plt.figure(figsize=(17,11))
proj = cartopy.crs.PlateCarree()
ax = plt.subplot(111, projection=proj)
ax = plt.axes(projection=proj)
#algo = new_data.plot(column='SppN',ax=ax,cmap=colormap,edgecolors='')
#ax.set_extent([-93, -70, 30, 50])
#ax.set_extent([-100, -60, 20, 50])
ax.set_extent([-95, -70, 25, 45])
#ax.add_feature(cartopy.feature.LAND)
ax.add_feature(cartopy.feature.OCEAN)
ax.add_feature(cartopy.feature.COASTLINE)
ax.add_feature(cartopy.feature.BORDERS, linestyle=':')
ax.add_feature(cartopy.feature.LAKES, alpha=0.9)
ax.stock_img()
#ax.add_geometries(new_data.geometry,crs=cartopy.crs.PlateCarree())
#ax.add_feature(cartopy.feature.RIVERS)
mm = ax.pcolormesh(Xx,Yy,means.reshape(Nn,Nn),transform=proj )
#cs = plt.contour(Xx,Yy,np.sqrt(variances).reshape(Nn,Nn),linewidths=2,cmap=plt.cm.Greys_r,linestyles='dotted')
cs = plt.contour(Xx,Yy,means.reshape(Nn,Nn),linewidths=2,colors='k',linestyles='dotted',levels=[4.0,5.0,6.0,7.0,8.0])
plt.clabel(cs, fontsize=16,inline=True,fmt='%1.1f')
#ax.scatter(new_data.lon,new_data.lat,c=new_data.error,edgecolors='',transform=proj,cmap=plt.cm.Greys,alpha=0.2)
plt.colorbar(mm)
plt.title("Predicted Species Richness")
#(x.LON > -90) & (x.LON < -80) & (x.LAT > 40) & (x.LAT < 50)
from external_plugins.spystats import tools
filename = "../HEC_runs/results/low_q/data_envelope.csv"
envelope_data = pd.read_csv(filename)
gvg = tools.Variogram(new_data,'logBiomass',using_distance_threshold=600000)
gvg.envelope = envelope_data
gvg.empirical = gvg.envelope.variogram
gvg.lags = gvg.envelope.lags
vdata = gvg.envelope.dropna()
gvg.plot(refresh=False)
from scipy.optimize import curve_fit
s = 0.345
r = 100000.0
nugget = 0.33
init_vals = [0.34, 50000, 0.33] # for [amp, cen, wid]
best_vals_gaussian, covar_gaussian = curve_fit(exponentialVariogram, xdata=vdata.lags.values, ydata=vdata.variogram.values, p0=init_vals)
#best_vals_gaussian, covar_gaussian = curve_fit(exponentialVariogram, xdata=vdata.lags, ydata=vdata.variogram, p0=init_vals)
#best_vals_gaussian, covar_gaussian = curve_fit(sphericalVariogram, xdata=vdata.lags, ydata=vdata.variogram, p0=init_vals)
gaussianVariogram(hx)
s = best_vals_gaussian[0]
r = best_vals_gaussian[1]
nugget = best_vals_gaussian[2]
fitted_gaussianVariogram = lambda x : exponentialVariogram(x, sill=s, range_a=r, nugget=nugget)
gammas = pd.DataFrame(map(fitted_gaussianVariogram,hx))
import functools
fitted_gaussian2 = functools.partial(gaussianVariogram,s,r,nugget)
hx = np.linspace(0,600000,100)
vg = tools.Variogram(section,'residuals1',using_distance_threshold=500000)
## already fitted previously
s = 0.345255240992
r = 65857.797111
nugget = 0.332850902482
def gaussianVariogram(h,sill=0,range_a=0,nugget=0):
Ih = 1.0 if h >= 0 else 0.0
g_h = ((sill - nugget)*(1 - np.exp(-(h**2 / range_a**2)))) + nugget*Ih
return g_h
def exponentialVariogram(h, sill=0, range_a=0, nugget=0):
    # h can be a scalar or a numpy array
    if isinstance(h, np.ndarray):
        Ih = np.array([1.0 if hx >= 0.0 else 0.0 for hx in h])
    else:
        Ih = 1.0 if h >= 0 else 0.0
    g_h = (sill - nugget)*(1 - np.exp(-h/range_a)) + (nugget*Ih)
    return g_h
h = np.array([2.0, -1.0])
[1.0 if hx >= 0.0 else 0.0 for hx in h]
def sphericalVariogram(h,sill=0,range_a=0,nugget=0):
Ih = 1.0 if h >= 0 else 0.0
I0r = 1.0 if h <= range_a else 0.0
Irinf = 1.0 if h > range_a else 0.0
    g_h = (sill - nugget)*((3*h / float(2*range_a) - h**3 / float(2*range_a**3))*I0r + Irinf) + (nugget*Ih)
return g_h
def theoreticalVariogram(model_function,sill,range_a,nugget):
return lambda x : model_function(x,sill,range_a,nugget)
tvariogram = theoreticalVariogram(gaussianVariogram,s,r,nugget)
%time gs = np.array(list(map(tvariogram, hx)))
x = vg.plot(with_envelope=True,num_iterations=30,refresh=False)
plt.plot(hx,gs,color='blue')
import statsmodels.regression.linear_model as lm
Mdist = vg.distance_coordinates.flatten()
%time vars = np.array(list(map(tvariogram, Mdist)))
CovMat = vars.reshape(len(section),len(section))
X = section.logSppN.values
Y = section.logBiomass.values
plt.imshow(CovMat)
%time model = lm.GLS(Y,X,sigma=CovMat)
%time results = model.fit()
new_data.residuals1
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: HiveContext, a superset of SQLContext, was recommended for most use cases. Please make sure you are using HiveContext now!
Step2: Show time
Step3: (2b) Read from Hive
Step4: In this class, you will use pixnet_user_log_1000 for further work
Step5: How many rows in pixnet_user_log
Step6: Part 3
Step7: Part 4
Step8: Part 5
Step9: Don't forget to stop sc
|
<ASSISTANT_TASK:>
Python Code:
from pyspark.sql import SQLContext, Row
sqlContext = SQLContext(sc)
sqlContext
from pyspark.sql import HiveContext, Row
sqlContext= HiveContext(sc)
sqlContext
jsonfile = "file:///opt/spark-1.4.1-bin-hadoop2.6/examples/src/main/resources/people.json"
df = sqlContext.read.load(jsonfile, format="json")
# TODO: Replace <FILL IN> with appropriate code
df.<FILL IN>
#print df's schema
df.printSchema()
sqlContext.sql("SHOW TABLES").show()
sqlContext.sql("SELECT * FROM pixnet_user_log_1000").printSchema()
from datetime import datetime
start_time = datetime.now()
df2 = sqlContext.sql("SELECT * FROM pixnet_user_log_1000")
end_time = datetime.now()
print df2.count()
print('Duration: {}'.format(end_time - start_time))
df2.select('time').show(2)
#registers this RDD as a temporary table using the given name.
df2.registerTempTable("people")
# Create an UDF for how long some text is
# example from user guide, length function
sqlContext.registerFunction("strLenPython", lambda x: len(x))
# split function for parser
sqlContext.registerFunction("strDate", lambda x: x.split("T")[0])
# put udf with expected columns
results = sqlContext.sql("SELECT author, \
strDate(time) AS dt, \
strLenPython(action) AS lenAct \
FROM people")
# print top 5 results
results.show(5)
sqlContext.cacheTable("people")
start_time = datetime.now()
sqlContext.sql("SELECT * FROM people").count()
end_time = datetime.now()
print('Duration: {}'.format(end_time - start_time))
sqlContext.sql("SELECT strDate(time) AS dt,\
count(distinct author) AS cnt \
FROM people \
GROUP BY strDate(time)").show(5)
sqlContext.uncacheTable("people")
# TODO: Replace <FILL IN> with appropriate code
result = <FILL IN>
sc.stop()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Step 2
Step2: Step 3
Step3: Step 4
Step4: Step 5
Step5: Bonus
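For reference while working through the TODO cells below, a minimal sketch of the scikit-learn KNN workflow looks like this (X and y stand for the feature matrix and response vector built from the NBA columns; the sample player values are made up):
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X, y)                                   # X: per-player stats, y: encoded positions
player = [[1, 1, 0, 1, 2]]                      # hypothetical [ast, stl, blk, tov, pf] line
print(knn.predict(player))                      # predicted position class
print(knn.predict_proba(player))                # class membership probabilities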
|
<ASSISTANT_TASK:>
Python Code:
# read the data into a DataFrame
import pandas as pd
url = 'https://raw.githubusercontent.com/kjones8812/DAT4-students/master/kerry/Final/NBA_players_2015.csv'
nba = pd.read_csv(url, index_col=0)
nba.head()
# examine the columns
# examine the positions
# map positions to numbers
# create feature matrix (X) (use fields: 'ast', 'stl', 'blk', 'tov', 'pf')
# create response vector (y)
# import class
# instantiate with K=5
# fit with data
# create a list to represent a player
# make a prediction
# calculate predicted probabilities
# repeat for K=50
# calculate predicted probabilities
# allow plots to appear in the notebook
%matplotlib inline
import matplotlib.pyplot as plt
# increase default figure and font sizes for easier viewing
plt.rcParams['figure.figsize'] = (6, 4)
plt.rcParams['font.size'] = 14
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Bitcoin vs gold? Litecoin vs silver?
Step2: Next, look up the codes of the international gold and silver futures from the built-in futures symbol data:
Step3: Fetch trading data for the gold and silver futures products above together with Bitcoin and Litecoin, as follows:
Step4: Compute the correlation using only the sign (up/down) of the daily changes, as shown below:
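The sign correlation used here (ECoreCorrType.E_CORE_TYPE_SIGN) is, in effect, an ordinary correlation computed on the direction of the daily changes rather than their magnitudes; a rough standalone equivalent for two aligned p_change series a and b (an illustrative assumption, not abupy's actual implementation) would be:
import numpy as np
def sign_corr(a, b):
    # correlate only whether each day was up or down
    return np.corrcoef(np.sign(a), np.sign(b))[0, 1]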
Step5: From the results we can see that the asset most correlated with Bitcoin is domestic (CN) silver futures, not gold, and that the domestic futures are more strongly correlated than the international futures products.
Step6: Disable the sandbox data. This requires having downloaded the full-market data from Section 19; since the uncompressed HDF5 files are very large, depend on the Python version, and are additionally split by market under Python 2, using the CSV-format cache files is recommended.
Step7: Below we use sign (up/down) correlation to compute the correlation between Bitcoin and every stock in the A-share market, as follows:
Step8: Wrap the results in a DataFrame, as follows:
Step9: First compute the mean correlation between Bitcoin and all A-share stocks. The average is around 0.035, so the gold and silver futures correlations with Bitcoin above are roughly at this level, i.e. not high, as follows:
Step10: First use qcut to bucket the correlation values into quantiles, as follows:
Step11: Work out rough bin thresholds and tally the values using cut with explicit bins, as shown below:
Step12: We can see that the positive correlations with Bitcoin are relatively high in the A-share market, so below we pick roughly 100 A-share symbols most correlated with Bitcoin, keeping only the positively correlated ones, as follows:
Step13: 3. A combined real-world same-day Bitcoin trading strategy
Step18: Next we write the timing strategy AbuBTCDayBuy, as follows:
Step19: The AbuBTCDayBuy written above implements a timing strategy that buys when a big Bitcoin move is predicted for today and the markets most correlated with Bitcoin are rising today:
Step20: Next, run a backtest using 2016-08-09 to 2017-08-08 as the backtest period, as follows:
Step21: From the results above, the win rate reaches 65% and the profit/loss ratio is also fairly high. Since the top 100 A-share symbols are used as the correlation parameter and Bitcoin trades 24 hours a day, the live strategy can fetch data after the A-share market closes, or after the A-share open has stabilized (about two hours after the open). Parameters from other markets can also be used; the example below uses the top 100 most correlated US-market symbols, which trade at night (China time), so the timing does not overlap.
Step22: Using qcut it is clear that, unlike the A-share market, the US market contains a number of symbols with fairly strong negative correlation to Bitcoin:
Step23: Again use cut to find the top 100 symbols with the strongest positive and negative correlations; unlike the A-share market, the US market uses both positive and negative correlations, as follows:
Step24: Below you can see that vote_direction is set according to the sign of the correlation; the AbuBTCDayBuy strategy uses this direction when tallying the votes:
Step25: Similar to the A-share market, run a backtest with AbuBTCDayBuy below; the difference is that the US-market top 100 is used as the correlation parameter, as follows:
|
<ASSISTANT_TASK:>
Python Code:
# 基础库导入
from __future__ import print_function
from __future__ import division
import warnings
warnings.filterwarnings('ignore')
warnings.simplefilter('ignore')
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import ipywidgets
%matplotlib inline
import os
import sys
sys.path.insert(0, os.path.abspath('../'))
import abupy
# 使用沙盒数据,目的是和书中一样的数据环境
abupy.env.enable_example_env_ipython()
from abupy import AbuFuturesCn, AbuFuturesGB, ABuSymbolPd, ABuCorrcoef, ECoreCorrType, EMarketDataFetchMode
from abupy import ECoreCorrType, EMarketTargetType, find_similar_with_se, ABuScalerUtil, abu, ABuProgress
from abupy import AbuProgress, AbuMetricsBase, EDataCacheType, ml, AbuFactorSellNDay
fcn = AbuFuturesCn()
fcn.futures_cn_df[(fcn.futures_cn_df['product'] == '黄金') |
(fcn.futures_cn_df['product'] == '白银')]
fgb = AbuFuturesGB()
fgb.futures_gb_df[(fgb.futures_gb_df['product'] == '伦敦金') | (fgb.futures_gb_df['product'] == '伦敦银') |
(fgb.futures_gb_df['product'] == '纽约黄金') | (fgb.futures_gb_df['product'] == '纽约白银')]
choice_symbols = ['btc', 'ltc', 'AU0', 'AG0', 'XAU', 'XAG', 'SI', 'GC']
panel = ABuSymbolPd.make_kl_df(choice_symbols, start='2014-03-19', end='2017-07-25',
show_progress=True)
# 转换panel轴方向,即可方便获取所有金融时间数据的某一个列
panel = panel.swapaxes('items', 'minor')
# dropna:因为btc, ltc一周交易7天,别的市场5天,dropna即把周六,周日的都drop了
cg_df = panel['p_change'].dropna()
cg_df.tail()
corr_df = ABuCorrcoef.corr_matrix(cg_df, similar_type=ECoreCorrType.E_CORE_TYPE_SIGN)
corr_df.btc.sort_values()[::-1]
# 关闭沙盒数据
abupy.env.disable_example_env_ipython()
# 将数据读取模式设置为本地数据模式,即进行全市场回测时最合适的模式,运行效率高,且分类数据更新和交易回测。
abupy.env.g_data_fetch_mode = EMarketDataFetchMode.E_DATA_FETCH_FORCE_LOCAL
def select_store_cache(use_csv):
if use_csv:
abupy.env.g_data_cache_type = EDataCacheType.E_DATA_CACHE_CSV
else:
abupy.env.g_data_cache_type = EDataCacheType.E_DATA_CACHE_HDF5
print(abupy.env.g_data_cache_type)
use_csv = ipywidgets.Checkbox(True)
_ = ipywidgets.interact(select_store_cache, use_csv=use_csv)
abupy.env.g_market_target = EMarketTargetType.E_MARKET_TARGET_CN
similar_a = find_similar_with_se('btc', start='2013-09-01', end='2016-08-08', corr_type=ECoreCorrType.E_CORE_TYPE_SIGN)
similar_a_pd = pd.DataFrame(similar_a, columns=['symbol', 'sim'])
similar_a_pd.head()
similar_a_pd.sim.mean()
pd.qcut(similar_a_pd.sim, 10).value_counts()
pd.cut(similar_a_pd.sim, bins=[-np.inf, -0.03, 0.1, np.inf]).value_counts()
similar_a_top = similar_a_pd[(similar_a_pd.sim > 0.088)].iloc[2:]
# 添加个投票方向在下面的示例策略中会使用
similar_a_top['vote_direction'] = np.where(similar_a_top.sim > 0, 1, -1)
print(similar_a_top.shape)
similar_a_top.head()
# 只取到2016-08-08,保留一年的数据做回测使用
btc = ABuSymbolPd.make_kl_df('btc', start='2013-09-01', end='2016-08-08')
btc_ml = ml.BtcBigWaveClf(btc=btc)
param_grid = {'max_features': ['sqrt', 'log2'], 'n_estimators': np.arange(50, 500, 50)}
btc_ml.random_forest_classifier_best(param_grid=param_grid)
_ = btc_ml.fit()
from abupy import AbuFactorBuyBase, BuyCallMixin
class AbuBTCDayBuy(AbuFactorBuyBase, BuyCallMixin):
def _init_self(self, **kwargs):
# 市场中与btc最相关的top个股票
self.btc_similar_top = kwargs.pop('btc_similar_top')
# 超过多少个相关股票今天趋势相同就买入
self.btc_vote_val = kwargs.pop('btc_vote_val', 0.60)
def _collect_kl(sim_line):
            """Collect, during initialization, the k-line data of each correlated symbol over the backtest period."""
start = self.kl_pd.iloc[0].date
end = self.kl_pd.iloc[-1].date
kl = ABuSymbolPd.make_kl_df(sim_line.symbol, start=start, end=end)
self.kl_dict[sim_line.symbol] = kl
self.kl_dict = {}
# k线数据进行收集到类字典对象self.kl_dict中
self.btc_similar_top.apply(_collect_kl, axis=1)
def fit_day(self, today):
        """
        :param today: the financial time-series data of the trading day currently being driven
        :return:
        """
# 忽略不符合买入的天(统计周期内前两天, 因为btc的机器学习特证需要三天交易数据)
if self.today_ind < 2:
return None
# 今天,昨天,前天三天的交易数据进行特证转换
btc = self.kl_pd[self.today_ind - 2:self.today_ind + 1]
# 三天的交易数据进行转换后得到btc_today_x
btc_today_x = self.make_btc_today(btc)
# btc_ml并没有在这里传入,实际如果要使用,需要对外部的btc_ml进行本地序列化后,构造读取本地
# 买入条件2: 使用在第12节:机器学习与比特币示例中编写的:信号发出今天比特币会有大行情
if btc_ml.predict(btc_today_x):
# 买入条件1: 当日这100个股票60%以上都是上涨的
vote_val = self.similar_predict(today.date)
if vote_val > self.btc_vote_val:
# 没有使用当天交易日的close等数据,且btc_ml判断的大波动是当日,所以当日买入
return self.buy_today()
def make_btc_today(self, sib_btc):
        """Construct the three-day Bitcoin feature data."""
sib_btc['big_wave'] = (sib_btc.high - sib_btc.low) / sib_btc.pre_close > 0.55
sib_btc['big_wave'] = sib_btc['big_wave'].astype(int)
sib_btc_scale = ABuScalerUtil.scaler_std(
sib_btc.filter(['open', 'close', 'high', 'low', 'volume', 'pre_close',
'ma5', 'ma10', 'ma21', 'ma60', 'atr21', 'atr14']))
# 把标准化后的和big_wave,date_week连接起来
sib_btc_scale = pd.concat([sib_btc['big_wave'], sib_btc_scale, sib_btc['date_week']], axis=1)
# 抽取第一天,第二天的大多数特征分别改名字以one,two为特征前缀,如:one_open,one_close,two_ma5,two_high.....
a0 = sib_btc_scale.iloc[0].filter(['open', 'close', 'high', 'low', 'volume', 'pre_close',
'ma5', 'ma10', 'ma21', 'ma60', 'atr21', 'atr14', 'date_week'])
a0.rename(index={'open': 'one_open', 'close': 'one_close', 'high': 'one_high', 'low': 'one_low',
'volume': 'one_volume', 'pre_close': 'one_pre_close',
'ma5': 'one_ma5', 'ma10': 'one_ma10', 'ma21': 'one_ma21',
'ma60': 'one_ma60', 'atr21': 'one_atr21', 'atr14': 'one_atr14',
'date_week': 'one_date_week'}, inplace=True)
a1 = sib_btc_scale.iloc[1].filter(['open', 'close', 'high', 'low', 'volume', 'pre_close',
'ma5', 'ma10', 'ma21', 'ma60', 'atr21', 'atr14', 'date_week'])
a1.rename(index={'open': 'two_open', 'close': 'two_close', 'high': 'two_high', 'low': 'two_low',
'volume': 'two_volume', 'pre_close': 'two_pre_close',
'ma5': 'two_ma5', 'ma10': 'two_ma10', 'ma21': 'two_ma21',
'ma60': 'two_ma60', 'atr21': 'two_atr21', 'atr14': 'two_atr14',
'date_week': 'two_date_week'}, inplace=True)
# 第三天的特征只使用'open', 'low', 'pre_close', 'date_week',该名前缀today,如today_open,today_date_week
a2 = sib_btc_scale.iloc[2].filter(['big_wave', 'open', 'low', 'pre_close', 'date_week'])
a2.rename(index={'open': 'today_open', 'low': 'today_low',
'pre_close': 'today_pre_close',
'date_week': 'today_date_week'}, inplace=True)
# 将抽取改名字后的特征连接起来组合成为一条新数据,即3天的交易数据特征->1条新的数据
btc_today = pd.DataFrame(pd.concat([a0, a1, a2], axis=0)).T
# 开始将周几进行离散处理
dummies_week_col = btc_ml.df.filter(regex='(^one_date_week_|^two_date_week_|^today_date_week_)').columns
dummies_week_df = pd.DataFrame(np.zeros((1, len(dummies_week_col))), columns=dummies_week_col)
# 手动修改每一天的one hot
one_day_key = 'one_date_week_{}'.format(btc_today.one_date_week.values[0])
dummies_week_df[one_day_key] = 1
two_day_key = 'two_date_week_{}'.format(btc_today.two_date_week.values[0])
dummies_week_df[two_day_key] = 1
today_day_key = 'today_date_week_{}'.format(btc_today.today_date_week.values[0])
dummies_week_df[today_day_key] = 1
btc_today.drop(['one_date_week', 'two_date_week', 'today_date_week'], inplace=True, axis=1)
btc_today = pd.concat([btc_today, dummies_week_df], axis=1)
return btc_today.as_matrix()[:, 1:]
def similar_predict(self, today_date):
        """Have the top 100 market symbols most correlated with Bitcoin vote with their own up/down result for today."""
def _predict_vote(sim_line, _today_date):
kl = self.kl_dict[sim_line.symbol]
if kl is None:
return -1 * sim_line.vote_direction > 0
kl_today = kl[kl.date == _today_date]
if kl_today is None or kl_today.empty:
return -1 * sim_line.vote_direction > 0
# 需要 * sim_line.vote_direction,因为负相关的存在
return kl_today.p_change.values[0] * sim_line.vote_direction > 0
vote_result = self.btc_similar_top.apply(_predict_vote, axis=1, args={today_date, })
# 将所有投票结果进行统计,得到与比特币最相关的这top100个股票的今天投票结果
vote_val = 1 - vote_result.value_counts()[False] / vote_result.value_counts().sum()
return vote_val
buy_factors = [{'btc_similar_top':similar_a_top,
'class': AbuBTCDayBuy}]
sell_factors = [{'class': AbuFactorSellNDay, 'sell_n': 1}]
# 设置市场类型为币类
abupy.env.g_market_target = EMarketTargetType.E_MARKET_TARGET_TC
#买入因子,卖出因子等依然使用相同的设置,如下所示:
read_cash = 1000000
abupy.beta.atr.g_atr_pos_base = 0.5
abu_result_tuple, kl_pd_manger = abu.run_loop_back(read_cash,
buy_factors,
sell_factors,
start='2016-08-09',
end='2017-08-08',
choice_symbols=['btc'], n_process_pick=1)
AbuMetricsBase.show_general(*abu_result_tuple, returns_cmp=True)
abupy.env.g_market_target = EMarketTargetType.E_MARKET_TARGET_US
similar_us = find_similar_with_se('btc', start='2013-09-01', end='2016-08-08', corr_type=ECoreCorrType.E_CORE_TYPE_SIGN)
similar_us_pd = pd.DataFrame(similar_us, columns=['symbol', 'sim'])
similar_us_pd.head()
pd.qcut(similar_us_pd.sim, 10).value_counts()
pd.cut(similar_us_pd.sim, bins=[-np.inf, -0.072, 0.072, np.inf]).value_counts()
sim_us_top = similar_us_pd[(similar_us_pd.sim > 0.071) | (similar_us_pd.sim < -0.070)].iloc[2:]
sim_us_top['vote_direction'] = np.where(sim_us_top.sim > 0, 1, -1)
pd.options.display.max_rows = 6
sim_us_top
# 设置市场类型为港股
abupy.env.g_market_target = EMarketTargetType.E_MARKET_TARGET_TC
buy_factors = [{'btc_similar_top': sim_us_top, 'btc_vote_val': 0.55, 'class': AbuBTCDayBuy}]
abu_result_tuple, kl_pd_manger = abu.run_loop_back(read_cash,
buy_factors,
sell_factors,
start='2016-08-09',
end='2017-08-08',
choice_symbols=['btc'], n_process_pick=1)
AbuMetricsBase.show_general(*abu_result_tuple, returns_cmp=True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Take $\beta_0=3$ and $\beta_1=5$; then we can generate a dataset from our statistical model, such as the one pictured below. In black we plot the line $y=3+5x$, and in red we plot the line of best fit, in the least squares sense.
Step2: Of course, the red and the black lines are not identical, because our datapoints are a random sample from our statistical model. If we were to resample our data, we would get an entirely different set of datapoints, and consequently a new set of estimates.
Step3: Here we see that the estimates for the slope of the least squares line have a histogram like the one shown below.
Step4: So far we have used simulation to show that estimates of statistics of interest are inherently variable across datasets. In practice, however, we only collect one dataset, but we still want to quantify the variability of our estimate. It turns out that the simulation procedure from above is still useful to us.
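The key trick is to resample the one observed dataset with replacement and recompute the statistic on every resample; the spread of those recomputed values approximates the sampling variability. A generic sketch for a 1-d sample and an arbitrary statistic:
import numpy as np
def bootstrap_statistic(sample, stat, n_boot=1000):
    n = len(sample)
    boot_vals = np.zeros(n_boot)
    for i in range(n_boot):
        resample = sample[np.random.randint(0, n, n)]   # draw n indices with replacement
        boot_vals[i] = stat(resample)
    return boot_vals
# e.g. np.std(bootstrap_statistic(x, np.mean)) estimates the standard error of the mean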
Step5: Now that we have information about when each trial begins, we can slice our data so that we collect a window around each trial. Here we'll define the window, and create a new array of shape (trials, neurons, times). We'll use the phrase epochs interchangeably with trials.
Step6: We'll now bootstrap lower / upper bounds for the activity at each timepoint in a trial. We'll do this by considering the data across trials.
Step7: Finally, we can plot the timepoint that had the most activity in each bootstrap iteration. This gives us an idea for the variability across trials, and where in time the activity tends to be clustered.
Step8: ADVANCED QUESTION
|
<ASSISTANT_TASK:>
Python Code:
import sys
sys.path.append('../src/')
import opencourse as oc
import numpy as np
import scipy.stats as stt
import matplotlib.pyplot as plt
import pandas as pd
from scipy import polyfit
from scipy.ndimage.filters import gaussian_filter1d
%matplotlib inline
# Below we'll plot the PDF of a normal distribution.
mean, std = 0, 1
inputs = np.arange(-4, 4, .01)
prob = stt.norm.pdf(inputs, mean, std)
fig, ax = plt.subplots()
ax.plot(inputs, prob)
def simulate_data(n_datapoints, beta_1, beta_0, noise_func=np.random.randn):
x = np.random.rand(n_datapoints)
noise = noise_func(n_datapoints)
y = beta_1 * x + beta_0 + noise
return x, y
def fit_model_to_data(x, y, model_degree=1):
betas_hat = polyfit(x, y, model_degree)
return betas_hat
n_datapoints = 25
beta_1 = 5
beta_0 = 3
x, y = simulate_data(n_datapoints, beta_1, beta_0)
beta_1_hat, beta_0_hat = fit_model_to_data(x, y)
# Create "test" predicted points for our two models
x_pred = np.linspace(x.min(), x.max(), 1000)
# The "true" model
y_pred_true = x_pred * beta_1 + beta_0
y_pred_model = x_pred * beta_1_hat + beta_0_hat
# Now plot the sample datapoints and our model
fig, ax = plt.subplots()
ax.plot(x, y, 'b.')
ax.plot(x_pred, y_pred_true, 'k')
ax.plot(x_pred, y_pred_model, 'r')
n_datapoints = 25
n_simulations = 1000
beta_1 = 5
beta_0 = 3
betas = np.zeros([n_simulations, 2])
simulations = np.zeros([n_simulations, x_pred.shape[-1]])
for ii in range(n_simulations):
x = np.random.rand(n_datapoints)
noise = np.random.randn(n_datapoints)
y = beta_1 * x + beta_0 + noise
beta_1_hat, beta_0_hat = polyfit(x, y, 1)
y_pred_model = x_pred * beta_1_hat + beta_0_hat
betas[ii] = [beta_0_hat, beta_1_hat]
simulations[ii] = y_pred_model
fig, axs = plt.subplots(1, 2, sharey=True)
for ii, (ax, ibeta) in enumerate(zip(axs, betas.T)):
ax.hist(ibeta)
ax.set_title("Estimated Beta {}\nMean: {:.3f}\nSTD: {:.3f}".format(
ii, ibeta.mean(), ibeta.std()))
def simulate_multiple_data_sets(beta_1, beta_0, sample_sizes,
noise_func=np.random.randn, n_simulations=1000,
n_col=2):
n_row = int(np.ceil(len(sample_sizes) / float(n_col)))
fig, axs = plt.subplots(n_row, n_col, figsize=(3*n_col, 3*n_row), sharex=True)
for n_samples, ax in zip(sample_sizes, axs.ravel()):
all_betas = np.zeros([n_simulations, 2])
for ii in range(n_simulations):
x, y = simulate_data(n_samples, beta_1, beta_0, noise_func=noise_func)
betas = fit_model_to_data(x, y)
all_betas[ii] = betas
ax.hist(all_betas[:, 0])
ax.set_title('Sample size: {}'.format(n_samples))
_ = fig.suptitle(r'Distribution of $\beta_1$', fontsize=20)
return fig
### QUESTION ANSWER
sample_sizes = [10, 20, 40, 80]
n_simulations = 1000
fig = simulate_multiple_data_sets(beta_1, beta_0, sample_sizes)
_ = plt.setp(fig.axes, xlim=[0, 8])
### QUESTION ANSWER
def my_noise_func(n):
noise = 4 * np.random.beta(1, 3, n)
return noise - np.mean(noise)
fig, ax = plt.subplots()
ax.hist(my_noise_func(100), bins=20)
### QUESTION ANSWER
# Effect of different noise distributions on the empirical mean
# Define noise function here
empirical_means = np.zeros(n_simulations)
# Run simulations
for ii in range(n_simulations):
x, y = simulate_data(n_datapoints, beta_1, beta_0, noise_func=my_noise_func)
empirical_means[ii] = np.mean(y)
# Plot the results
fig, ax = plt.subplots()
_ = ax.hist(empirical_means, bins=20)
### QUESTION ANSWER
# Fit multiple datasets and show how error dist changes betas
fig = simulate_multiple_data_sets(beta_1, beta_0, sample_sizes,
noise_func=my_noise_func)
_ = plt.setp(fig.axes, xlim=[0, 8])
from scipy import io as si
data = si.loadmat('../../data/StevensonV2.mat')
# This defines the neuron and target locations we care about
neuron_n = 192
target_location = [0.0706, -0.0709]
# Extract useful information from our dataset
all_spikes = data['spikes']
spikes = all_spikes[neuron_n]
time = data['time']
# This is the onset of each trial
onsets = data['startBins'][0]
# This determines where the target was on each trial
locations = data['targets']
locations = locations.T[:, :2]
unique_locations = np.unique(locations)
n_trials = onsets.shape[0]
# Define time and the sampling frequency of data
time_step = data['timeBase']
sfreq = (1. / time_step).squeeze()
# Define trials with the target location
diff = (locations - target_location) < 1e-4
mask_use = diff.all(axis=1)
# Low-pass the spikes to smooth
spikes_low = gaussian_filter1d(spikes.astype(float),5)
# Convert data into epochs
wmin, wmax = -5., 15.
epochs = []
for i_onset in onsets[mask_use]:
this_spikes = spikes_low[i_onset + int(wmin): i_onset + int(wmax)]
epochs.append(this_spikes)
epochs = np.array(epochs)
n_ep = len(epochs)
# Define time for our epochs
tmin = wmin / sfreq
tmax = wmax / sfreq
times = np.linspace(tmin, tmax, num=epochs.shape[-1])
# Bootstrap lo / hi at each time point
n_boots = 1000
boot_means = np.zeros([n_boots, len(times)])
for ii, i_time in enumerate(times):
for jj in range(n_boots):
sample = epochs[:, ii][np.random.randint(0, n_ep, n_ep)]
boot_means[jj, ii] = sample.mean()
max_times = boot_means.argmax(axis=1)
clo, chi = np.percentile(boot_means, [2.5, 97.5], axis=0)
# Plot the mean firing rate across trials
fig, ax = plt.subplots()
ax.plot(times, epochs.mean(0), 'k')
ax.fill_between(times, clo, chi, alpha=.3, color='k')
ax.set_title('Mean +/- 95% CI PSTH')
plt.autoscale(tight=True)
fig, ax = plt.subplots()
_ = ax.hist(times[max_times], bins=20)
ax.set_title('Maximum time in each bootstrap')
### QUESTION ANSWER
sample_sizes = [15, 50, 100, 150, 300, 500, 1000, 10000]
n_simulations = 1000
stat = np.mean
random_func = np.random.randn
#
standard_errors = pd.DataFrame(index=sample_sizes,
columns=['se', 'se_bootstrap'])
for n_sample in sample_sizes:
sample = random_func(n_sample)
se = np.std(sample) / np.sqrt(n_sample)
simulation_means = np.zeros(n_simulations)
for ii in range(n_simulations):
boot_sample = sample[np.random.randint(0, n_sample, n_sample)]
simulation_means[ii] = stat(boot_sample)
se_boot = np.std(simulation_means)
standard_errors.loc[n_sample] = [se, se_boot]
standard_errors
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: SeisCL is controlled by a single class in Python that groups all the information needed to perform forward and adjoint modeling.
Step2: The class creates all parameters with default values
Step3: Simulation Grid
Step4: Velocity Model
Step5: Sources and receivers
Step6: All this function did is to define the attributes SeisCL.src_pos_all and SeisCL.rec_pos_all.
Step7: We see here that we have one source, at x=36m, z= 36m (just outside the absorbing boundary), and that the source type is 100 (explosive source). The format of this array is [x, y, z, srcid, src_type], where all shots having the same src_id are fired simultaneously.
Step8: We see here that the first receiver is at x=40m and z=36m. The receiver belongs to source id 0 and has receiver id 1. The receiver id should be unique and start at 1 for the first receiver. The format of this array is [x, y, z, srcid, recid, src_type, none, none, none].
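Purely as an illustration of the layout described above (the acquisition below is generated by surface_acquisition_2d, not built by hand, and the placeholder values here are assumptions):
import numpy as np
# one column per source: [x, y, z, srcid, src_type]; type 100 = explosive
src_pos = np.array([[36.0, 0.0, 36.0, 0, 100]]).T
# one column per receiver: [x, y, z, srcid, recid, src_type, 0, 0, 0]; recid starts at 1
rec_pos = np.array([[40.0, 0.0, 36.0, 0, 1, 0, 0, 0, 0]]).T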
Step9: All hdf5 files are created in a temporary directory, called seiscl by default
Step10: Each file contains specific information required for simulations
|
<ASSISTANT_TASK:>
Python Code:
from SeisCL import SeisCL
import numpy as np
import matplotlib.pyplot as plt
import os
seis = SeisCL()
help(seis.__init__)
seis.ND = 2 # Number of dimension
seis.N = np.array([250, 250]) # Grid size [NZ, NX, NY]
seis.dh = dh = 2 # Grid spatial spacing
seis.dt = dt = 0.25e-03 # Time step size
seis.NT = NT = 1000 # Number of time steps
vp = 3500
vs = 2000
rho = 2000
vp_a = np.zeros(seis.csts['N']) + vp
vs_a = np.zeros(seis.csts['N']) + vs
rho_a = np.zeros(seis.csts['N']) + rho
model_dict = {"vp": vp_a, "rho": rho_a, "vs": vs_a}
seis.surface_acquisition_2d(ds=1000)
print(seis.src_pos_all)
print(seis.rec_pos_all[:,0])
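# A closer look at the first few receivers (one column per receiver), following the
# [x, y, z, srcid, recid, ...] layout described above. This only inspects the array
# we already have.
print(seis.rec_pos_all[:, :3])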
seis.set_forward([0], model_dict, withgrad=False)
seis.execute()
datafd = seis.read_data()[0]
os.listdir("./seiscl")
clip = 0.1
extent = [min(seis.rec_pos_all[0]), max(seis.rec_pos_all[0]), (datafd.shape[0]-1)*dt, 0]
vmax = np.max(datafd) * clip
vmin = -vmax
fig, ax = plt.subplots(1, 1, figsize=[4, 6])
ax.imshow(datafd, aspect='auto', vmax=vmax, vmin=vmin, extent = extent,
interpolation='bilinear', cmap=plt.get_cmap('Greys'))
ax.set_title("FD solution to elastic wave equation in 2D \n", fontsize=16, fontweight='bold')
ax.set_xlabel("Receiver position (m)")
ax.set_ylabel("time (s)")
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Form
Step2: With a password
Step3: To execute an instruction when the Ok button is clicked
Step4: Animated output
Step5: In order to have a fast display, the function show_fib is called for each possible value. If it is a graph, all possible graphs will be generated.
Step6: A form with ipywidgets
|
<ASSISTANT_TASK:>
Python Code:
from jyquickhelper import add_notebook_menu
add_notebook_menu()
from pyquickhelper.ipythonhelper import open_html_form
params = {"module":"", "version":"v..."}
open_html_form(params, "fill the fields", "form1")
form1
from pyquickhelper.ipythonhelper import open_html_form
params= {"login":"", "password":""}
open_html_form(params, "credential", "credential")
credential
my_address = None
def custom_action(x):
x["combined"] = x["first_name"] + " " + x["last_name"]
return str(x)
from pyquickhelper.ipythonhelper import open_html_form
params = { "first_name":"", "last_name":"" }
open_html_form (params, title="enter your name", key_save="my_address", hook="custom_action(my_address)")
my_address
from pyquickhelper.ipythonhelper import StaticInteract, RangeWidget, RadioWidget
def show_fib(N):
sequence = ""
a, b = 0, 1
for i in range(N):
sequence += "{0} ".format(a)
a, b = b, a + b
return sequence
StaticInteract(show_fib,
N=RangeWidget(1, 100, default=10))
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
def plot(amplitude, color):
fig, ax = plt.subplots(figsize=(4, 3),
subplot_kw={'axisbelow':True})
ax.grid(color='w', linewidth=2, linestyle='solid')
x = np.linspace(0, 10, 1000)
ax.plot(x, amplitude * np.sin(x), color=color,
lw=5, alpha=0.4)
ax.set_xlim(0, 10)
ax.set_ylim(-1.1, 1.1)
return fig
StaticInteract(plot,
amplitude=RangeWidget(0.1, 0.5, 0.1, default=0.4),
color=RadioWidget(['blue', 'green', 'red'], default='red'))
from IPython.display import display
from ipywidgets import Text
last_name = Text(description="Last Name")
first_name = Text(description="First Name")
display(last_name)
display(first_name)
first_name.value, last_name.value
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Although this works, it is visually unappealing. We can improve on this using styles and themes.
Step2: As our applications get more complicated we must give greater thought to the layout. The following example comes from the TkDocs site.
Step4: Matplotlib
|
<ASSISTANT_TASK:>
Python Code:
import tkinter as tk
class Application(tk.Frame):
def __init__(self, master=None):
tk.Frame.__init__(self, master)
self.pack()
self.createWidgets()
def createWidgets(self):
self.hi_there = tk.Button(self)
self.hi_there["text"] = "Hello World\n(click me)"
self.hi_there["command"] = self.say_hi
self.hi_there.pack(side="top")
self.QUIT = tk.Button(self, text="QUIT", fg="red",
command=root.destroy)
self.QUIT.pack(side="bottom")
def say_hi(self):
print("hi there, everyone!")
root = tk.Tk()
app = Application(master=root)
app.mainloop()
import tkinter as tk
from tkinter import ttk
class Application(ttk.Frame):
def __init__(self, master=None):
super().__init__(master, padding="3 3 12 12")
self.grid(column=0, row=0, )
self.createWidgets()
self.master.title('Test')
def createWidgets(self):
self.hi_there = ttk.Button(self)
self.hi_there["text"] = "Hello World\n(click me)"
self.hi_there["command"] = self.say_hi
self.QUIT = ttk.Button(self, text="QUIT", style='Alert.TButton', command=root.destroy)
for child in self.winfo_children():
child.grid_configure(padx=10, pady=10)
def say_hi(self):
print("hi there, everyone!")
root = tk.Tk()
app = Application(master=root)
s = ttk.Style()
s.configure('TButton', font='helvetica 24')
s.configure('Alert.TButton', foreground='red')
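# Note: 'Alert.TButton' is a style derived from 'TButton', so it keeps the
# helvetica 24 font configured above and only overrides the foreground colour.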
root.mainloop()
from tkinter import *
from tkinter import ttk
def calculate(*args):
try:
value = float(feet.get())
meters.set((0.3048 * value * 10000.0 + 0.5)/10000.0)
except ValueError:
pass
root = Tk()
root.title("Feet to Meters")
mainframe = ttk.Frame(root, padding="3 3 12 12")
mainframe.grid(column=0, row=0, sticky=(N, W, E, S))
mainframe.columnconfigure(0, weight=1)
mainframe.rowconfigure(0, weight=1)
feet = StringVar()
meters = StringVar()
feet_entry = ttk.Entry(mainframe, width=7, textvariable=feet)
feet_entry.grid(column=2, row=1, sticky=(W, E))
ttk.Label(mainframe, textvariable=meters).grid(column=2, row=2, sticky=(W, E))
ttk.Button(mainframe, text="Calculate", command=calculate).grid(column=3, row=3, sticky=W)
ttk.Label(mainframe, text="feet").grid(column=3, row=1, sticky=W)
ttk.Label(mainframe, text="is equivalent to").grid(column=1, row=2, sticky=E)
ttk.Label(mainframe, text="meters").grid(column=3, row=2, sticky=W)
for child in mainframe.winfo_children(): child.grid_configure(padx=5, pady=5)
feet_entry.focus()
root.bind('<Return>', calculate)
root.mainloop()
"""
Do a mouseclick somewhere, move the mouse to some destination, release
the button. This class gives click- and release-events and also draws
a line or a box from the click-point to the actual mouseposition
(within the same axes) until the button is released. Within the
method 'self.ignore()' it is checked whether the button from eventpress
and eventrelease are the same.
"""
from matplotlib.widgets import RectangleSelector
import matplotlib.pyplot as plt
import matplotlib.cbook as cbook
def line_select_callback(eclick, erelease):
'eclick and erelease are the press and release events'
x1, y1 = eclick.xdata, eclick.ydata
x2, y2 = erelease.xdata, erelease.ydata
print ("(%3.2f, %3.2f) --> (%3.2f, %3.2f)" % (x1, y1, x2, y2))
print (" The button you used were: %s %s" % (eclick.button, erelease.button))
def toggle_selector(event):
print (' Key pressed.')
if event.key in ['Q', 'q'] and toggle_selector.RS.active:
print (' RectangleSelector deactivated.')
toggle_selector.RS.set_active(False)
if event.key in ['A', 'a'] and not toggle_selector.RS.active:
print (' RectangleSelector activated.')
toggle_selector.RS.set_active(True)
image_file = cbook.get_sample_data('grace_hopper.png')
image = plt.imread(image_file)
fig, current_ax = plt.subplots()
plt.imshow(image)
toggle_selector.RS = RectangleSelector(current_ax,
line_select_callback,
drawtype='box', useblit=True,
button=[1,3], # don't use middle button
minspanx=5, minspany=5,
spancoords='pixels')
plt.connect('key_press_event', toggle_selector)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Explore overfitting and underfitting
Step2: Download the IMDB dataset
Step3: Let's look at one of the resulting multi-hot vectors. The word indices are sorted by frequency, so we expect more 1-values near index zero. Let's look at the distribution.
Step4: Demonstrate overfitting
Step5: Create a smaller model
Step6: Train the model using the same data.
Step7: Create a bigger model
Step8: Again, train this model using the same data.
Step9: Plot the training and validation loss
Step10: Notice that the larger network begins overfitting almost immediately, after just one epoch, and overfits much more severely. The more capacity a network has, the quicker it can model the training data (resulting in a low training loss), but the more susceptible it is to overfitting (resulting in a large gap between the training and validation loss).
Step11: l2(0.001) means that every coefficient in the layer's weight matrix adds 0.001 * weight_coefficient_value**2 to the total loss of the network. Note that because this penalty is only added at training time, the loss of this network will be higher at training time than at test time.
Step12: As you can see, the L2-regularized model is much more resistant to overfitting than the baseline model, even though both models have the same number of parameters.
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
import tensorflow.compat.v1 as tf
from tensorflow import keras
import numpy as np
import matplotlib.pyplot as plt
print(tf.__version__)
NUM_WORDS = 10000
(train_data, train_labels), (test_data, test_labels) = keras.datasets.imdb.load_data(num_words=NUM_WORDS)
def multi_hot_sequences(sequences, dimension):
    # Create an all-zero matrix with shape (len(sequences), dimension)
results = np.zeros((len(sequences), dimension))
for i, word_indices in enumerate(sequences):
        results[i, word_indices] = 1.0  # set specific indices of results[i] to 1s
return results
train_data = multi_hot_sequences(train_data, dimension=NUM_WORDS)
test_data = multi_hot_sequences(test_data, dimension=NUM_WORDS)
plt.plot(train_data[0])
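# Each review is now a length-NUM_WORDS 0/1 vector; frequent words have low indices,
# which is why the plot above is dense near zero (an added check).
print(train_data.shape)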
baseline_model = keras.Sequential([
    # `input_shape` is only required here so that `.summary` works
keras.layers.Dense(16, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dense(16, activation=tf.nn.relu),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
baseline_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy', 'binary_crossentropy'])
baseline_model.summary()
baseline_history = baseline_model.fit(train_data,
train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
smaller_model = keras.Sequential([
keras.layers.Dense(4, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dense(4, activation=tf.nn.relu),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
smaller_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy', 'binary_crossentropy'])
smaller_model.summary()
smaller_history = smaller_model.fit(train_data,
train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
bigger_model = keras.models.Sequential([
keras.layers.Dense(512, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dense(512, activation=tf.nn.relu),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
bigger_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy','binary_crossentropy'])
bigger_model.summary()
bigger_history = bigger_model.fit(train_data, train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
def plot_history(histories, key='binary_crossentropy'):
plt.figure(figsize=(16,10))
for name, history in histories:
val = plt.plot(history.epoch, history.history['val_'+key],
'--', label=name.title()+' Val')
plt.plot(history.epoch, history.history[key], color=val[0].get_color(),
label=name.title()+' Train')
plt.xlabel('Epochs')
plt.ylabel(key.replace('_',' ').title())
plt.legend()
plt.xlim([0,max(history.epoch)])
plot_history([('baseline', baseline_history),
('smaller', smaller_history),
('bigger', bigger_history)])
l2_model = keras.models.Sequential([
keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l2(0.001),
activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l2(0.001),
activation=tf.nn.relu),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
l2_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy', 'binary_crossentropy'])
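# Rough illustration (an added check, not from the original tutorial) of the extra
# term added by kernel_regularizer=l2(0.001): each regularized layer contributes
# 0.001 * sum(kernel_weights ** 2) to the training loss.
l2_penalty = sum(0.001 * np.sum(np.square(layer.get_weights()[0]))
                 for layer in l2_model.layers[:2])
print('initial L2 penalty:', l2_penalty)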
l2_model_history = l2_model.fit(train_data, train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
plot_history([('baseline', baseline_history),
('l2', l2_model_history)])
dpt_model = keras.models.Sequential([
keras.layers.Dense(16, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dropout(rate=0.5),
keras.layers.Dense(16, activation=tf.nn.relu),
keras.layers.Dropout(rate=0.5),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
dpt_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy','binary_crossentropy'])
dpt_model_history = dpt_model.fit(train_data, train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
plot_history([('baseline', baseline_history),
('dropout', dpt_model_history)])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The idea is to look at the title of a newspaper article and figure out whether the article came from the New York Times or from TechCrunch. There are very sophisticated approaches that we can try, but for now, let's go with something very simple.
Step3: Let's do some regular expression parsing in BigQuery to get the source of the newspaper article from the URL. For example, if the url is http
Step5: Now that we have good parsing of the URL to get the source, let's put together a dataset of source and titles. This will be our labeled dataset for machine learning.
Step6: For ML training, we will need to split our dataset into training and evaluation datasets (and perhaps an independent test dataset if we are going to do model or feature selection based on the evaluation dataset). A simple way to do this is to use the hash of a well-distributed column in our data (See https
Step7: <h2> TensorFlow code </h2>
Step8: Let's make sure the code works locally on a small dataset for a few steps.
Step9: When I ran it, I got a 41% accuracy after a few steps. Because batchsize=32, 200 steps is essentially 6400 examples -- the full dataset is 72,000 examples, so this is not even the full dataset. And already, we are doing better than random chance.
Step10: Training finished with an accuracy of 73%. Obviously, this was trained on a really small dataset and with more data will hopefully come even greater accuracy.
Step11: <h2> Use model to predict </h2>
|
<ASSISTANT_TASK:>
Python Code:
# change these to try this notebook out
BUCKET = 'cloud-training-demos-ml'
PROJECT = 'cloud-training-demos'
REGION = 'us-central1'
import os
os.environ['BUCKET'] = BUCKET
os.environ['PROJECT'] = PROJECT
os.environ['REGION'] = REGION
%datalab project set -p $PROJECT
!pip install --upgrade tensorflow
import tensorflow as tf
print tf.__version__
%bq query
SELECT
url, title, score
FROM
`bigquery-public-data.hacker_news.stories`
WHERE
LENGTH(title) > 10
AND score > 10
LIMIT 10
query = """
SELECT
  ARRAY_REVERSE(SPLIT(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.'))[OFFSET(1)] AS source,
  COUNT(title) AS num_articles
FROM
  `bigquery-public-data.hacker_news.stories`
WHERE
  REGEXP_CONTAINS(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.com$')
  AND LENGTH(title) > 10
GROUP BY
  source
ORDER BY num_articles DESC
LIMIT 10
"""
import google.datalab.bigquery as bq
df = bq.Query(query).execute().result().to_dataframe()
df
query = """
SELECT source, REGEXP_REPLACE(title, '[^a-zA-Z0-9 $.-]', ' ') AS title FROM
  (SELECT
    ARRAY_REVERSE(SPLIT(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.'))[OFFSET(1)] AS source,
    title
  FROM
    `bigquery-public-data.hacker_news.stories`
  WHERE
    REGEXP_CONTAINS(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.com$')
    AND LENGTH(title) > 10
  )
WHERE (source = 'github' OR source = 'nytimes' OR source = 'techcrunch')
"""
df = bq.Query(query + " LIMIT 10").execute().result().to_dataframe()
df.head()
traindf = bq.Query(query + " AND ABS(MOD(FARM_FINGERPRINT(title), 4)) > 0").execute().result().to_dataframe()
evaldf = bq.Query(query + " AND ABS(MOD(FARM_FINGERPRINT(title), 4)) = 0").execute().result().to_dataframe()
traindf.head()
traindf['source'].value_counts()
evaldf['source'].value_counts()
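# Quick check that the hash-based split described above gives roughly a 75/25
# train/eval ratio (an added sanity check).
print 'train={} eval={}'.format(len(traindf), len(evaldf))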
traindf.to_csv('train.csv', header=False, index=False, encoding='utf-8', sep='\t')
evaldf.to_csv('eval.csv', header=False, index=False, encoding='utf-8', sep='\t')
!head -3 train.csv
!wc -l *.csv
%bash
gsutil cp *.csv gs://${BUCKET}/txtcls1/
import tensorflow as tf
from tensorflow.contrib import lookup
from tensorflow.python.platform import gfile
print tf.__version__
MAX_DOCUMENT_LENGTH = 5
PADWORD = 'ZYXW'
# vocabulary
lines = ['Some title', 'A longer title', 'An even longer title', 'This is longer than doc length']
# create vocabulary
vocab_processor = tf.contrib.learn.preprocessing.VocabularyProcessor(MAX_DOCUMENT_LENGTH)
vocab_processor.fit(lines)
with gfile.Open('vocab.tsv', 'wb') as f:
f.write("{}\n".format(PADWORD))
for word, index in vocab_processor.vocabulary_._mapping.iteritems():
f.write("{}\n".format(word))
N_WORDS = len(vocab_processor.vocabulary_)
print '{} words into vocab.tsv'.format(N_WORDS)
# can use the vocabulary to convert words to numbers
table = lookup.index_table_from_file(
vocabulary_file='vocab.tsv', num_oov_buckets=1, vocab_size=None, default_value=-1)
numbers = table.lookup(tf.constant(lines[0].split()))
with tf.Session() as sess:
tf.tables_initializer().run()
print "{} --> {}".format(lines[0], numbers.eval())
!cat vocab.tsv
# string operations
titles = tf.constant(lines)
words = tf.string_split(titles)
densewords = tf.sparse_tensor_to_dense(words, default_value=PADWORD)
numbers = table.lookup(densewords)
# now pad out with zeros and then slice to constant length
padding = tf.constant([[0,0],[0,MAX_DOCUMENT_LENGTH]])
padded = tf.pad(numbers, padding)
sliced = tf.slice(padded, [0,0], [-1, MAX_DOCUMENT_LENGTH])
with tf.Session() as sess:
tf.tables_initializer().run()
print "titles=", titles.eval(), titles.shape
print "words=", words.eval()
print "dense=", densewords.eval(), densewords.shape
print "numbers=", numbers.eval(), numbers.shape
print "padding=", padding.eval(), padding.shape
print "padded=", padded.eval(), padded.shape
print "sliced=", sliced.eval(), sliced.shape
%bash
grep "^def" txtcls1/trainer/model.py
%bash
echo "bucket=${BUCKET}"
rm -rf outputdir
export PYTHONPATH=${PYTHONPATH}:${PWD}/txtcls1
python -m trainer.task \
--bucket=${BUCKET} \
--output_dir=outputdir \
--job-dir=./tmp --train_steps=200
%bash
OUTDIR=gs://${BUCKET}/txtcls1/trained_model
JOBNAME=txtcls_$(date -u +%y%m%d_%H%M%S)
echo $OUTDIR $REGION $JOBNAME
gsutil -m rm -rf $OUTDIR
gsutil cp txtcls1/trainer/*.py $OUTDIR
gcloud ml-engine jobs submit training $JOBNAME \
--region=$REGION \
--module-name=trainer.task \
--package-path=$(pwd)/txtcls1/trainer \
--job-dir=$OUTDIR \
--staging-bucket=gs://$BUCKET \
--scale-tier=BASIC --runtime-version=1.2 \
-- \
--bucket=${BUCKET} \
--output_dir=${OUTDIR} \
--train_steps=36000
%bash
gsutil ls gs://${BUCKET}/txtcls1/trained_model/export/Servo/
%bash
MODEL_NAME="txtcls"
MODEL_VERSION="v1"
MODEL_LOCATION=$(gsutil ls gs://${BUCKET}/txtcls1/trained_model/export/Servo/ | tail -1)
echo "Deleting and deploying $MODEL_NAME $MODEL_VERSION from $MODEL_LOCATION ... this will take a few minutes"
#gcloud ml-engine versions delete ${MODEL_VERSION} --model ${MODEL_NAME}
#gcloud ml-engine models delete ${MODEL_NAME}
gcloud ml-engine models create ${MODEL_NAME} --regions $REGION
gcloud ml-engine versions create ${MODEL_VERSION} --model ${MODEL_NAME} --origin ${MODEL_LOCATION}
from googleapiclient import discovery
from oauth2client.client import GoogleCredentials
import json
credentials = GoogleCredentials.get_application_default()
api = discovery.build('ml', 'v1beta1', credentials=credentials,
discoveryServiceUrl='https://storage.googleapis.com/cloud-ml/discovery/ml_v1beta1_discovery.json')
request_data = {'instances':
[
{
'title': 'Supreme Court to Hear Major Case on Partisan Districts'
},
{
'title': 'Furan -- build and push Docker images from GitHub to target'
},
{
'title': 'Time Warner will spend $100M on Snapchat original shows and ads'
},
]
}
parent = 'projects/%s/models/%s/versions/%s' % (PROJECT, 'txtcls', 'v1')
response = api.projects().predict(body=request_data, name=parent).execute()
print "response={0}".format(response)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We need to download the data in order to plot. Let's use Quandl for this (although I'd very much recommend using my library findatapy, which provides an easy way to download data from many data sources including Bloomberg, Quandl, FRED, Yahoo, Google etc.). Let's also import all the classes we need from chartpy.
Step2: Let's download the data from Quandl for US Real GDP QoQ
Step3: We now create style objects, which give us the property setting for the charts. We can then create Chart objects with these styles. We just change the engine variable to switch from using bokeh to matplotlib etc. No need to use different calls for each plotting engine! We specify filenames for each plot (otherwise, the filename will be automatically created from the timestamp).
Step4: If we choose, we can plot individual charts like this (we need the inline statement for matplotlib to plot in the Jupyter notebook), using the following syntax.
Step5: Creating a Canvas for a webpage
Step6: We create another canvas object and then use the Keen IO based template, which looks a bit neater.
|
<ASSISTANT_TASK:>
Python Code:
import sys
try:
sys.path.append('E:/Remote/chartpy')
except:
pass
# support Quandl 3.x.x
try:
import quandl as Quandl
except:
# if import fails use Quandl 2.x.x
import Quandl
from chartpy import Chart, Style, Canvas
# get your own free Quandl API key from https://www.quandl.com/ (i've used another class for this)
try:
from chartpy.chartcred import ChartCred
cred = ChartCred()
quandl_api_key = cred.quandl_api_key
except:
quandl_api_key = "x"
df = Quandl.get(["FRED/A191RL1Q225SBEA"], authtoken=quandl_api_key)
df.columns = ["Real QoQ"]
import copy
style = Style(title="US GDP", source="Quandl/Fred", scale_factor=-1, width=400, height=300, silent_display=True, thin_margin=True)
style_bokeh = copy.copy(style); style_bokeh.html_file_output = 's_bokeh.html'
style_plotly = copy.copy(style); style_plotly.html_file_output = 's_plotly.html'
style_matplotlib = copy.copy(style); style_matplotlib.file_output = 's_matplotlib.png'
# Chart object is initialised with the dataframe and our chart style
chart_bokeh = Chart(df=df, chart_type='line', engine='bokeh', style=style_bokeh)
chart_plotly = Chart(df=df, chart_type='line', engine='plotly', style=style_plotly)
chart_matplotlib = Chart(df=df, chart_type='line', engine='matplotlib', style=style_matplotlib)
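# Note: the three Chart objects above share the same data; only the `engine` argument
# and the output filename in the copied Style differ between them.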
%matplotlib inline
chart_matplotlib.plot()
text = "A demo of chartpy canvas!!"
# using plain template
canvas = Canvas([[text, chart_bokeh], [chart_plotly, df.tail(n=5)]])
canvas.generate_canvas(jupyter_notebook=True, silent_display=True, canvas_plotter='plain', output_filename='s_canvas_plain.html')
# using the Keen template (needs static folder in the same place as final HTML file)
canvas = Canvas([[chart_bokeh, chart_plotly], [chart_plotly, chart_matplotlib]])
canvas.generate_canvas(jupyter_notebook=True, silent_display=True, canvas_plotter='keen', output_filename = 's_canvas_keen.html')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Imports
Step3: Model definition, compilation and rendering
Step5: static_model is written in MuJoCo's XML-based MJCF modeling language. The from_xml_string() method invokes the model compiler, which instantiates the library's internal data structures. These can be accessed via the physics object, see below.
Step6: The things that move (and which have inertia) are called bodies. The body's child joint specifies how that body can move with respect to its parent, in this case box_and_sphere w.r.t the worldbody.
Step7: Note how we collect the video frames. Because physics simulation timesteps are generally much smaller than framerates (the default timestep is 2ms), we don't render after each step.
Step8: MuJoCo basics and named indexing
Step9: The model.opt structure contains global quantities like
Step10: mjData
Step11: physics.data also contains functions of the state, for example the cartesian positions of objects in the world frame. The (x, y, z) positions of our two geoms are in data.geom_xpos
Step12: Named indexing
Step13: Note how model.geom_pos and data.geom_xpos have similar semantics but very different meanings.
Step14: Name strings can be used to index into the relevant quantities, making code much more readable and robust.
Step15: Joint names can be used to index into quantities in configuration space (beginning with the letter q)
Step16: We can mix NumPy slicing operations with named indexing. As an example, we can set the color of the box using its name ("red_box") as an index into the rows of the geom_rgba array.
Step17: Note that while physics.model quantities will not be changed by the engine, we can change them ourselves between steps. This however is generally not recommended, the preferred approach being to modify the model at the XML level using the PyMJCF library, see below.
Step19: Free bodies
Step20: Note several new features of this model definition
Step21: The velocities are easy to interpret, 6 zeros, one for each DOF. What about the length-7 positions? We can see the initial 2cm height of the body; the subsequent four numbers are the 3D orientation, defined by a unit quaternion. These normalized four-vectors, which preserve the topology of the orientation group, are the reason that data.qpos can be bigger than data.qvel
Step22: Measuring values from physics.data
Step24: PyMJCF tutorial
Step26: The Leg class describes an abstract articulated leg, with two joints and corresponding proportional-derivative actuators.
Step27: The make_creature function uses PyMJCF's attach() method to procedurally attach legs to the torso. Note that at this stage both the torso and hip attachment sites are children of the worldbody, since their parent body has yet to be instantiated. We'll now make an arena with a chequered floor and two lights, and place our creatures in a grid.
Step28: Multi-legged creatures, ready to roam! Let's inject some controls and watch them move. We'll generate a sinusoidal open-loop control signal of fixed frequency and random phase, recording both video frames and the horizontal positions of the torso geoms, in order to plot the movement trajectories.
Step30: The plot above shows the corresponding movement trajectories of creature positions. Note how physics.bind(torsos) was used to access both xpos and rgba values. Once the Physics had been instantiated by from_mjcf_model(), the bind() method will expose both the associated mjData and mjModel fields of an mjcf element, providing unified access to all quantities in the simulation.
Step33: The Creature Entity includes generic Observables for joint angles and velocities. Because find_all() is called on the Creature's MJCF model, it will only return the creature's leg joints, and not the "free" joint with which it will be attached to the world. Note that Composer Entities should override the _build and _build_observables methods rather than __init__. The implementation of __init__ in the base class calls _build and _build_observables, in that order, to ensure that the entity's MJCF model is created before its observables. This was a design choice which allows the user to refer to an observable as an attribute (entity.observables.foo) while still making it clear which attributes are observables. The stateful Button class derives from composer.Entity and implements the initialize_episode and after_substep callbacks.
Step35: Note how the Button counts the number of sub-steps during which it is pressed with the desired force. It also exposes an Observable of the force being applied to the button, whose value is an average of the readings over the physics time-steps.
Step36: The Control Suite
Step37: Locomotion
Step38: Next, we construct a corridor-shaped arena that is obstructed by walls.
Step39: The task constructor places the walker in the arena.
Step40: Finally, a task that rewards the agent for running down the corridor at a specific velocity is instantiated as a composer.Environment.
Step41: Multi-Agent Soccer
Step42: It can trivially be replaced by e.g. the WalkerType.ANT walker
Step43: Manipulation
|
<ASSISTANT_TASK:>
Python Code:
#@title Run to install MuJoCo and `dm_control`
import distutils.util
import subprocess
if subprocess.run('nvidia-smi').returncode:
raise RuntimeError(
'Cannot communicate with GPU. '
'Make sure you are using a GPU Colab runtime. '
'Go to the Runtime menu and select Choose runtime type.')
print('Installing dm_control...')
!pip install -q dm_control>=1.0.3.post1
# Configure dm_control to use the EGL rendering backend (requires GPU)
%env MUJOCO_GL=egl
print('Checking that the dm_control installation succeeded...')
try:
from dm_control import suite
env = suite.load('cartpole', 'swingup')
pixels = env.physics.render()
except Exception as e:
raise e from RuntimeError(
'Something went wrong during installation. Check the shell output above '
'for more information.\n'
'If using a hosted Colab runtime, make sure you enable GPU acceleration '
'by going to the Runtime menu and selecting "Choose runtime type".')
else:
del pixels, suite
!echo Installed dm_control $(pip show dm_control | grep -Po "(?<=Version: ).+")
#@title All `dm_control` imports required for this tutorial
# The basic mujoco wrapper.
from dm_control import mujoco
# Access to enums and MuJoCo library functions.
from dm_control.mujoco.wrapper.mjbindings import enums
from dm_control.mujoco.wrapper.mjbindings import mjlib
# PyMJCF
from dm_control import mjcf
# Composer high level imports
from dm_control import composer
from dm_control.composer.observation import observable
from dm_control.composer import variation
# Imports for Composer tutorial example
from dm_control.composer.variation import distributions
from dm_control.composer.variation import noises
from dm_control.locomotion.arenas import floors
# Control Suite
from dm_control import suite
# Run through corridor example
from dm_control.locomotion.walkers import cmu_humanoid
from dm_control.locomotion.arenas import corridors as corridor_arenas
from dm_control.locomotion.tasks import corridors as corridor_tasks
# Soccer
from dm_control.locomotion import soccer
# Manipulation
from dm_control import manipulation
#@title Other imports and helper functions
# General
import copy
import os
import itertools
from IPython.display import clear_output
import numpy as np
# Graphics-related
import matplotlib
import matplotlib.animation as animation
import matplotlib.pyplot as plt
from IPython.display import HTML
import PIL.Image
# Internal loading of video libraries.
# Use svg backend for figure rendering
%config InlineBackend.figure_format = 'svg'
# Font sizes
SMALL_SIZE = 8
MEDIUM_SIZE = 10
BIGGER_SIZE = 12
plt.rc('font', size=SMALL_SIZE) # controls default text sizes
plt.rc('axes', titlesize=SMALL_SIZE) # fontsize of the axes title
plt.rc('axes', labelsize=MEDIUM_SIZE) # fontsize of the x and y labels
plt.rc('xtick', labelsize=SMALL_SIZE) # fontsize of the tick labels
plt.rc('ytick', labelsize=SMALL_SIZE) # fontsize of the tick labels
plt.rc('legend', fontsize=SMALL_SIZE) # legend fontsize
plt.rc('figure', titlesize=BIGGER_SIZE) # fontsize of the figure title
# Inline video helper function
if os.environ.get('COLAB_NOTEBOOK_TEST', False):
# We skip video generation during tests, as it is quite expensive.
display_video = lambda *args, **kwargs: None
else:
def display_video(frames, framerate=30):
height, width, _ = frames[0].shape
dpi = 70
orig_backend = matplotlib.get_backend()
matplotlib.use('Agg') # Switch to headless 'Agg' to inhibit figure rendering.
fig, ax = plt.subplots(1, 1, figsize=(width / dpi, height / dpi), dpi=dpi)
matplotlib.use(orig_backend) # Switch back to the original backend.
ax.set_axis_off()
ax.set_aspect('equal')
ax.set_position([0, 0, 1, 1])
im = ax.imshow(frames[0])
def update(frame):
im.set_data(frame)
return [im]
interval = 1000/framerate
anim = animation.FuncAnimation(fig=fig, func=update, frames=frames,
interval=interval, blit=True, repeat=False)
return HTML(anim.to_html5_video())
# Seed numpy's global RNG so that cell outputs are deterministic. We also try to
# use RandomState instances that are local to a single cell wherever possible.
np.random.seed(42)
#@title A static model {vertical-output: true}
static_model = """
<mujoco>
  <worldbody>
    <light name="top" pos="0 0 1"/>
    <geom name="red_box" type="box" size=".2 .2 .2" rgba="1 0 0 1"/>
    <geom name="green_sphere" pos=".2 .2 .2" size=".1" rgba="0 1 0 1"/>
  </worldbody>
</mujoco>
"""
physics = mujoco.Physics.from_xml_string(static_model)
pixels = physics.render()
PIL.Image.fromarray(pixels)
#@title A child body with a joint { vertical-output: true }
swinging_body = """
<mujoco>
  <worldbody>
    <light name="top" pos="0 0 1"/>
    <body name="box_and_sphere" euler="0 0 -30">
      <joint name="swing" type="hinge" axis="1 -1 0" pos="-.2 -.2 -.2"/>
      <geom name="red_box" type="box" size=".2 .2 .2" rgba="1 0 0 1"/>
      <geom name="green_sphere" pos=".2 .2 .2" size=".1" rgba="0 1 0 1"/>
    </body>
  </worldbody>
</mujoco>
"""
physics = mujoco.Physics.from_xml_string(swinging_body)
# Visualize the joint axis.
scene_option = mujoco.wrapper.core.MjvOption()
scene_option.flags[enums.mjtVisFlag.mjVIS_JOINT] = True
pixels = physics.render(scene_option=scene_option)
PIL.Image.fromarray(pixels)
#@title Making a video {vertical-output: true}
duration = 2 # (seconds)
framerate = 30 # (Hz)
# Visualize the joint axis
scene_option = mujoco.wrapper.core.MjvOption()
scene_option.flags[enums.mjtVisFlag.mjVIS_JOINT] = True
# Simulate and display video.
frames = []
physics.reset() # Reset state and time
while physics.data.time < duration:
physics.step()
if len(frames) < physics.data.time * framerate:
pixels = physics.render(scene_option=scene_option)
frames.append(pixels)
display_video(frames, framerate)
#@title Enable transparency and frame visualization {vertical-output: true}
scene_option = mujoco.wrapper.core.MjvOption()
scene_option.frame = enums.mjtFrame.mjFRAME_GEOM
scene_option.flags[enums.mjtVisFlag.mjVIS_TRANSPARENT] = True
pixels = physics.render(scene_option=scene_option)
PIL.Image.fromarray(pixels)
#@title Depth rendering {vertical-output: true}
# depth is a float array, in meters.
depth = physics.render(depth=True)
# Shift nearest values to the origin.
depth -= depth.min()
# Scale by 2 mean distances of near rays.
depth /= 2*depth[depth <= 1].mean()
# Scale to [0, 255]
pixels = 255*np.clip(depth, 0, 1)
PIL.Image.fromarray(pixels.astype(np.uint8))
#@title Segmentation rendering {vertical-output: true}
seg = physics.render(segmentation=True)
# Display the contents of the first channel, which contains object
# IDs. The second channel, seg[:, :, 1], contains object types.
geom_ids = seg[:, :, 0]
# Infinity is mapped to -1
geom_ids = geom_ids.astype(np.float64) + 1
# Scale to [0, 1]
geom_ids = geom_ids / geom_ids.max()
pixels = 255*geom_ids
PIL.Image.fromarray(pixels.astype(np.uint8))
#@title Projecting from world to camera coordinates {vertical-output: true}
# Get the world coordinates of the box corners
box_pos = physics.named.data.geom_xpos['red_box']
box_mat = physics.named.data.geom_xmat['red_box'].reshape(3, 3)
box_size = physics.named.model.geom_size['red_box']
offsets = np.array([-1, 1]) * box_size[:, None]
xyz_local = np.stack(itertools.product(*offsets)).T
xyz_global = box_pos[:, None] + box_mat @ xyz_local
# Camera matrices multiply homogenous [x, y, z, 1] vectors.
corners_homogeneous = np.ones((4, xyz_global.shape[1]), dtype=float)
corners_homogeneous[:3, :] = xyz_global
# Get the camera matrix.
camera = mujoco.Camera(physics)
camera_matrix = camera.matrix
# Project world coordinates into pixel space. See:
# https://en.wikipedia.org/wiki/3D_projection#Mathematical_formula
xs, ys, s = camera_matrix @ corners_homogeneous
# x and y are in the pixel coordinate system.
x = xs / s
y = ys / s
# Render the camera view and overlay the projected corner coordinates.
pixels = camera.render()
fig, ax = plt.subplots(1, 1)
ax.imshow(pixels)
ax.plot(x, y, '+', c='w')
ax.set_axis_off()
physics.model.geom_pos
print('timestep', physics.model.opt.timestep)
print('gravity', physics.model.opt.gravity)
print(physics.data.time, physics.data.qpos, physics.data.qvel)
print(physics.data.geom_xpos)
print(physics.named.data.geom_xpos)
print(physics.named.model.geom_pos)
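# model.geom_pos holds positions relative to each geom's parent body (as set in the
# XML), while data.geom_xpos holds the resulting Cartesian positions in the world frame.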
physics.named.data.geom_xpos['green_sphere', 'z']
physics.named.data.qpos['swing']
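# Velocities in configuration space can be addressed the same way (illustrative):
physics.named.data.qvel['swing']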
#@title Changing colors using named indexing{vertical-output: true}
random_rgb = np.random.rand(3)
physics.named.model.geom_rgba['red_box', :3] = random_rgb
pixels = physics.render()
PIL.Image.fromarray(pixels)
physics.named.data.qpos['swing'] = np.pi
print('Without reset_context, spatial positions are not updated:',
physics.named.data.geom_xpos['green_sphere', ['z']])
with physics.reset_context():
physics.named.data.qpos['swing'] = np.pi
print('After reset_context, positions are up-to-date:',
physics.named.data.geom_xpos['green_sphere', ['z']])
#@title The "tippe-top" model{vertical-output: true}
tippe_top = """
<mujoco model="tippe top">
  <option integrator="RK4"/>
  <asset>
    <texture name="grid" type="2d" builtin="checker" rgb1=".1 .2 .3"
     rgb2=".2 .3 .4" width="300" height="300"/>
    <material name="grid" texture="grid" texrepeat="8 8" reflectance=".2"/>
  </asset>
  <worldbody>
    <geom size=".2 .2 .01" type="plane" material="grid"/>
    <light pos="0 0 .6"/>
    <camera name="closeup" pos="0 -.1 .07" xyaxes="1 0 0 0 1 2"/>
    <body name="top" pos="0 0 .02">
      <freejoint/>
      <geom name="ball" type="sphere" size=".02" />
      <geom name="stem" type="cylinder" pos="0 0 .02" size="0.004 .008"/>
      <geom name="ballast" type="box" size=".023 .023 0.005" pos="0 0 -.015"
       contype="0" conaffinity="0" group="3"/>
    </body>
  </worldbody>
  <keyframe>
    <key name="spinning" qpos="0 0 0.02 1 0 0 0" qvel="0 0 0 0 1 200" />
  </keyframe>
</mujoco>
"""
physics = mujoco.Physics.from_xml_string(tippe_top)
PIL.Image.fromarray(physics.render(camera_id='closeup'))
print('positions', physics.data.qpos)
print('velocities', physics.data.qvel)
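# The free joint contributes 7 position coordinates (3 translation + 4 quaternion)
# but only 6 velocity coordinates (3 linear + 3 angular), hence the size difference.
print('qpos size:', physics.data.qpos.size, ' qvel size:', physics.data.qvel.size)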
#@title Video of the tippe-top {vertical-output: true}
duration = 7 # (seconds)
framerate = 60 # (Hz)
# Simulate and display video.
frames = []
physics.reset(0) # Reset to keyframe 0 (load a saved state).
while physics.data.time < duration:
physics.step()
if len(frames) < (physics.data.time) * framerate:
pixels = physics.render(camera_id='closeup')
frames.append(pixels)
display_video(frames, framerate)
#@title Measuring values {vertical-output: true}
timevals = []
angular_velocity = []
stem_height = []
# Simulate and save data
physics.reset(0)
while physics.data.time < duration:
physics.step()
timevals.append(physics.data.time)
angular_velocity.append(physics.data.qvel[3:6].copy())
stem_height.append(physics.named.data.geom_xpos['stem', 'z'])
dpi = 100
width = 480
height = 640
figsize = (width / dpi, height / dpi)
_, ax = plt.subplots(2, 1, figsize=figsize, dpi=dpi, sharex=True)
ax[0].plot(timevals, angular_velocity)
ax[0].set_title('angular velocity')
ax[0].set_ylabel('radians / second')
ax[1].plot(timevals, stem_height)
ax[1].set_xlabel('time (seconds)')
ax[1].set_ylabel('meters')
_ = ax[1].set_title('stem height')
class Leg(object):
  """A 2-DoF leg with position actuators."""
def __init__(self, length, rgba):
self.model = mjcf.RootElement()
# Defaults:
self.model.default.joint.damping = 2
self.model.default.joint.type = 'hinge'
self.model.default.geom.type = 'capsule'
self.model.default.geom.rgba = rgba # Continued below...
# Thigh:
self.thigh = self.model.worldbody.add('body')
self.hip = self.thigh.add('joint', axis=[0, 0, 1])
self.thigh.add('geom', fromto=[0, 0, 0, length, 0, 0], size=[length/4])
# Hip:
self.shin = self.thigh.add('body', pos=[length, 0, 0])
self.knee = self.shin.add('joint', axis=[0, 1, 0])
self.shin.add('geom', fromto=[0, 0, 0, 0, 0, -length], size=[length/5])
# Position actuators:
self.model.actuator.add('position', joint=self.hip, kp=10)
self.model.actuator.add('position', joint=self.knee, kp=10)
BODY_RADIUS = 0.1
BODY_SIZE = (BODY_RADIUS, BODY_RADIUS, BODY_RADIUS / 2)
random_state = np.random.RandomState(42)
def make_creature(num_legs):
  """Constructs a creature with `num_legs` legs."""
rgba = random_state.uniform([0, 0, 0, 1], [1, 1, 1, 1])
model = mjcf.RootElement()
model.compiler.angle = 'radian' # Use radians.
# Make the torso geom.
model.worldbody.add(
'geom', name='torso', type='ellipsoid', size=BODY_SIZE, rgba=rgba)
# Attach legs to equidistant sites on the circumference.
for i in range(num_legs):
theta = 2 * i * np.pi / num_legs
hip_pos = BODY_RADIUS * np.array([np.cos(theta), np.sin(theta), 0])
hip_site = model.worldbody.add('site', pos=hip_pos, euler=[0, 0, theta])
leg = Leg(length=BODY_RADIUS, rgba=rgba)
hip_site.attach(leg.model)
return model
#@title Six Creatures on a floor.{vertical-output: true}
arena = mjcf.RootElement()
chequered = arena.asset.add('texture', type='2d', builtin='checker', width=300,
height=300, rgb1=[.2, .3, .4], rgb2=[.3, .4, .5])
grid = arena.asset.add('material', name='grid', texture=chequered,
texrepeat=[5, 5], reflectance=.2)
arena.worldbody.add('geom', type='plane', size=[2, 2, .1], material=grid)
for x in [-2, 2]:
arena.worldbody.add('light', pos=[x, -1, 3], dir=[-x, 1, -2])
# Instantiate 6 creatures with 3 to 8 legs.
creatures = [make_creature(num_legs=num_legs) for num_legs in range(3, 9)]
# Place them on a grid in the arena.
height = .15
grid = 5 * BODY_RADIUS
xpos, ypos, zpos = np.meshgrid([-grid, 0, grid], [0, grid], [height])
for i, model in enumerate(creatures):
# Place spawn sites on a grid.
spawn_pos = (xpos.flat[i], ypos.flat[i], zpos.flat[i])
spawn_site = arena.worldbody.add('site', pos=spawn_pos, group=3)
# Attach to the arena at the spawn sites, with a free joint.
spawn_site.attach(model).add('freejoint')
# Instantiate the physics and render.
physics = mjcf.Physics.from_mjcf_model(arena)
PIL.Image.fromarray(physics.render())
#@title Video of the movement{vertical-output: true}
#@test {"timeout": 600}
duration = 10 # (Seconds)
framerate = 30 # (Hz)
video = []
pos_x = []
pos_y = []
torsos = [] # List of torso geom elements.
actuators = [] # List of actuator elements.
for creature in creatures:
torsos.append(creature.find('geom', 'torso'))
actuators.extend(creature.find_all('actuator'))
# Control signal frequency, phase, amplitude.
freq = 5
phase = 2 * np.pi * random_state.rand(len(actuators))
amp = 0.9
# Simulate, saving video frames and torso locations.
physics.reset()
while physics.data.time < duration:
# Inject controls and step the physics.
physics.bind(actuators).ctrl = amp * np.sin(freq * physics.data.time + phase)
physics.step()
# Save torso horizontal positions using bind().
pos_x.append(physics.bind(torsos).xpos[:, 0].copy())
pos_y.append(physics.bind(torsos).xpos[:, 1].copy())
# Save video frames.
if len(video) < physics.data.time * framerate:
pixels = physics.render()
video.append(pixels.copy())
display_video(video, framerate)
#@title Movement trajectories{vertical-output: true}
creature_colors = physics.bind(torsos).rgba[:, :3]
fig, ax = plt.subplots(figsize=(4, 4))
ax.set_prop_cycle(color=creature_colors)
_ = ax.plot(pos_x, pos_y, linewidth=4)
#@title The `Creature` class
class Creature(composer.Entity):
  """A multi-legged creature derived from `composer.Entity`."""
def _build(self, num_legs):
self._model = make_creature(num_legs)
def _build_observables(self):
return CreatureObservables(self)
@property
def mjcf_model(self):
return self._model
@property
def actuators(self):
return tuple(self._model.find_all('actuator'))
# Add simple observable features for joint angles and velocities.
class CreatureObservables(composer.Observables):
@composer.observable
def joint_positions(self):
all_joints = self._entity.mjcf_model.find_all('joint')
return observable.MJCFFeature('qpos', all_joints)
@composer.observable
def joint_velocities(self):
all_joints = self._entity.mjcf_model.find_all('joint')
return observable.MJCFFeature('qvel', all_joints)
#@title The `Button` class
NUM_SUBSTEPS = 25 # The number of physics substeps per control timestep.
class Button(composer.Entity):
  """A button Entity which changes colour when pressed with certain force."""
def _build(self, target_force_range=(5, 10)):
self._min_force, self._max_force = target_force_range
self._mjcf_model = mjcf.RootElement()
self._geom = self._mjcf_model.worldbody.add(
'geom', type='cylinder', size=[0.25, 0.02], rgba=[1, 0, 0, 1])
self._site = self._mjcf_model.worldbody.add(
'site', type='cylinder', size=self._geom.size*1.01, rgba=[1, 0, 0, 0])
self._sensor = self._mjcf_model.sensor.add('touch', site=self._site)
self._num_activated_steps = 0
def _build_observables(self):
return ButtonObservables(self)
@property
def mjcf_model(self):
return self._mjcf_model
# Update the activation (and colour) if the desired force is applied.
def _update_activation(self, physics):
current_force = physics.bind(self.touch_sensor).sensordata[0]
self._is_activated = (current_force >= self._min_force and
current_force <= self._max_force)
physics.bind(self._geom).rgba = (
[0, 1, 0, 1] if self._is_activated else [1, 0, 0, 1])
self._num_activated_steps += int(self._is_activated)
def initialize_episode(self, physics, random_state):
self._reward = 0.0
self._num_activated_steps = 0
self._update_activation(physics)
def after_substep(self, physics, random_state):
self._update_activation(physics)
@property
def touch_sensor(self):
return self._sensor
@property
def num_activated_steps(self):
return self._num_activated_steps
class ButtonObservables(composer.Observables):
  """A touch sensor which averages contact force over physics substeps."""
@composer.observable
def touch_force(self):
return observable.MJCFFeature('sensordata', self._entity.touch_sensor,
buffer_size=NUM_SUBSTEPS, aggregator='mean')
#@title Random initialiser using `composer.variation`
class UniformCircle(variation.Variation):
  """A uniformly sampled horizontal point on a circle of radius `distance`."""
def __init__(self, distance):
self._distance = distance
self._heading = distributions.Uniform(0, 2*np.pi)
def __call__(self, initial_value=None, current_value=None, random_state=None):
distance, heading = variation.evaluate(
(self._distance, self._heading), random_state=random_state)
return (distance*np.cos(heading), distance*np.sin(heading), 0)
#@title The `PressWithSpecificForce` task
class PressWithSpecificForce(composer.Task):
def __init__(self, creature):
self._creature = creature
self._arena = floors.Floor()
self._arena.add_free_entity(self._creature)
self._arena.mjcf_model.worldbody.add('light', pos=(0, 0, 4))
self._button = Button()
self._arena.attach(self._button)
# Configure initial poses
self._creature_initial_pose = (0, 0, 0.15)
button_distance = distributions.Uniform(0.5, .75)
self._button_initial_pose = UniformCircle(button_distance)
# Configure variators
self._mjcf_variator = variation.MJCFVariator()
self._physics_variator = variation.PhysicsVariator()
# Configure and enable observables
pos_corrptor = noises.Additive(distributions.Normal(scale=0.01))
self._creature.observables.joint_positions.corruptor = pos_corrptor
self._creature.observables.joint_positions.enabled = True
vel_corruptor = noises.Multiplicative(distributions.LogNormal(sigma=0.01))
self._creature.observables.joint_velocities.corruptor = vel_corruptor
self._creature.observables.joint_velocities.enabled = True
self._button.observables.touch_force.enabled = True
def to_button(physics):
button_pos, _ = self._button.get_pose(physics)
return self._creature.global_vector_to_local_frame(physics, button_pos)
self._task_observables = {}
self._task_observables['button_position'] = observable.Generic(to_button)
for obs in self._task_observables.values():
obs.enabled = True
self.control_timestep = NUM_SUBSTEPS * self.physics_timestep
@property
def root_entity(self):
return self._arena
@property
def task_observables(self):
return self._task_observables
def initialize_episode_mjcf(self, random_state):
self._mjcf_variator.apply_variations(random_state)
def initialize_episode(self, physics, random_state):
self._physics_variator.apply_variations(physics, random_state)
creature_pose, button_pose = variation.evaluate(
(self._creature_initial_pose, self._button_initial_pose),
random_state=random_state)
self._creature.set_pose(physics, position=creature_pose)
self._button.set_pose(physics, position=button_pose)
def get_reward(self, physics):
return self._button.num_activated_steps / NUM_SUBSTEPS
#@title Instantiating an environment{vertical-output: true}
creature = Creature(num_legs=4)
task = PressWithSpecificForce(creature)
env = composer.Environment(task, random_state=np.random.RandomState(42))
env.reset()
PIL.Image.fromarray(env.physics.render())
#@title Iterating over tasks{vertical-output: true}
max_len = max(len(d) for d, _ in suite.BENCHMARKING)
for domain, task in suite.BENCHMARKING:
  print(f'{domain:<{max_len}}  {task}')
#@title Loading and simulating a `suite` task{vertical-output: true}
# Load the environment
random_state = np.random.RandomState(42)
env = suite.load('hopper', 'stand', task_kwargs={'random': random_state})
# Simulate episode with random actions
duration = 4 # Seconds
frames = []
ticks = []
rewards = []
observations = []
spec = env.action_spec()
time_step = env.reset()
while env.physics.data.time < duration:
action = random_state.uniform(spec.minimum, spec.maximum, spec.shape)
time_step = env.step(action)
camera0 = env.physics.render(camera_id=0, height=200, width=200)
camera1 = env.physics.render(camera_id=1, height=200, width=200)
frames.append(np.hstack((camera0, camera1)))
rewards.append(time_step.reward)
observations.append(copy.deepcopy(time_step.observation))
ticks.append(env.physics.data.time)
html_video = display_video(frames, framerate=1./env.control_timestep())
# Show video and plot reward and observations
num_sensors = len(time_step.observation)
_, ax = plt.subplots(1 + num_sensors, 1, sharex=True, figsize=(4, 8))
ax[0].plot(ticks, rewards)
ax[0].set_ylabel('reward')
ax[-1].set_xlabel('time')
for i, key in enumerate(time_step.observation):
data = np.asarray([observations[j][key] for j in range(len(observations))])
ax[i+1].plot(ticks, data, label=key)
ax[i+1].set_ylabel(key)
html_video
#@title Visualizing an initial state of one task per domain in the Control Suite
domains_tasks = {domain: task for domain, task in suite.ALL_TASKS}
random_state = np.random.RandomState(42)
num_domains = len(domains_tasks)
n_col = num_domains // int(np.sqrt(num_domains))
n_row = num_domains // n_col + int(0 < num_domains % n_col)
_, ax = plt.subplots(n_row, n_col, figsize=(12, 12))
for a in ax.flat:
a.axis('off')
a.grid(False)
print(f'Iterating over all {num_domains} domains in the Suite:')
for j, [domain, task] in enumerate(domains_tasks.items()):
print(domain, task)
env = suite.load(domain, task, task_kwargs={'random': random_state})
timestep = env.reset()
pixels = env.physics.render(height=200, width=200, camera_id=0)
ax.flat[j].imshow(pixels)
ax.flat[j].set_title(domain + ': ' + task)
clear_output()
#@title A position controlled `cmu_humanoid`
walker = cmu_humanoid.CMUHumanoidPositionControlledV2020(
observable_options={'egocentric_camera': dict(enabled=True)})
#@title A corridor arena with wall obstacles
arena = corridor_arenas.WallsCorridor(
wall_gap=3.,
wall_width=distributions.Uniform(2., 3.),
wall_height=distributions.Uniform(2.5, 3.5),
corridor_width=4.,
corridor_length=30.,
)
#@title A task to navigate the arena
task = corridor_tasks.RunThroughCorridor(
walker=walker,
arena=arena,
walker_spawn_position=(0.5, 0, 0),
target_velocity=3.0,
physics_timestep=0.005,
control_timestep=0.03,
)
#@title The `RunThroughCorridor` environment
env = composer.Environment(
task=task,
time_limit=10,
random_state=np.random.RandomState(42),
strip_singleton_obs_buffer_dim=True,
)
env.reset()
pixels = []
for camera_id in range(3):
pixels.append(env.physics.render(camera_id=camera_id, width=240))
PIL.Image.fromarray(np.hstack(pixels))
#@title 2-v-2 `Boxhead` soccer
random_state = np.random.RandomState(42)
env = soccer.load(
team_size=2,
time_limit=45.,
random_state=random_state,
disable_walker_contacts=False,
walker_type=soccer.WalkerType.BOXHEAD,
)
env.reset()
pixels = []
# Select a random subset of 6 cameras (soccer envs have lots of cameras)
cameras = random_state.choice(env.physics.model.ncam, 6, replace=False)
for camera_id in cameras:
pixels.append(env.physics.render(camera_id=camera_id, width=240))
image = np.vstack((np.hstack(pixels[:3]), np.hstack(pixels[3:])))
PIL.Image.fromarray(image)
#@title 3-v-3 `Ant` soccer
random_state = np.random.RandomState(42)
env = soccer.load(
team_size=3,
time_limit=45.,
random_state=random_state,
disable_walker_contacts=False,
walker_type=soccer.WalkerType.ANT,
)
env.reset()
pixels = []
cameras = random_state.choice(env.physics.model.ncam, 6, replace=False)
for camera_id in cameras:
pixels.append(env.physics.render(camera_id=camera_id, width=240))
image = np.vstack((np.hstack(pixels[:3]), np.hstack(pixels[3:])))
PIL.Image.fromarray(image)
#@title Listing all `manipulation` tasks{vertical-output: true}
# `ALL` is a tuple containing the names of all of the environments in the suite.
print('\n'.join(manipulation.ALL))
#@title Listing `manipulation` tasks that use vision{vertical-output: true}
print('\n'.join(manipulation.get_environments_by_tag('vision')))
#@title Loading and simulating a `manipulation` task{vertical-output: true}
env = manipulation.load('stack_2_of_3_bricks_random_order_vision', seed=42)
action_spec = env.action_spec()
def sample_random_action():
return env.random_state.uniform(
low=action_spec.minimum,
high=action_spec.maximum,
).astype(action_spec.dtype, copy=False)
# Step the environment through a full episode using random actions and record
# the camera observations.
frames = []
timestep = env.reset()
frames.append(timestep.observation['front_close'])
while not timestep.last():
timestep = env.step(sample_random_action())
frames.append(timestep.observation['front_close'])
all_frames = np.concatenate(frames, axis=0)
display_video(all_frames, 30)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data Generation
Step2: The array X contains the microstructure information and has the dimensions
Step3: Let's take a look at the 6 types of microstructures to get an idea of what they
Step4: In this dataset 4 of the 6 microstructure types have grains that are elongated in either
Step5: Now that we have a dataset to work with, we can look at how to use the MKSHomogenizationModel to predict stress values for new microstructures.
Step6: Let's take a look at the default values for the number of components and the order of the polynomial.
Step7: These default parameters may not be the best model for a given problem; we will now show one method that can be used to optimize them.
Step8: Now look at how the cumulative variance changes as a function of the number of components using draw_component_variance
Step9: Roughly 93 percent of the variance is captured with the first 5 components. This means our model may only need a few components to predict the average stress.
Step10: We will use cross validation with the testing data to fit a number
Step11: The default score method for the MKSHomogenizationModel is the R-squared value. Let's look at how the mean R-squared values and their
Step12: It looks like we get a poor fit when only the first and second components are used, and when we increase
Step13: For the parameter range that we searched, we have found that a model with 2nd order polynomial
Step14: As we said, a model with a 2nd order polynomial and 11 components will give us the best result. Let's use the
Step15: Prediction using MKSHomogenizationModel
Step16: Let's generate some more data that can be used to try and validate our model's prediction accuracy. We are going to
Step17: Now let's predict the stress values for the new microstructures.
Step18: We can look to see, if the low-dimensional representation of the
Step19: The predicted data seems to be reasonably similar to the data we used to fit the model
Step20: Looks pretty good. Let's print out one actual and predicted stress value for each of the 6 microstructure types to see how they compare.
Step21: Lastly, we can also evaluate our prediction by looking at a goodness-of-fit plot. We
|
<ASSISTANT_TASK:>
Python Code:
import pymks
%matplotlib inline
%load_ext autoreload
%autoreload 2
import numpy as np
import matplotlib.pyplot as plt
from pymks.datasets import make_elastic_stress_random
sample_size = 200
grain_size = [(15, 2), (2, 15), (7, 7), (8, 3), (3, 9), (2, 2)]
n_samples = [sample_size] * 6
elastic_modulus = (310, 200)
poissons_ratio = (0.28, 0.3)
macro_strain = 0.001
size = (21, 21)
X, y = make_elastic_stress_random(n_samples=n_samples, size=size, grain_size=grain_size,
elastic_modulus=elastic_modulus, poissons_ratio=poissons_ratio,
macro_strain=macro_strain, seed=0)
print(X.shape)
print(y.shape)
from pymks.tools import draw_microstructures
X_examples = X[::sample_size]
draw_microstructures(X_examples[:3])
print('Stress Values', y[::200])
from pymks import MKSHomogenizationModel
from pymks import PrimitiveBasis
prim_basis = PrimitiveBasis(n_states=2, domain=[0, 1])
model = MKSHomogenizationModel(basis=prim_basis, periodic_axes=[0, 1],
correlations=[(0, 0), (1, 1)])
print('Default Number of Components', model.n_components)
print('Default Polynomial Order', model.degree)
model.n_components = 40
model.fit(X, y)
from pymks.tools import draw_component_variance
draw_component_variance(model.dimension_reducer.explained_variance_ratio_)
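# Illustrative check (an addition, assuming the dimension reducer exposes
# explained_variance_ratio_, which is used just above): the cumulative sum
# backs up the "roughly 93 percent with 5 components" observation.
print(np.cumsum(model.dimension_reducer.explained_variance_ratio_)[:5])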
from sklearn.cross_validation import train_test_split
flat_shape = (X.shape[0],) + (X[0].size,)
X_train, X_test, y_train, y_test = train_test_split(X.reshape(flat_shape), y,
test_size=0.2, random_state=3)
print(X_train.shape)
print(X_test.shape)
from sklearn.grid_search import GridSearchCV
params_to_tune = {'degree': np.arange(1, 4), 'n_components': np.arange(2, 12)}
fit_params = {'size': X[0].shape}
gs = GridSearchCV(model, params_to_tune, fit_params=fit_params).fit(X_train, y_train)
from pymks.tools import draw_gridscores_matrix
draw_gridscores_matrix(gs, ['n_components', 'degree'], score_label='R-Squared',
param_labels=['Number of Components', 'Order of Polynomial'])
print('Order of Polynomial', gs.best_estimator_.degree)
print('Number of Components', gs.best_estimator_.n_components)
print('R-squared Value', gs.score(X_test, y_test))
from pymks.tools import draw_gridscores
gs_deg_1 = [x for x in gs.grid_scores_ \
if x.parameters['degree'] == 1][1:]
gs_deg_2 = [x for x in gs.grid_scores_ \
if x.parameters['degree'] == 2][1:]
gs_deg_3 = [x for x in gs.grid_scores_ \
if x.parameters['degree'] == 3][1:]
draw_gridscores([gs_deg_1, gs_deg_2, gs_deg_3], 'n_components',
data_labels=['1st Order', '2nd Order', '3rd Order'],
param_label='Number of Components', score_label='R-Squared')
model = gs.best_estimator_
model.fit(X, y)
test_sample_size = 20
n_samples = [test_sample_size] * 6
X_new, y_new = make_elastic_stress_random(n_samples=n_samples, size=size, grain_size=grain_size,
elastic_modulus=elastic_modulus, poissons_ratio=poissons_ratio,
macro_strain=macro_strain, seed=1)
y_predict = model.predict(X_new)
from pymks.tools import draw_components_scatter
draw_components_scatter([model.reduced_fit_data[:, :2],
model.reduced_predict_data[:, :2]],
['Training Data', 'Test Data'],
legend_outside=True)
from sklearn.metrics import r2_score
print('R-squared', model.score(X_new, y_new))
print('Actual Stress ', y_new[::20])
print('Predicted Stress', y_predict[::20])
from pymks.tools import draw_goodness_of_fit
fit_data = np.array([y, model.predict(X)])
pred_data = np.array([y_new, y_predict])
draw_goodness_of_fit(fit_data, pred_data, ['Training Data', 'Test Data'])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Jupyter
Step2: If the previous cell runs without producing any error messages, you are all set.
Step3: A variable is a name that corresponds to a value. In this example, the name is a and the value is the number 9.8.
Step4: Now we can compute the velocity of the penny after t seconds.
Step5: Python uses the symbol * for multiplication. The other arithmetic operators are + for addition, - for subtraction, / for division, and ** for exponentiation.
Step6: After $3.4$ s, the velocity of the penny is about $33$ m/s (ignoring air resistance). Now let's see how far it would travel during that time
Step7: It would travel about $56$ m. Now, going in the other direction, let's compute the time it takes to fall 381 m, the height of the Empire State Building.
Step8: For this computation, we need the square root function, which is provided by a library called NumPy.
Step9: Now we can use it like this
Step10: With no air resistance, it would take about $8.8$ s for the penny to reach the sidewalk.
Step11: And its velocity on impact would be about $86$ m/s.
Step12: To find out what other units are defined, type units. (including the period) in the next cell and then press TAB. You should see a pop-up menu with a list of units.
Step13: The result is a quantity with two parts, called magnitude and units, which we can access like this
Step14: Now we can create a quantity that represents $3.4$ s.
Step15: And use it to compute the distance a penny would fall after t seconds with constant acceleration a.
Step16: Notice that the units of the result are correct.
Step17: We can use it to compute the time to reach the sidewalk.
Step18: And the velocity of the penny on impact
Step19: As in the previous section, the result is about $86$, but now it has the correct units, m/s.
Step20: If you are more familiar with miles per hour, this result might be easier to interpret.
Step21: Exercise
Step22: NumPy provides other functions we'll use, including log, exp, sin, and cos.
Step23: Exercise
Step24: Exercise
Step25: In this example, you should get a DimensionalityError, which is defined by Pint to indicate that you have violated a rule of dimensional analysis
Step26: Exercise
Step27: Exercise
Step28: Exercise
|
<ASSISTANT_TASK:>
Python Code:
# install Pint if necessary
try:
import pint
except ImportError:
!pip install pint
# download modsim.py if necessary
from os.path import exists
filename = 'modsim.py'
if not exists(filename):
from urllib.request import urlretrieve
url = 'https://raw.githubusercontent.com/AllenDowney/ModSim/main/'
local, _ = urlretrieve(url+filename, filename)
print('Downloaded ' + local)
# import functions from modsim
from modsim import *
# check if the libraries we need are installed
try:
import pint
except ImportError:
!pip install pint
# download modsim.py if necessary
from os.path import exists
filename = 'modsim.py'
if not exists(filename):
from urllib.request import urlretrieve
url = 'https://raw.githubusercontent.com/AllenDowney/ModSim/main/'
local, _ = urlretrieve(url+filename, filename)
print('Downloaded ' + local)
a = 9.8
t = 3.4
v = a * t
v
x = a * t**2 / 2
x
h = 381
from numpy import sqrt
t = sqrt(2 * h / a)
t
v = a * t
v
meter = units.meter
meter
second = units.second
second
a = 9.8 * meter / second**2
a
a.magnitude
a.units
t = 3.4 * second
t
a * t**2 / 2
h = 381 * meter
t = sqrt(2 * h / a)
t
v = a * t
v
mile = units.mile
hour = units.hour
v.to(mile/hour)
a = 9.8 * meter / second**2
t = 3.4 * second
v = a * t
v
from numpy import pi
pi
# Solution
from numpy import sin, cos
sin(pi/4)**2 + cos(pi/4)**2
h = 381 * meter
# Solution
foot = units.foot
pole_height = 10 * foot
h + pole_height
# Solution
pole_height + h
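# Hedged illustration (an addition, mirroring the DimensionalityError note in
# the text): pint refuses to combine quantities with incompatible dimensions.
try:
    h + t  # meters plus seconds has no meaning
except Exception as err:  # pint raises a DimensionalityError here
    print(err)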
a = 9.8 * meter / second**2
t = 3.4 * second
# Solution
h = 381 * meter
v = 29 * meter / second
t = h / v
t
a = 9.8 * meter / second**2
h = 381 * meter
# Solution
v_terminal = 29 * meter / second
# Solution
t1 = v_terminal / a
print('Time to reach terminal velocity', t1)
# Solution
h1 = a * t1**2 / 2
print('Height fallen in t1', h1)
# Solution
t2 = (h - h1) / v_terminal
print('Time to fall remaining distance', t2)
# Solution
t_total = t1 + t2
print('Total falling time', t_total)
# Solution
# I suggest the following model:
# 1. Let's ignore the motion of the ball toward home plate and
# think about how far the ball would drop while it's in flight.
# 2. Let's ignore air resistance. Since we are only thinking
# about the relatively slow motion in the vertical direction,
# this is probably a good assumption.
# 3. Let's also ignore the effect of spin. This is probably a
# less good assumption.
# The distance from the pitcher's mound to home plate is about 60
# feet, but the point where the ball is released is a bit closer.
# An average pitcher in high school might be able to throw a ball
# at 80 mph.
# Solution
v = (80 * mile / hour).to(meter/second)
x = (60 * foot).to(meter)
# Solution
t = x / v
t
# Solution
a = 9.8 * meter / second**2
h = a * t**2 / 2
h
# Solution
# In the time it takes the ball to reach home plate, it drops
# about 1.3 m. If the release point is at 2 m, which is plausible,
# it would cross the plate at 0.7 m, which is in the strike zone.
# So I could be wrong -- it is plausible that the ball leaves
# the pitcher's hand at a downward angle, at least for some pitches.
mile = units.mile
kilometer = units.kilometer
minute = units.minute
# Solution
t = 52 * second + 44 * minute
t
# Solution
v = 10 * kilometer / t
v
# Solution
pace = (1 / v).to(minute / mile)
pace
# Solution
# To convert to minutes and seconds, we can use np.round.
# But we haven't covered that yet.
from numpy import round
round(pace)
# Solution
remainder = pace - round(pace)
remainder.to(second/mile)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: TFP Probabilistic Layers
Step2: Make things Fast!
Step3: Note
Step4: Note that preprocess() above returns image, image rather than just image because Keras is set up for discriminative models with an (example, label) input format, i.e. $p\theta(y|x)$. Since the goal of the VAE is to recover the input x from x itself (i.e. $p_\theta(x|x)$), the data pair is (example, example).
Step5: Do inference.
Step6: Look Ma, No ~~Hands~~Tensors!
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title Import { display-mode: "form" }
import numpy as np
import tensorflow.compat.v2 as tf
tf.enable_v2_behavior()
import tensorflow_datasets as tfds
import tensorflow_probability as tfp
tfk = tf.keras
tfkl = tf.keras.layers
tfpl = tfp.layers
tfd = tfp.distributions
if tf.test.gpu_device_name() != '/device:GPU:0':
print('WARNING: GPU device not found.')
else:
print('SUCCESS: Found GPU: {}'.format(tf.test.gpu_device_name()))
datasets, datasets_info = tfds.load(name='mnist',
with_info=True,
as_supervised=False)
def _preprocess(sample):
image = tf.cast(sample['image'], tf.float32) / 255. # Scale to unit interval.
image = image < tf.random.uniform(tf.shape(image)) # Randomly binarize.
return image, image
train_dataset = (datasets['train']
.map(_preprocess)
.batch(256)
.prefetch(tf.data.AUTOTUNE)
.shuffle(int(10e3)))
eval_dataset = (datasets['test']
.map(_preprocess)
.batch(256)
.prefetch(tf.data.AUTOTUNE))
input_shape = datasets_info.features['image'].shape
encoded_size = 16
base_depth = 32
prior = tfd.Independent(tfd.Normal(loc=tf.zeros(encoded_size), scale=1),
reinterpreted_batch_ndims=1)
encoder = tfk.Sequential([
tfkl.InputLayer(input_shape=input_shape),
tfkl.Lambda(lambda x: tf.cast(x, tf.float32) - 0.5),
tfkl.Conv2D(base_depth, 5, strides=1,
padding='same', activation=tf.nn.leaky_relu),
tfkl.Conv2D(base_depth, 5, strides=2,
padding='same', activation=tf.nn.leaky_relu),
tfkl.Conv2D(2 * base_depth, 5, strides=1,
padding='same', activation=tf.nn.leaky_relu),
tfkl.Conv2D(2 * base_depth, 5, strides=2,
padding='same', activation=tf.nn.leaky_relu),
tfkl.Conv2D(4 * encoded_size, 7, strides=1,
padding='valid', activation=tf.nn.leaky_relu),
tfkl.Flatten(),
tfkl.Dense(tfpl.MultivariateNormalTriL.params_size(encoded_size),
activation=None),
tfpl.MultivariateNormalTriL(
encoded_size,
activity_regularizer=tfpl.KLDivergenceRegularizer(prior)),
])
decoder = tfk.Sequential([
tfkl.InputLayer(input_shape=[encoded_size]),
tfkl.Reshape([1, 1, encoded_size]),
tfkl.Conv2DTranspose(2 * base_depth, 7, strides=1,
padding='valid', activation=tf.nn.leaky_relu),
tfkl.Conv2DTranspose(2 * base_depth, 5, strides=1,
padding='same', activation=tf.nn.leaky_relu),
tfkl.Conv2DTranspose(2 * base_depth, 5, strides=2,
padding='same', activation=tf.nn.leaky_relu),
tfkl.Conv2DTranspose(base_depth, 5, strides=1,
padding='same', activation=tf.nn.leaky_relu),
tfkl.Conv2DTranspose(base_depth, 5, strides=2,
padding='same', activation=tf.nn.leaky_relu),
tfkl.Conv2DTranspose(base_depth, 5, strides=1,
padding='same', activation=tf.nn.leaky_relu),
tfkl.Conv2D(filters=1, kernel_size=5, strides=1,
padding='same', activation=None),
tfkl.Flatten(),
tfpl.IndependentBernoulli(input_shape, tfd.Bernoulli.logits),
])
vae = tfk.Model(inputs=encoder.inputs,
outputs=decoder(encoder.outputs[0]))
negloglik = lambda x, rv_x: -rv_x.log_prob(x)
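# Minimal illustrative check (an addition; assumes the MNIST input_shape of
# (28, 28, 1) defined above): negloglik scores observed pixels under the
# model's output distribution and returns a scalar per example.
toy_dist = tfd.Independent(tfd.Bernoulli(logits=tf.zeros(input_shape)),
                           reinterpreted_batch_ndims=3)
print(negloglik(tf.zeros(input_shape), toy_dist))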
vae.compile(optimizer=tf.optimizers.Adam(learning_rate=1e-3),
loss=negloglik)
_ = vae.fit(train_dataset,
epochs=15,
validation_data=eval_dataset)
# We'll just examine ten random digits.
x = next(iter(eval_dataset))[0][:10]
xhat = vae(x)
assert isinstance(xhat, tfd.Distribution)
#@title Image Plot Util
import matplotlib.pyplot as plt
def display_imgs(x, y=None):
if not isinstance(x, (np.ndarray, np.generic)):
x = np.array(x)
plt.ioff()
n = x.shape[0]
fig, axs = plt.subplots(1, n, figsize=(n, 1))
if y is not None:
fig.suptitle(np.argmax(y, axis=1))
for i in range(n):
axs.flat[i].imshow(x[i].squeeze(), interpolation='none', cmap='gray')
axs.flat[i].axis('off')
plt.show()
plt.close()
plt.ion()
print('Originals:')
display_imgs(x)
print('Decoded Random Samples:')
display_imgs(xhat.sample())
print('Decoded Modes:')
display_imgs(xhat.mode())
print('Decoded Means:')
display_imgs(xhat.mean())
# Now, let's generate ten never-before-seen digits.
z = prior.sample(10)
xtilde = decoder(z)
assert isinstance(xtilde, tfd.Distribution)
print('Randomly Generated Samples:')
display_imgs(xtilde.sample())
print('Randomly Generated Modes:')
display_imgs(xtilde.mode())
print('Randomly Generated Means:')
display_imgs(xtilde.mean())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Remember that these machines are expensive, many hundreds of dollars a month. So make sure you stop the VM when you are not using it, either at the Vertex AI workbench or else using gcloud
Step2: After creating the notebook
Step3: Now you can import all of the python modules that this notebook depends on
Step4: Additional imports that may be necessary
Step5: A couple different strategies to use to distribute work, also helps move up some TF debug spew
Step6: Tokenizer
Step7: The tfhub_handle_preprocess does one-way preprocessing, which is going to be hard to work with. The SQuAD dataset returns a start index as well as the answer text. We will need to identify the matching tokens in the text and their start/end index.
Step10: Let's define a couple helper functions to help us go between strings and tokens.
Step11: Now let's try it out!
Step12: Dataset
Step13: Let's take a look at what we got
Step17: We will need to encode each question into the format expected by bert. This involves tokenizing the question and context, converting the tokens into numbers and then also including the word type and word mask inputs as well.
Step19: Add labels
Step20: Now let's process the dataset to generate our expected inputs and outputs.
Step21: Let's first pack the tensors into a dataset, which will reshape things from a single dictionary with many items in each key
Step22: Now they are in the format our processing functions from above expects, so lets generate all the inputs and outputs
Step25: Lets also define a function that is able to decode the input/output into a format that's easier for a person to read
Step26: Let's check an example from each dataset split
Step27: Now let's package things back up into a format that is easy to send to our model
Step28: And now filter out the bad examples
Step29: We are now ready to construct our model!
Step30: Now let's construct the model!
Step31: Let's see what it looks like
Step32: Let's try it out!
Step33: Train
Step34: Now we are ready to run our training program, choosing a couple of hyperparameters
Step35: Export
Step36: Evaluate
Step37: Let's first try it on the examples it trained about
Step38: Now let's see how it does against the validation data it never saw
Step40: Now let's try some really random data it has not seen. Here is a paragraph taken from the spyglass and its lens documentation for prow as well as testgrid.
Step41: Note that there are no known answers to the questions above. How well will it answer these questions?
Step42: Now let's see its predictions!
|
<ASSISTANT_TASK:>
Python Code:
vm_image_project='deeplearning-platform-release'
vm_image_family='tf-ent-2-8-cu113-notebooks'
machine_type='n1-standard-8'
location='us-central1-a'
accelerator_type='CHOOSE' # eg, 'NVIDIA_TESLA_V100'
accelerator_cores=1
project='MY_PROJECT_ID'
instance_name='MY_INSTANCE_NAME'
print('Run the following command:')
print(' \\\n '.join([
f' gcloud notebooks instances create {instance_name}',
f'--project={project}',
f'--vm-image-project={vm_image_project}',
f'--vm-image-family={vm_image_family}',
f'--machine-type={machine_type}',
f'--location={location}',
    f'--accelerator-type={accelerator_type}',
    f'--accelerator-core-count={accelerator_cores}',
]))
print('Stop your notebook:')
print(f' gcloud notebooks instances stop {instance_name} --project={project} --location={location}')
print('Delete your notebook:')
print(f' gcloud notebooks instances delete {instance_name} --project={project} --location={location}')
!pip install -U "tensorflow-text==2.8.*"
# tf-models-official 2.8.0 breaks the official.nlp.bert.configs import below for some reason, so use the previous version.
!pip install tf-models-official==2.7.1
!pip install pydot
!sudo apt install graphviz
import os
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_datasets as tfds
tfds.disable_progress_bar()
from official.modeling import tf_utils
from official import nlp
from official.nlp import bert
# Load the required submodules
from official.nlp import optimization
import official.nlp.bert.bert_models
import official.nlp.bert.configs
import official.nlp.bert.run_classifier
import official.nlp.bert.tokenization
import official.nlp.data.classifier_data_lib
import official.nlp.modeling.losses
import official.nlp.modeling.models
import official.nlp.modeling.networks
import tensorflow_text as text # A dependency of the preprocessing model
import tensorflow_addons as tfa
default_strategy = tf.distribute.get_strategy()
if os.environ.get('COLAB_TPU_ADDR'):
cluster_resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='')
tf.config.experimental_connect_to_cluster(cluster_resolver)
tf.tpu.experimental.initialize_tpu_system(cluster_resolver)
strategy = tf.distribute.TPUStrategy(cluster_resolver)
print('Using TPU')
elif tf.config.list_physical_devices('GPU'):
# https://www.tensorflow.org/guide/distributed_training
strategy = tf.distribute.MirroredStrategy()
# TODO(fejta): strategy = tf.distribute.OneDeviceStrategy(device="/gpu:0")
# TODO(fejta): default_strategy = tf.distribute.get_strategy()
print('Using GPU')
else:
raise ValueError('Running on CPU is not recommended.')
print('Select pretrained bert model')
# Pre-trained model
tfhub_handle_encoder = 'https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/4'
# Matching encoder
tfhub_handle_preprocess = 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3'
print(' ', tfhub_handle_encoder)
gs_folder_bert = "gs://cloud-tpu-checkpoints/bert/v3/uncased_L-12_H-768_A-12"
print('Files in', gs_folder_bert)
gs_files = tf.io.gfile.listdir(gs_folder_bert)
print(' ', '\n '.join(gs_files))
print('Create reversible tokenizer')
tokenizer = bert.tokenization.FullTokenizer(
vocab_file=os.path.join(gs_folder_bert, "vocab.txt"),
do_lower_case=True)
def to_token_ids(s):
    """Converts 'FUN stuffing' into ['fun', 'stuff', '##ing'] and then [7, 2089, 88]."""
return tokenizer.convert_tokens_to_ids(tokenizer.tokenize(s))
def from_token_ids(ids, lossy=True):
    """Converts [7, 2089, 88] into ['fun', 'stuff', '##ing'] and then 'fun stuff ##ing' or 'fun stuffing'."""
s = ' '.join(tokenizer.convert_ids_to_tokens(ids))
if lossy:
s = s.replace('[CLS] ', '').replace(' [PAD]', '').replace(' [SEP]', '\n\n').replace(' ##', '')
return s
print('String to token id list:')
orig = 'This is a very interesting sentence.'
ids = to_token_ids(orig)
print(orig, 'becomes:', ids)
print('Token ids to string:')
s = from_token_ids(ids)
print(ids, 'becomes:', s)
print('Load squad from tfds...')
out = tfds.load('squad/v1.1', with_info=True, batch_size=-1) # -1 means whole dataset in mem
squad, info = out
print('Done!')
squad.keys(), squad['train'].keys()
print(info)
def decode(t):
    """Decode a tensor string into a printable one."""
return t.numpy().decode('utf-8')
patches = {} # No patches, see -Copy1.ipynb, TODO(fejta): add these
# BERT uses [CLS] for the start and [SEP] separates the context and question
tok_cls, tok_sep = tokenizer.convert_tokens_to_ids(['[CLS]', '[SEP]'])
seq_length = 384
zeros = np.zeros(seq_length, int)
SKIP = (zeros, zeros, zeros), (-1, -1)
def tokenize_example(ex):
    """Returns a packed example after tokenizing the input/finding output."""
context = ex['context']
context_txt = decode(context)
context_ids = to_token_ids(context_txt)
question_ids = to_token_ids(decode(ex['question']))
# TODO(fejta): handle impossible questions
if 'answers' not in ex: # Probably an example we are predicting.
start_idx, end_idx = 0, 0
else: # Try and identify the start and end index.
# Check if this is a patched example
exid = decode(ex['id'])
ctx_start, atext_txt = patches.get(exid, (None, None))
if ctx_start and ctx_start < 0: # patch says to SKIP
return SKIP
# Now identify where the answer appears in the context.
atext_txt = atext_txt or decode(ex['answers']['text'][0])
answer_ids = to_token_ids(atext_txt)
if not ctx_start:
astart = ex['answers']['answer_start']
ctx_start = int(astart[0])
if ctx_start == 1:
ctx_start = 0
ctx_left = context_txt[:ctx_start]
left_ids = to_token_ids(ctx_left)
start_idx = len(left_ids)
end_idx = start_idx + len(answer_ids)
context_answer_ids = context_ids[start_idx:end_idx]
# Make sure have the answer
if context_answer_ids != answer_ids:
return SKIP
return pack_example(context_ids, question_ids, start_idx, end_idx)
def pack_example(context_ids, question_ids, start_idx, end_idx):
    """Returns a ((words, types, masks), (start, end)) tuple given the inputs."""
# Format is [CLS, CTX1, CTX2, ..., CTXN, SEP, Q1, Q2, ..., SEP, 0, 0, ...]
# AKA, CLS token, context tokens, SEP token, question tokens, SEP token, padding.
# The CLS and SEP tokens are special tokens BERT expects.
# NOTE: the BERT paper puts the question before the context, but
# this seems easier.
words = [tok_cls] + context_ids + [tok_sep] + question_ids + [tok_sep]
# NOTE: the BERT paper does something fancier here, we just SKIP inputs that
# are too long for now.
if len(words) > seq_length:
return SKIP
# The types input distinguishes context and question.
types = [0] * (len(context_ids)+2) + [1] * (len(question_ids) + 1)
# The mask input specifies non-padding tokens.
masks = [1] * len(types)
# Padding ensures that it is exactly seq_length.
pad_len = seq_length - len(masks)
padding = [0] * pad_len
types += padding
masks += padding
words += padding
# Sanity check the input
assert len(words) == len(types) == len(masks) == seq_length
if start_idx or end_idx:
# Sanity check the output
assert start_idx >= 0 and end_idx >= 0, (start_idx, end_idx)
assert start_idx < seq_length
assert end_idx < seq_length
ans_start = start_idx + 1
ans_end = end_idx + 1
else:
ans_start = -1
ans_end = -1
return (words, types, masks), (ans_start, ans_end)
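# Tiny illustrative call (an addition, using made-up token ids) showing the
# packed [CLS] context [SEP] question [SEP] layout and the shifted answer span.
(toy_words, toy_types, toy_masks), (s, e) = pack_example(
    context_ids=[10, 11, 12], question_ids=[20, 21], start_idx=1, end_idx=2)
print(toy_words[:8], toy_types[:8], toy_masks[:8], (s, e))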
def yield_examples(ds, stop=None):
for (i, ex) in enumerate(ds):
if i % 1000 == 0:
print(i, end=' ', flush=True)
if i == stop:
print('Stopping early')
break
yield tokenize_example(ex)
print('Done!')
def process_examples(*a, **kw):
    """Returns an input, output for each example in the dataset."""
inputs = []
labels = []
for x, y in yield_examples(*a, **kw):
inputs.append(x)
labels.append(y)
i = tf.constant(inputs, tf.int32)
words, types, masks = tf.unstack(tf.transpose(i, [1,0,2]))
l = tf.constant(labels)
starts, ends = tf.unstack(tf.transpose(l, [1, 0]))
assert len(words) == len(starts)
return {
'input_word_ids': words,
'input_type_ids': types,
'input_mask': masks,
}, {
'label_start': starts,
'label_end': ends,
}
squad['train']['question']
print('Pack the dictionary of tensors into an iterable dataset')
raw_train_ds = tf.data.Dataset.from_tensor_slices(squad['train'])
raw_valid_ds = tf.data.Dataset.from_tensor_slices(squad['validation'])
print('Computing validation labels')
valid_stop = None # 1000 to get started
valid_x, valid_y = process_examples(raw_valid_ds, stop=valid_stop)
print('Computing training labels')
train_stop = None # 10000 to get started
train_x, train_y = process_examples(raw_train_ds, stop=train_stop)
def decode_x(xs, ys=None, trueys=None, stop=None):
    """Decodes each x (and optional y, ground truth y) and prints it."""
lastctx = None # Try and avoid repeating the same question.
for i, toks in enumerate(xs['input_word_ids']):
toks = toks.numpy()
q = from_token_ids(toks)
ctx = q.split('\n')[0]
if lastctx and lastctx == ctx:
q = '\n'.join(q.split('\n')[1:])
else:
if lastctx:
print('-'*40)
lastctx = ctx
if ys or trueys:
q = q and q[:-2]
if ys:
q += ' ' + answer(toks, ys, i)
if trueys:
a = answer(toks, trueys, i)
q += f' (GTRUTH: {a})'
print(q)
if i == stop:
break
def answer(toks, ys, i):
    """Returns the answer extracted from the input tokens."""
s, e = ys['label_start'][i], ys['label_end'][i]
return from_token_ids(toks[s:e])
print('Training example')
print('='*80)
decode_x(train_x,train_y, stop=1)
print('Validation example')
print('='*80)
decode_x(valid_x, valid_y, stop=1)
train_ds = tf.data.Dataset.from_tensor_slices({'x': train_x, 'y': train_y})
valid_ds = tf.data.Dataset.from_tensor_slices({'x': valid_x, 'y': valid_y})
print('Dropping bad examples')
ignore_rejects = lambda ex: ex['y']['label_end']>=0
filt_valid_ds = valid_ds.filter(ignore_rejects)
filt_train_ds = train_ds.filter(ignore_rejects)
# Who knows what this does! But AUTO sounds promising so haven't bothered to figure it out
AUTOTUNE = tf.data.AUTOTUNE
def softmax(name, inp):
# The input here will be something like (seq_length, hidden)
# So we'll wind up doing (seq_len, hidden) * (hidden, 1)
# and wind up with (seq_len, 1), aka an output for each
# position in the input sequence.
net = tf.keras.layers.Dense(1, name=name, use_bias=False)(inp)
# Flatten this, aka change (seq_len, 1) to (seq_len,)
net = tf.keras.layers.Flatten()(net)
# Now apply softmax, aka an S-ish shape with a min of 0 and max of 1
net = tf.keras.layers.Activation(tf.keras.activations.softmax)(net)
return net
def build_highlighter_model():
sentence_features = [
'input_word_ids',
'input_type_ids',
'input_mask',
]
# Input tells the network what it should expect.
# The (None,) here that it will be* a rank 1 tensor of integers
# This represents the input token ids, and should match seq_length
# of 384
#
# *: actually this should be a batch of rank 1 tensors, making this
# a rank two tensor of (batch_size, seq_length).
inp = {
ft: tf.keras.layers.Input(shape=(None,), dtype=tf.int32, name=ft)
for ft in sentence_features
}
# This handle is a URL to a pretrained BERT model.
# It will cause tensorflow_hub load the right architecture
# And preconfigure all the weights.
encodings = hub.KerasLayer(tfhub_handle_encoder, trainable=True, name='BERT')(inp)
#
net = encodings['sequence_output']
net = tf.keras.layers.Dropout(0.1)(net)
start = softmax('start_logit', net)
end = softmax('end_logit', net)
# so they have a name
outs = {
'label_start': tf.keras.layers.Lambda(tf.identity, name='start_pos')(start),
'label_end': tf.keras.layers.Lambda(tf.identity, name='end_pos')(end),
}
return tf.keras.Model(inputs=inp, outputs=outs, name='highlighter')
highlighter = build_highlighter_model()
highlighter.summary()
tf.keras.utils.plot_model(highlighter)
for ex in filt_valid_ds.batch(1):
predy = highlighter(ex['x']) # Print this to see the prediction for all 384 positions
print(np.argmax(predy['label_start']), np.argmax(predy['label_end']))
break
def prepare_ds(dataset, batch_size, training):
num_examples = len(list(dataset)) # Maybe there's a faster way to do this...
if training:
dataset = dataset.shuffle(num_examples)
dataset = dataset.repeat()
dataset = dataset.batch(batch_size)
dataset = dataset.map(lambda ex: (ex['x'], ex['y']))
dataset = dataset.cache().prefetch(buffer_size=AUTOTUNE)
return dataset, num_examples
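# Quick illustrative peek (an addition; a tiny batch size is enough here):
# confirms prepare_ds yields (features, labels) pairs shaped for model.fit.
peek_ds, peek_len = prepare_ds(filt_valid_ds, batch_size=2, training=False)
print(peek_len, peek_ds.element_spec)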
def train(model, batch_size, epochs, init_lr):
print('Preparing validation data...')
vds, valid_len = prepare_ds(filt_valid_ds, batch_size, training=False)
print('Preparing training data...')
tds, train_len = prepare_ds(filt_train_ds, batch_size, training=True)
steps_per_epoch = train_len / batch_size
num_train_steps = steps_per_epoch * epochs
num_warmup_steps = num_train_steps / 10
validation_steps = valid_len / batch_size
print('Ready to train!')
with default_strategy.scope():
optimizer = optimization.create_optimizer(
init_lr=init_lr,
num_train_steps=num_train_steps,
num_warmup_steps=num_warmup_steps,
optimizer_type='adamw',
)
loss = {
'label_start': tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),
'label_end': tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),
}
metrics = ['accuracy']
model.compile(optimizer=optimizer, loss=loss, metrics=metrics)
model.fit(
x=tds,
validation_data=vds,
steps_per_epoch=steps_per_epoch,
epochs=epochs,
validation_steps=validation_steps,
)
train(
model=highlighter,
epochs=3,
batch_size=16,
init_lr=5e-5,
)
def save_model(tfds_name='squad'):
main_save_path = './my_models'
bert_type = tfhub_handle_encoder.split('/')[-2]
saved_model_name = f'{tfds_name.replace("/", "_")}_{bert_type}'
saved_model_path = os.path.join(main_save_path, saved_model_name)
print('Saving', saved_model_path)
# Save everything on the Colab host (even the variables from TPU memory)
save_options = tf.saved_model.SaveOptions(experimental_io_device='/job:localhost')
highlighter.save(saved_model_path, include_optimizer=True,options=save_options)
return saved_model_path
saved_model_path = save_model()
with tf.device('/job:localhost'):
default_model_path = './my_models/squad_bert_en_uncased_L-12_H-768_A-12'
reloaded_model = tf.saved_model.load(globals().get('saved_model_path', default_model_path))
def evaluate(xs, true_ys=None, stop=None, batch=16, model=reloaded_model):
    data = {'x': xs}
    if true_ys is not None:  # allow evaluating examples without ground-truth labels
        data['y'] = true_ys
    ds = tf.data.Dataset.from_tensor_slices(data).batch(batch)
    for i, examples in enumerate(ds):  # avoid shadowing the `batch` argument
        x = examples['x']
        pred = model(x)
        y = {k: np.argmax(pred[k], 1) for k in ['label_start', 'label_end']}
        true_y = examples.get('y')
        if i == 0:
            print('INPUT')
            print(x)
            print('RAW OUTPUT PREDICTIONS')
            print(pred)
            print('ARGMAX OUTPUT')
            print(y)
            print('='*40)
        decode_x(x, y, true_y)
        if i == stop:
            print('Stopping early')
            break
    else:
        print('Done!')
print('Training example')
print('='*80)
evaluate(train_x,train_y, stop=1, batch=4)
print('Validation example')
print('='*80)
evaluate(valid_x, valid_y, stop=1, batch=4)
# Helper function to help write multiple questions about a single context paragraph.
def user_dataset(contexts):
    """Converts a [(ctx, (q1, q2, q3)), ...] into [{'context': ctx, 'question': q1}, ...]."""
for context, questions in contexts:
for q in questions:
yield {'context': context, 'question': q}
user_ds = user_dataset([
(
'''
Spyglass is a pluggable artifact viewer framework for Prow.
It collects artifacts (usually files in a storage bucket) from various sources and distributes them to registered viewers,
which are responsible for consuming them and rendering a view.
''',
(
'What does spyglass collect?',
'Where does spyglass collect artifacts from?',
'What is spyglass?',
'What are the registered viewers responsible for?',
'What is spyglass a framework for?',
),
),
(
'''
The HTML generated by a lens can reference static assets that will be served by Deck on behalf of your lens.
Scripts and stylesheets can be referenced in the output of the Header() function
(which is inserted into the <head> element).
Relative references into your directory will work:
spyglass adds a <base> tag that references the expected output directory.
Spyglass lenses have access to a spyglass global that provides a number of APIs
to interact with your lens backend and the rest of the world.
Your lens is rendered in a sandboxed iframe, so you generally cannot interact without using these APIs.
''',
(
'What can lenses reference?',
'What serves the HTML generated by the lens?',
'How do relative references work?',
'What provides the spyglass APIs?',
'What do the spyglass APIs allow?',
'Where is your lens rendered?',
),
),
(
'''
Fragment URLs (the part after the #) are supported fairly transparently, despite being in an iframe.
The parent page muxes all the lens's fragments and ensures that if the page is loaded,
each lens receives the fragment it expects.
Changing your fragment will automatically update the parent page's fragment.
If the fragment matches the ID or name of an element, the page will scroll such that that element is visible.
Anchor links (<a href="#something">) would usually not work well in conjunction with the <base> tag.
To resolve this, we rewrite all links of this form to behave as expected both on page load and on DOM modification.
In most cases, this should be transparent.
If you want users to copy links via right click -> copy link, however, this will not work nicely.
Instead, consider setting the href attribute to something from spyglass.makeFragmentLink,
but handling clicks by manually setting location.hash to the desired fragment.
''',
(
'What is a fragment URL?',
'When a fragment matches the ID, what does the page do?',
'How well does copying via right click work?',
'What should you set the href attribute to?',
),
),
(
'''
The three sizes are Standard, Compact, and Super Compact.
You can also specify width=X in the URL (X > 3) to customize the width.
For small widths, this may mean the date and/or changelist, or other custom headers, are no longer visible.
''',
(
'How many sizes are there?',
'How do you customize the width?',
'What might happen when the width is small?',
),
),
])
def repackage():
out = {}
for ex in user_ds:
for key in ex:
out.setdefault(key, []).append(ex[key])
return out
raw_user_ds = tf.data.Dataset.from_tensor_slices(repackage())
user_x, user_y = process_examples(raw_user_ds)
user_ds = tf.data.Dataset.from_tensor_slices({'x': user_x})
evaluate(user_x)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Transform the full JSON file into a CSV, removing any stuff that we won't need
Step3: Creates CSVs of text from comments made by users who have posted about anorexia or obesity.
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
json_file = 'sample_data'
list(pd.read_json(json_file, lines=True))
import csv
import json
from nltk.tokenize import TweetTokenizer
from tqdm import tqdm
MIN_NUM_WORD_TOKENS = 10
TOTAL_NUM_LINES = 53851542 # $ wc -l data_full.json
PBAR_UPDATE_SIZE = 10000
tokenizer = TweetTokenizer()
def _ok_to_write(entries):
if entries['author'] == '[deleted]':
return False
if entries['body'] == '[deleted]' or len(tokenizer.tokenize(entries['body'])) < MIN_NUM_WORD_TOKENS:
return False
return True
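# Illustrative check of the filter (an addition, with a made-up entry): the
# body below has exactly ten tokens, so it just clears MIN_NUM_WORD_TOKENS.
print(_ok_to_write({'author': 'someone',
                    'body': 'this sentence right here has exactly ten word tokens total'}))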
out_columns = [
'author',
'body',
'subreddit',
'subreddit_id',
'score',
]
in_filename = 'data_full.json'
out_filename = 'data_full_preprocessed.csv'
count = 0
pbar = tqdm(total=TOTAL_NUM_LINES)
with open(out_filename, 'w') as o:
writer = csv.DictWriter(o, fieldnames=out_columns, extrasaction='ignore',
delimiter=',', quoting=csv.QUOTE_MINIMAL)
writer.writeheader()
with open(in_filename, 'r') as f:
for line in f:
count += 1
if count % PBAR_UPDATE_SIZE == 0:
pbar.update(PBAR_UPDATE_SIZE)
entries = json.loads(line)
if _ok_to_write(entries):
writer.writerow(entries)
print('Done. Processed {} lines total.'.format(count))
import pandas as pd
from tqdm import tqdm
from nltk.corpus import wordnet
from nltk.stem.porter import *
from nltk.stem import WordNetLemmatizer
from nltk.tokenize import TweetTokenizer
tokenizer = TweetTokenizer()
wordnet_lemmatizer = WordNetLemmatizer()
# Create synonym sets for obesity and anorexia
def syn_set(word_list):
syns = set()
for word in word_list:
for synset in wordnet.synsets(word):
for lemma in synset.lemmas():
syns.add(lemma.name())
return syns
OBESITY_SYNS = syn_set(['obesity'])
ANOREXIA_SYNS = syn_set(['anorexia'])
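# Illustrative peek (an addition): inspect the lemmas each synonym set contains.
print(sorted(OBESITY_SYNS))
print(sorted(ANOREXIA_SYNS))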
def row_filter_fn(df, syns):
    """Returns True if the row should be included, False otherwise."""
# Check if any synonyms can be found.
if set([wordnet_lemmatizer.lemmatize(token.lower()) for token in tokenizer.tokenize(df)]) & syns:
return True
return False
csv_filename = 'data_full_preprocessed.csv'
chunksize = 10000
count = 0
obesity_data_frames = []
anorexia_data_frames = []
for chunk in tqdm(pd.read_csv(csv_filename, chunksize=chunksize)):
obesity_df = chunk[chunk['body'].apply(row_filter_fn, syns=OBESITY_SYNS)]
if not obesity_df.empty:
obesity_data_frames.append(obesity_df)
anorexia_df = chunk[chunk['body'].apply(row_filter_fn, syns=ANOREXIA_SYNS)]
if not anorexia_df.empty:
anorexia_data_frames.append(anorexia_df)
count += 1
#if count == 100: break
print('Total # chunks processed: {}.'.format(count))
# Write out to CSVs.
pd.concat(obesity_data_frames).to_csv('obesity.csv', index=False)
pd.concat(anorexia_data_frames).to_csv('anorexia.csv', index=False)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create a client and fill in the server address:
Step2: Call the parse API with an article to get HanLP's accurate analysis.
Step3: Visualization
Step4: Apply for an auth key
Step5: Load the model
Step6: Call hanlp.load to load it; the model is downloaded to the local cache automatically. NLP consists of many tasks, and tokenization is only the most basic one. Rather than creating a separate model for each task, it is better to use HanLP's joint model to perform several tasks at once:
Step7: Batch multi-task analysis
Step8: Visualization
Step9: Specifying tasks
Step10: Perform coarse-grained tokenization
Step11: Perform tokenization and PKU part-of-speech tagging
Step12: Perform coarse-grained tokenization and PKU part-of-speech tagging
Step13: Perform tokenization and MSRA-standard NER
Step14: Perform tokenization, part-of-speech tagging and dependency parsing
Step15: Convert to CoNLL format:
Step16: Perform tokenization, part-of-speech tagging and constituency parsing
Step17: Print the constituency tree in bracketed form
Step18: For the meaning of the tag sets, see the Linguistic Annotation Guidelines and the Format Specification. We purchased, annotated, or adopted the world's largest and most diverse corpora for joint multilingual multi-task learning, so HanLP's tag sets also have the broadest coverage.
Step19: As well as a multilingual joint model supporting 104 languages:
|
<ASSISTANT_TASK:>
Python Code:
!pip install hanlp_restful
from hanlp_restful import HanLPClient
HanLP = HanLPClient('https://www.hanlp.com/api', auth=None, language='zh') # leave auth as None for anonymous access; language: zh = Chinese, mul = multilingual
doc = HanLP.parse("2021年HanLPv2.1为生产环境带来次世代最先进的多语种NLP技术。阿婆主来到北京立方庭参观自然语义科技公司。")
print(doc)
doc.pretty_print()
!pip install hanlp -U
import hanlp
hanlp.pretrained.mtl.ALL # MTL = multi-task; see the model name for the tasks covered, and its last field (or the corresponding corpus) for the language
HanLP = hanlp.load(hanlp.pretrained.mtl.CLOSE_TOK_POS_NER_SRL_DEP_SDP_CON_ELECTRA_BASE_ZH)
doc = HanLP(['2021年HanLPv2.1为生产环境带来次世代最先进的多语种NLP技术。', '阿婆主来到北京立方庭参观自然语义科技公司。'])
print(doc)
doc.pretty_print()
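# Illustrative (an addition; assumes the joint model above includes the
# 'tok/fine' task): a Document behaves like a dict keyed by task name.
for sentence_tokens in doc['tok/fine']:
    print(sentence_tokens)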
HanLP('阿婆主来到北京立方庭参观自然语义科技公司。', tasks='tok').pretty_print()
HanLP('阿婆主来到北京立方庭参观自然语义科技公司。', tasks='tok/coarse').pretty_print()
HanLP('阿婆主来到北京立方庭参观自然语义科技公司。', tasks='pos/pku').pretty_print()
HanLP('阿婆主来到北京立方庭参观自然语义科技公司。', tasks=['tok/coarse', 'pos/pku'], skip_tasks='tok/fine').pretty_print()
HanLP('阿婆主来到北京立方庭参观自然语义科技公司。', tasks='ner/msra').pretty_print()
doc = HanLP('阿婆主来到北京立方庭参观自然语义科技公司。', tasks=['pos', 'dep'])
doc.pretty_print()
print(doc.to_conll())
doc = HanLP('阿婆主来到北京立方庭参观自然语义科技公司。', tasks=['pos', 'con'])
doc.pretty_print()
print(doc['con']) # str(doc['con']) converts the constituency list into bracketed form
ja = hanlp.load(hanlp.pretrained.mtl.NPCMJ_UD_KYOTO_TOK_POS_CON_BERT_BASE_CHAR_JA)
ja(['2021年、HanLPv2.1は次世代の最先端多言語NLP技術を本番環境に導入します。',
'奈須きのこは1973年11月28日に千葉県円空山で生まれ、ゲーム制作会社「ノーツ」の設立者だ。',]).pretty_print()
from hanlp.utils.torch_util import gpus_available
if gpus_available():
mul = hanlp.load(hanlp.pretrained.mtl.UD_ONTONOTES_TOK_POS_LEM_FEA_NER_SRL_DEP_SDP_CON_XLMR_BASE)
mul(['In 2021, HanLPv2.1 delivers state-of-the-art multilingual NLP techniques to production environments.',
'2021年、HanLPv2.1は次世代の最先端多言語NLP技術を本番環境に導入します。',
'2021年 HanLPv2.1为生产环境带来次世代最先进的多语种NLP技术。']).pretty_print()
else:
    print('Running XLMR_BASE in a GPU environment is recommended.')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Solution
Step2: Use matching indices
Step3: Use a library
Step4: Numpy Magic
Step5: Compare methods
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt

np.random.seed(10)
p, q = (np.random.rand(i, 2) for i in (4, 5))
p_big, q_big = (np.random.rand(i, 80) for i in (100, 120))
print(p, "\n\n", q)
def naive(p, q):
    # One possible solution: a plain double loop over all row pairs.
    d = np.zeros((p.shape[0], q.shape[0]))
    for i in range(p.shape[0]):
        for j in range(q.shape[0]):
            d[i, j] = np.sqrt(np.sum((p[i] - q[j]) ** 2))
    return d
rows, cols = np.indices((p.shape[0], q.shape[0]))
print(rows, end='\n\n')
print(cols)
print(p[rows.ravel()], end='\n\n')
print(q[cols.ravel()])
def with_indices(p, q):
    # One possible solution: index both arrays with the flattened index grids,
    # as demonstrated above, then compute all distances in one vectorized pass.
    rows, cols = np.indices((p.shape[0], q.shape[0]))
    distances = np.sqrt(np.sum((p[rows.ravel()] - q[cols.ravel()]) ** 2, axis=1))
    return distances.reshape((p.shape[0], q.shape[0]))
from scipy.spatial.distance import cdist
def scipy_version(p, q):
return cdist(p, q)
def tensor_broadcasting(p, q):
return np.sqrt(np.sum((p[:,np.newaxis,:]-q[np.newaxis,:,:])**2, axis=2))
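# Sanity check (an addition): all methods should agree with scipy's cdist.
print(np.allclose(naive(p, q), scipy_version(p, q)))
print(np.allclose(with_indices(p, q), scipy_version(p, q)))
print(np.allclose(tensor_broadcasting(p, q), scipy_version(p, q)))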
methods = [naive, with_indices, scipy_version, tensor_broadcasting]
timers = []
for f in methods:
r = %timeit -o f(p_big, q_big)
timers.append(r)
plt.figure(figsize=(10,6))
plt.bar(np.arange(len(methods)), [r.best*1000 for r in timers], log=False) # Set log to True for logarithmic scale
plt.xticks(np.arange(len(methods))+0.2, [f.__name__ for f in methods], rotation=30)
plt.xlabel('Method')
plt.ylabel('Time (ms)')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data preprocessing
Step2: Encoding the words
Step3: Encoding the labels
Step4: Okay, a couple issues here. We seem to have one review with zero length. And, the maximum review length is way too many steps for our RNN. Let's truncate to 200 steps. For reviews shorter than 200, we'll pad with 0s. For reviews longer than 200, we can truncate them to the first 200 words.
Step5: Exercise
Step6: Training, Validation, Test
Step7: With train, validation, and test fractions of 0.8, 0.1, 0.1, the final shapes should look like
Step8: For the network itself, we'll be passing in our 200 element long review vectors. Each batch will be batch_size vectors. We'll also be using dropout on the LSTM layer, so we'll make a placeholder for the keep probability.
Step9: Embedding
Step10: LSTM cell
Step11: RNN forward pass
Step12: Output
Step13: Validation accuracy
Step14: Batching
Step15: Training
Step16: Testing
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import tensorflow as tf
with open('reviews.txt', 'r') as f:
reviews = f.read()
with open('labels.txt', 'r') as f:
labels = f.read()
reviews[:2000]
from string import punctuation
all_text = ''.join([c for c in reviews if c not in punctuation])
reviews = all_text.split('\n')
all_text = ' '.join(reviews)
words = all_text.split()
all_text[:2000]
words[:100]
from collections import Counter
counts = Counter(words)
vocab = sorted(counts, key=counts.get, reverse=True)
vocab_to_int = {word: ii for ii, word in enumerate(vocab, 1)}
reviews_ints = []
for each in reviews:
reviews_ints.append([vocab_to_int[word] for word in each.split()])
labels = labels.split('\n')
labels = np.array([1 if each == 'positive' else 0 for each in labels])
review_lens = Counter([len(x) for x in reviews_ints])
print("Zero-length reviews: {}".format(review_lens[0]))
print("Maximum review length: {}".format(max(review_lens)))
# Filter out reviews with 0 length, keeping the labels aligned with the reviews
non_zero_idx = [ii for ii, review in enumerate(reviews_ints) if len(review) > 0]
reviews_ints = [reviews_ints[ii] for ii in non_zero_idx]
labels = np.array([labels[ii] for ii in non_zero_idx])
seq_len = 200
features = np.zeros((len(reviews_ints), seq_len), dtype=int)
for i, row in enumerate(reviews_ints):
features[i, -len(row):] = np.array(row)[:seq_len]
features[:10,:100]
split_frac = 0.8
split_idx = int(len(features)*0.8)
train_x, val_x = features[:split_idx], features[split_idx:]
train_y, val_y = labels[:split_idx], labels[split_idx:]
test_idx = int(len(val_x)*0.5)
val_x, test_x = val_x[:test_idx], val_x[test_idx:]
val_y, test_y = val_y[:test_idx], val_y[test_idx:]
print("\t\t\tFeature Shapes:")
print("Train set: \t\t{}".format(train_x.shape),
"\nValidation set: \t{}".format(val_x.shape),
"\nTest set: \t\t{}".format(test_x.shape))
lstm_size = 256
lstm_layers = 1
batch_size = 500
learning_rate = 0.001
n_words = len(vocab)
# Create the graph object
graph = tf.Graph()
# Add nodes to the graph
with graph.as_default():
inputs_ = tf.placeholder(tf.int32, [None, None], name='inputs')
labels_ = tf.placeholder(tf.int32, [None, None], name='labels')
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
# Size of the embedding vectors (number of units in the embedding layer)
embed_size = 300
with graph.as_default():
embedding = tf.Variable(tf.random_uniform((n_words, embed_size), -1, 1))
embed = tf.nn.embedding_lookup(embedding, inputs_)
with graph.as_default():
# Your basic LSTM cell
lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
# Add dropout to the cell
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
# Stack up multiple LSTM layers, for deep learning
cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)
# Getting an initial state of all zeros
initial_state = cell.zero_state(batch_size, tf.float32)
with graph.as_default():
outputs, final_state = tf.nn.dynamic_rnn(cell, embed,
initial_state=initial_state)
with graph.as_default():
predictions = tf.contrib.layers.fully_connected(outputs[:, -1], 1, activation_fn=tf.sigmoid)
cost = tf.losses.mean_squared_error(labels_, predictions)
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)
with graph.as_default():
correct_pred = tf.equal(tf.cast(tf.round(predictions), tf.int32), labels_)
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
def get_batches(x, y, batch_size=100):
n_batches = len(x)//batch_size
x, y = x[:n_batches*batch_size], y[:n_batches*batch_size]
for ii in range(0, len(x), batch_size):
yield x[ii:ii+batch_size], y[ii:ii+batch_size]
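# Quick illustrative sanity check (an addition): each batch is
# (batch_size, seq_len) for x and (batch_size,) for y.
for x, y in get_batches(train_x, train_y, batch_size):
    print(x.shape, y.shape)
    break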
epochs = 10
with graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=graph) as sess:
sess.run(tf.global_variables_initializer())
iteration = 1
for e in range(epochs):
state = sess.run(initial_state)
for ii, (x, y) in enumerate(get_batches(train_x, train_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 0.5,
initial_state: state}
loss, state, _ = sess.run([cost, final_state, optimizer], feed_dict=feed)
if iteration%5==0:
print("Epoch: {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Train loss: {:.3f}".format(loss))
if iteration%25==0:
val_acc = []
val_state = sess.run(cell.zero_state(batch_size, tf.float32))
for x, y in get_batches(val_x, val_y, batch_size):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: val_state}
batch_acc, val_state = sess.run([accuracy, final_state], feed_dict=feed)
val_acc.append(batch_acc)
print("Val acc: {:.3f}".format(np.mean(val_acc)))
iteration +=1
saver.save(sess, "checkpoints/sentiment.ckpt")
test_acc = []
with tf.Session(graph=graph) as sess:
    saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
test_state = sess.run(cell.zero_state(batch_size, tf.float32))
for ii, (x, y) in enumerate(get_batches(test_x, test_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: test_state}
batch_acc, test_state = sess.run([accuracy, final_state], feed_dict=feed)
test_acc.append(batch_acc)
print("Test accuracy: {:.3f}".format(np.mean(test_acc)))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Benchmark
Step2: Implement Levenshtein term similarity matrix
Step3: Director class benchmark
Step4: The following tables show how long it takes to construct a term similarity matrix (the duration column), how many nonzero elements there are in the matrix (the matrix_nonzero column) and the mean term similarity consumption speed (the consumption_speed column) as we vary the dictionary size (the dictionary_size column) the maximum number of nonzero elements outside the diagonal in every column of the matrix (the nonzero_limit column), the matrix symmetry constraint (the symmetric column), and the matrix positive definiteness constraing (the positive_definite column). Ten independendent measurements were taken. The top table shows the mean values and the bottom table shows the standard deviations.
Step5: Builder class benchmark
Step6: The following tables show how long it takes to retrieve the most similar terms for all terms in a dictionary (the production_duration column) and the mean term similarity production speed (the production_speed column) as we vary the dictionary size (the dictionary_size column), and the maximum number of most similar terms that will be retrieved (the nonzero_limit column). Ten independendent measurements were taken. The top table shows the mean values and the bottom table shows the standard deviations.
Step7: LevenshteinSimilarityIndex
Step8: The following tables show how long it takes to retrieve the most similar terms for ten randomly sampled terms from a dictionary (the production_duration column), the mean term similarity production speed (the production_speed column) and the mean term similarity processing speed (the processing_speed column) as we vary the dictionary size (the dictionary_size column), and the maximum number of most similar terms that will be retrieved (the nonzero_limit column). Ten independendent measurements were taken. The top table shows the mean values and the bottom table shows the standard deviations.
Step9: WordEmbeddingSimilarityIndex
Step10: The following tables show how long it takes to construct an ANNOY index and the builder class instance (the constructor_duration column), how long it takes to retrieve the most similar terms for 1,000 randomly sampled terms from a dictionary (the production_duration column), the mean term similarity production speed (the production_speed column) and the mean term similarity processing speed (the processing_speed column) as we vary the dictionary size (the dictionary_size column), the maximum number of most similar terms that will be retrieved (the nonzero_limit column), and the number of constructed ANNOY trees (the annoy_n_trees column). Ten independendent measurements were taken. The top table shows the mean values and the bottom table shows the standard deviations.
Step11: Implement fast SCM between corpora
Step12: SCM between two documents
Step13: The following tables show how long it takes to compute the inner_product method between all document vectors in a corpus (the duration column), how many nonzero elements there are in a corpus matrix (the corpus_nonzero column), how many nonzero elements there are in a term similarity matrix (the matrix_nonzero column) and the mean document similarity production speed (the speed column) as we vary the dictionary size (the dictionary_size column), the size of the corpus (the corpus_size column), the maximum number of nonzero elements in a single column of the matrix (the nonzero_limit column), and the matrix symmetry constraint (the symmetric column). Ten independendent measurements were taken. The top table shows the mean values and the bottom table shows the standard deviations.
Step14: SCM between a document and a corpus
Step15: The speed is inversely proportional to matrix_nonzero. Computing a normalized inner product (normalized${}={}$True) results in a constant speed decrease.
Step16: SCM between two corpora
|
<ASSISTANT_TASK:>
Python Code:
!git rev-parse HEAD
from copy import deepcopy
from datetime import timedelta
from itertools import product
import logging
from math import floor, ceil, log10
import pickle
from random import sample, seed, shuffle
from time import time
import numpy as np
import pandas as pd
from tqdm import tqdm_notebook
def tqdm(iterable, total=None, desc=None):
if total is None:
total = len(iterable)
for num_done, element in enumerate(tqdm_notebook(iterable, total=total)):
logger.info("%s: %d / %d", desc, num_done, total)
yield element
from gensim.corpora import Dictionary
import gensim.downloader as api
from gensim.similarities.index import AnnoyIndexer
from gensim.similarities import SparseTermSimilarityMatrix
from gensim.similarities import UniformTermSimilarityIndex
from gensim.similarities import LevenshteinSimilarityIndex
from gensim.models import WordEmbeddingSimilarityIndex
from gensim.utils import simple_preprocess
RANDOM_SEED = 12345
logger = logging.getLogger()
fhandler = logging.FileHandler(filename='matrix_speed.log', mode='a')
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
fhandler.setFormatter(formatter)
logger.addHandler(fhandler)
logger.setLevel(logging.INFO)
pd.set_option('display.max_rows', None, 'display.max_seq_items', None)
def benchmark_results(benchmark, configurations, results_filename):
    """Repeatedly run a benchmark callable given various configurations and
    return a list of results.

    Parameters
    ----------
    benchmark : callable tuple -> dict
        A benchmark callable that accepts a configuration and returns results.
    configurations : iterable of tuple
        An iterable of configurations that are used for calling the benchmark function.
    results_filename : str
        A filename of a file that will be used to persistently store the results using
        pickle. If the file exists, then the function will load the stored results
        instead of calling the benchmark callable.

    Returns
    -------
    iterable of tuple
        The return values of the individual invocations of the benchmark callable.
    """
try:
with open(results_filename, "rb") as file:
results = pickle.load(file)
except IOError:
configurations = list(configurations)
shuffle(configurations)
results = list(tqdm(
(benchmark(configuration) for configuration in configurations),
total=len(configurations), desc="benchmark"))
with open(results_filename, "wb") as file:
pickle.dump(results, file)
return results
full_model = api.load("word2vec-google-news-300")
try:
full_dictionary = Dictionary.load("matrix_speed.dictionary")
except IOError:
full_dictionary = Dictionary([[term] for term in full_model.vocab.keys()])
full_dictionary.save("matrix_speed.dictionary")
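# Illustrative (an addition, with a toy dictionary): the uniform index returns
# the same similarity for every term pair, which makes it a convenient stand-in
# for benchmarking matrix construction speed in isolation.
toy_dictionary = Dictionary([["one", "two", "three"]])
toy_index = UniformTermSimilarityIndex(toy_dictionary)
print(list(toy_index.most_similar("one", topn=2)))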
def benchmark(configuration):
dictionary, nonzero_limit, symmetric, positive_definite, repetition = configuration
index = UniformTermSimilarityIndex(dictionary)
start_time = time()
matrix = SparseTermSimilarityMatrix(
index, dictionary, nonzero_limit=nonzero_limit, symmetric=symmetric,
positive_definite=positive_definite, dtype=np.float16).matrix
end_time = time()
duration = end_time - start_time
return {
"dictionary_size": len(dictionary),
"nonzero_limit": nonzero_limit,
"matrix_nonzero": matrix.nnz,
"repetition": repetition,
"symmetric": symmetric,
"positive_definite": positive_definite,
"duration": duration, }
dictionary_sizes = [10**k for k in range(3, int(ceil(log10(len(full_dictionary)))))]
seed(RANDOM_SEED)
dictionaries = []
for size in tqdm(dictionary_sizes, desc="dictionaries"):
dictionary = Dictionary([sample(list(full_dictionary.values()), size)])
dictionaries.append(dictionary)
dictionaries.append(full_dictionary)
nonzero_limits = [1, 10, 100]
symmetry = (True, False)
positive_definiteness = (True, False)
repetitions = range(10)
configurations = product(dictionaries, nonzero_limits, symmetry, positive_definiteness, repetitions)
results = benchmark_results(benchmark, configurations, "matrix_speed.director_results")
df = pd.DataFrame(results)
df["consumption_speed"] = df.dictionary_size * df.nonzero_limit / df.duration
df = df.groupby(["dictionary_size", "nonzero_limit", "symmetric", "positive_definite"])
def display(df):
df["duration"] = [timedelta(0, duration) for duration in df["duration"]]
df["matrix_nonzero"] = [int(nonzero) for nonzero in df["matrix_nonzero"]]
df["consumption_speed"] = ["%.02f Kword pairs / s" % (speed / 1000) for speed in df["consumption_speed"]]
return df
display(df.mean()).loc[
[10000, len(full_dictionary)], :, :].loc[
:, ["duration", "matrix_nonzero", "consumption_speed"]]
display(df.apply(lambda x: (x - x.mean()).std())).loc[
[10000, len(full_dictionary)], :, :].loc[
:, ["duration", "matrix_nonzero", "consumption_speed"]]
def benchmark(configuration):
dictionary, nonzero_limit, repetition = configuration
start_time = time()
index = UniformTermSimilarityIndex(dictionary)
end_time = time()
constructor_duration = end_time - start_time
start_time = time()
for term in dictionary.values():
for _j, _k in zip(index.most_similar(term, topn=nonzero_limit), range(nonzero_limit)):
pass
end_time = time()
production_duration = end_time - start_time
return {
"dictionary_size": len(dictionary),
"nonzero_limit": nonzero_limit,
"repetition": repetition,
"constructor_duration": constructor_duration,
"production_duration": production_duration, }
nonzero_limits = [1, 10, 100, 1000]
configurations = product(dictionaries, nonzero_limits, repetitions)
results = benchmark_results(benchmark, configurations, "matrix_speed.builder_results.uniform")
df = pd.DataFrame(results)
df["processing_speed"] = df.dictionary_size ** 2 / df.production_duration
df["production_speed"] = df.dictionary_size * df.nonzero_limit / df.production_duration
df = df.groupby(["dictionary_size", "nonzero_limit"])
def display(df):
df["constructor_duration"] = [timedelta(0, duration) for duration in df["constructor_duration"]]
df["production_duration"] = [timedelta(0, duration) for duration in df["production_duration"]]
df["processing_speed"] = ["%.02f Kword pairs / s" % (speed / 1000) for speed in df["processing_speed"]]
df["production_speed"] = ["%.02f Kword pairs / s" % (speed / 1000) for speed in df["production_speed"]]
return df
display(df.mean()).loc[
[1000, len(full_dictionary)], :, :].loc[
:, ["production_duration", "production_speed"]]
display(df.apply(lambda x: (x - x.mean()).std())).loc[
[1000, len(full_dictionary)], :, :].loc[
:, ["production_duration", "production_speed"]]
def benchmark(configuration):
dictionary, nonzero_limit, query_terms, repetition = configuration
start_time = time()
index = LevenshteinSimilarityIndex(dictionary)
end_time = time()
constructor_duration = end_time - start_time
start_time = time()
for term in query_terms:
for _j, _k in zip(index.most_similar(term, topn=nonzero_limit), range(nonzero_limit)):
pass
end_time = time()
production_duration = end_time - start_time
return {
"dictionary_size": len(dictionary),
"mean_query_term_length": np.mean([len(term) for term in query_terms]),
"nonzero_limit": nonzero_limit,
"repetition": repetition,
"constructor_duration": constructor_duration,
"production_duration": production_duration, }
nonzero_limits = [1, 10, 100]
seed(RANDOM_SEED)
min_dictionary = sorted((len(dictionary), dictionary) for dictionary in dictionaries)[0][1]
query_terms = sample(list(min_dictionary.values()), 10)
configurations = product(dictionaries, nonzero_limits, [query_terms], repetitions)
results = benchmark_results(benchmark, configurations, "matrix_speed.builder_results.levenshtein")
df = pd.DataFrame(results)
df["processing_speed"] = df.dictionary_size * len(query_terms) / df.production_duration
df["production_speed"] = df.nonzero_limit * len(query_terms) / df.production_duration
df = df.groupby(["dictionary_size", "nonzero_limit"])
def display(df):
df["constructor_duration"] = [timedelta(0, duration) for duration in df["constructor_duration"]]
df["production_duration"] = [timedelta(0, duration) for duration in df["production_duration"]]
df["processing_speed"] = ["%.02f Kword pairs / s" % (speed / 1000) for speed in df["processing_speed"]]
df["production_speed"] = ["%.02f word pairs / s" % speed for speed in df["production_speed"]]
return df
display(df.mean()).loc[
[1000, 1000000, len(full_dictionary)], :].loc[
:, ["production_duration", "production_speed", "processing_speed"]]
display(df.apply(lambda x: (x - x.mean()).std())).loc[
[1000, 1000000, len(full_dictionary)], :].loc[
:, ["production_duration", "production_speed", "processing_speed"]]
def benchmark(configuration):
(model, dictionary), nonzero_limit, annoy_n_trees, query_terms, repetition = configuration
use_annoy = annoy_n_trees > 0
model.init_sims()
start_time = time()
if use_annoy:
annoy = AnnoyIndexer(model, annoy_n_trees)
kwargs = {"indexer": annoy}
else:
kwargs = {}
index = WordEmbeddingSimilarityIndex(model, kwargs=kwargs)
end_time = time()
constructor_duration = end_time - start_time
start_time = time()
for term in query_terms:
for _j, _k in zip(index.most_similar(term, topn=nonzero_limit), range(nonzero_limit)):
pass
end_time = time()
production_duration = end_time - start_time
return {
"dictionary_size": len(dictionary),
"mean_query_term_length": np.mean([len(term) for term in query_terms]),
"nonzero_limit": nonzero_limit,
"use_annoy": use_annoy,
"annoy_n_trees": annoy_n_trees,
"repetition": repetition,
"constructor_duration": constructor_duration,
"production_duration": production_duration, }
models = []
for dictionary in tqdm(dictionaries, desc="models"):
if dictionary == full_dictionary:
models.append(full_model)
continue
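# Otherwise, build a reduced copy of the full model restricted to this
# dictionary: copy the vocab entries, re-index them, and slice out the
# matching rows of the embedding matrix.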
model = full_model.__class__(full_model.vector_size)
model.vocab = {word: deepcopy(full_model.vocab[word]) for word in dictionary.values()}
model.index2entity = []
vector_indices = []
for index, word in enumerate(full_model.index2entity):
if word in model.vocab.keys():
model.index2entity.append(word)
model.vocab[word].index = len(vector_indices)
vector_indices.append(index)
model.vectors = full_model.vectors[vector_indices]
models.append(model)
annoy_n_trees = [0] + [10**k for k in range(3)]
seed(RANDOM_SEED)
query_terms = sample(list(min_dictionary.values()), 1000)
configurations = product(zip(models, dictionaries), nonzero_limits, annoy_n_trees, [query_terms], repetitions)
results = benchmark_results(benchmark, configurations, "matrix_speed.builder_results.wordembeddings")
df = pd.DataFrame(results)
df["processing_speed"] = df.dictionary_size * len(query_terms) / df.production_duration
df["production_speed"] = df.nonzero_limit * len(query_terms) / df.production_duration
df = df.groupby(["dictionary_size", "nonzero_limit", "annoy_n_trees"])
def display(df):
df["constructor_duration"] = [timedelta(0, duration) for duration in df["constructor_duration"]]
df["production_duration"] = [timedelta(0, duration) for duration in df["production_duration"]]
df["processing_speed"] = ["%.02f Kword pairs / s" % (speed / 1000) for speed in df["processing_speed"]]
df["production_speed"] = ["%.02f Kword pairs / s" % (speed / 1000) for speed in df["production_speed"]]
return df
display(df.mean()).loc[
[1000000, len(full_dictionary)], [1, 100], [0, 1, 100]].loc[
:, ["constructor_duration", "production_duration", "production_speed", "processing_speed"]]
display(df.apply(lambda x: (x - x.mean()).std())).loc[
[1000000, len(full_dictionary)], [1, 100], [0, 1, 100]].loc[
:, ["constructor_duration", "production_duration", "production_speed", "processing_speed"]]
full_model = api.load("word2vec-google-news-300")
try:
with open("matrix_speed.corpus", "rb") as file:
full_corpus = pickle.load(file)
except IOError:
original_corpus = list(tqdm(api.load("wiki-english-20171001"), desc="original_corpus", total=4924894))
seed(RANDOM_SEED)
full_corpus = [
simple_preprocess(u'\n'.join(article["section_texts"]))
for article in tqdm(sample(original_corpus, 10**5), desc="full_corpus", total=10**5)]
del original_corpus
with open("matrix_speed.corpus", "wb") as file:
pickle.dump(full_corpus, file)
try:
full_dictionary = Dictionary.load("matrix_speed.dictionary")
except IOError:
full_dictionary = Dictionary([[term] for term in full_model.vocab.keys()])
full_dictionary.save("matrix_speed.dictionary")
def benchmark(configuration):
(matrix, dictionary, nonzero_limit), corpus, normalized, repetition = configuration
corpus_size = len(corpus)
corpus = [dictionary.doc2bow(doc) for doc in corpus]
corpus = [vec for vec in corpus if len(vec) > 0]
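# Time all document-pair similarities; the loop below is quadratic in the
# number of non-empty documents (the 'speed' column later divides by this
# squared count).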
start_time = time()
for vec1 in corpus:
for vec2 in corpus:
matrix.inner_product(vec1, vec2, normalized=normalized)
end_time = time()
duration = end_time - start_time
return {
"dictionary_size": matrix.matrix.shape[0],
"matrix_nonzero": matrix.matrix.nnz,
"nonzero_limit": nonzero_limit,
"normalized": normalized,
"corpus_size": corpus_size,
"corpus_actual_size": len(corpus),
"corpus_nonzero": sum(len(vec) for vec in corpus),
"mean_document_length": np.mean([len(doc) for doc in corpus]),
"repetition": repetition,
"duration": duration, }
seed(RANDOM_SEED)
dictionary_sizes = [1000, 100000]
dictionaries = []
for size in tqdm(dictionary_sizes, desc="dictionaries"):
dictionary = Dictionary([sample(list(full_dictionary.values()), size)])
dictionaries.append(dictionary)
min_dictionary = sorted((len(dictionary), dictionary) for dictionary in dictionaries)[0][1]
corpus_sizes = [100, 1000]
corpora = []
for size in tqdm(corpus_sizes, desc="corpora"):
corpus = sample(full_corpus, size)
corpora.append(corpus)
models = []
for dictionary in tqdm(dictionaries, desc="models"):
if dictionary == full_dictionary:
models.append(full_model)
continue
model = full_model.__class__(full_model.vector_size)
model.vocab = {word: deepcopy(full_model.vocab[word]) for word in dictionary.values()}
model.index2entity = []
vector_indices = []
for index, word in enumerate(full_model.index2entity):
if word in model.vocab.keys():
model.index2entity.append(word)
model.vocab[word].index = len(vector_indices)
vector_indices.append(index)
model.vectors = full_model.vectors[vector_indices]
models.append(model)
nonzero_limits = [1, 10, 100]
matrices = []
for (model, dictionary), nonzero_limit in tqdm(
list(product(zip(models, dictionaries), nonzero_limits)), desc="matrices"):
annoy = AnnoyIndexer(model, 1)
index = WordEmbeddingSimilarityIndex(model, kwargs={"indexer": annoy})
matrix = SparseTermSimilarityMatrix(index, dictionary, nonzero_limit=nonzero_limit)
matrices.append((matrix, dictionary, nonzero_limit))
del annoy
normalization = (True, False)
repetitions = range(10)
configurations = product(matrices, corpora, normalization, repetitions)
results = benchmark_results(benchmark, configurations, "matrix_speed.inner-product_results.doc_doc")
df = pd.DataFrame(results)
df["speed"] = df.corpus_actual_size**2 / df.duration
del df["corpus_actual_size"]
df = df.groupby(["dictionary_size", "corpus_size", "nonzero_limit", "normalized"])
def display(df):
df["duration"] = [timedelta(0, duration) for duration in df["duration"]]
df["speed"] = ["%.02f Kdoc pairs / s" % (speed / 1000) for speed in df["speed"]]
return df
display(df.mean()).loc[
[1000, 100000], :, [1, 100], :].loc[
:, ["duration", "corpus_nonzero", "matrix_nonzero", "speed"]]
display(df.apply(lambda x: (x - x.mean()).std())).loc[
[1000, 100000], :, [1, 100], :].loc[
:, ["duration", "corpus_nonzero", "matrix_nonzero", "speed"]]
def benchmark(configuration):
(matrix, dictionary, nonzero_limit), corpus, normalized, repetition = configuration
corpus_size = len(corpus)
corpus = [dictionary.doc2bow(doc) for doc in corpus if doc]
start_time = time()
for vec in corpus:
matrix.inner_product(vec, corpus, normalized=normalized)
end_time = time()
duration = end_time - start_time
return {
"dictionary_size": matrix.matrix.shape[0],
"matrix_nonzero": matrix.matrix.nnz,
"nonzero_limit": nonzero_limit,
"normalized": normalized,
"corpus_size": corpus_size,
"corpus_actual_size": len(corpus),
"corpus_nonzero": sum(len(vec) for vec in corpus),
"mean_document_length": np.mean([len(doc) for doc in corpus]),
"repetition": repetition,
"duration": duration, }
configurations = product(matrices, corpora, normalization, repetitions)
results = benchmark_results(benchmark, configurations, "matrix_speed.inner-product_results.doc_corpus")
df = pd.DataFrame(results)
df["speed"] = df.corpus_actual_size**2 / df.duration
del df["corpus_actual_size"]
df = df.groupby(["dictionary_size", "corpus_size", "nonzero_limit", "normalized"])
def display(df):
df["duration"] = [timedelta(0, duration) for duration in df["duration"]]
df["speed"] = ["%.02f Kdoc pairs / s" % (speed / 1000) for speed in df["speed"]]
return df
display(df.mean()).loc[
[1000, 100000], :, [1, 100], :].loc[
:, ["duration", "corpus_nonzero", "matrix_nonzero", "speed"]]
display(df.apply(lambda x: (x - x.mean()).std())).loc[
[1000, 100000], :, [1, 100], :].loc[
:, ["duration", "corpus_nonzero", "matrix_nonzero", "speed"]]
def benchmark(configuration):
(matrix, dictionary, nonzero_limit), corpus, normalized, repetition = configuration
corpus_size = len(corpus)
corpus = [dictionary.doc2bow(doc) for doc in corpus]
corpus = [vec for vec in corpus if len(vec) > 0]
start_time = time()
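# A single corpus-vs-corpus call computes all pairwise similarities at
# the matrix level, rather than through a Python-level double loop as in
# the doc-vs-doc benchmark above.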
matrix.inner_product(corpus, corpus, normalized=normalized)
end_time = time()
duration = end_time - start_time
return {
"dictionary_size": matrix.matrix.shape[0],
"matrix_nonzero": matrix.matrix.nnz,
"nonzero_limit": nonzero_limit,
"normalized": normalized,
"corpus_size": corpus_size,
"corpus_actual_size": len(corpus),
"corpus_nonzero": sum(len(vec) for vec in corpus),
"mean_document_length": np.mean([len(doc) for doc in corpus]),
"repetition": repetition,
"duration": duration, }
nonzero_limits = [1000]
dense_matrices = []
for (model, dictionary), nonzero_limit in tqdm(
list(product(zip(models, dictionaries), nonzero_limits)), desc="matrices"):
annoy = AnnoyIndexer(model, 1)
index = WordEmbeddingSimilarityIndex(model, kwargs={"indexer": annoy})
matrix = SparseTermSimilarityMatrix(index, dictionary, nonzero_limit=nonzero_limit)
dense_matrices.append((matrix, dictionary, nonzero_limit))
del annoy
configurations = product(matrices + dense_matrices, corpora + [full_corpus], normalization, repetitions)
results = benchmark_results(benchmark, configurations, "matrix_speed.inner-product_results.corpus_corpus")
df = pd.DataFrame(results)
df["speed"] = df.corpus_actual_size**2 / df.duration
del df["corpus_actual_size"]
df = df.groupby(["dictionary_size", "corpus_size", "nonzero_limit", "normalized"])
def display(df):
df["duration"] = [timedelta(0, duration) for duration in df["duration"]]
df["speed"] = ["%.02f Kdoc pairs / s" % (speed / 1000) for speed in df["speed"]]
return df
display(df.mean()).loc[
[1000, 100000], :, [1, 10, 100, 1000], :].loc[
:, ["duration", "corpus_nonzero", "matrix_nonzero", "speed"]]
display(df.apply(lambda x: (x - x.mean()).std())).loc[
[1000, 100000], :, [1, 100], :].loc[
:, ["duration", "corpus_nonzero", "matrix_nonzero", "speed"]]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Language Translation
Step3: Explore the Data
Step6: Implement Preprocessing Function
Step8: Preprocess all the data and save it
Step10: Check Point
Step12: Check the Version of TensorFlow and Access to GPU
Step15: Build the Neural Network
Step18: Process Decoding Input
Step21: Encoding
Step24: Decoding - Training
Step27: Decoding - Inference
Step30: Build the Decoding Layer
Step33: Build the Neural Network
Step34: Neural Network Training
Step36: Build the Graph
Step39: Train
Step41: Save Parameters
Step43: Checkpoint
Step46: Sentence to Sequence
Step48: Translate
|
<ASSISTANT_TASK:>
Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import problem_unittests as tests
source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))
print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
Convert source and target text to proper word ids
:param source_text: String that contains all the source text.
:param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: A tuple of lists (source_id_text, target_id_text)
# implementation
source_sentences = source_text.split('\n')
source_id_text = [[source_vocab_to_int[word] for word in sentence.split()] for sentence in source_sentences]
target_sentences = target_text.split('\n')
target_id_text = [[target_vocab_to_int[word] for word in sentence.split()] for sentence in target_sentences]
#append the '<EOS>' at the end of sentence
int_EOS = target_vocab_to_int['<EOS>']
target_id_text = [int_sentence + [int_EOS] for int_sentence in target_id_text]
return source_id_text, target_id_text
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_text_to_ids(text_to_ids)
DON'T MODIFY ANYTHING IN THIS CELL
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
import helper
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) in [LooseVersion('1.0.0'), LooseVersion('1.0.1')], 'This project requires TensorFlow version 1.0 You are using {}'.format(tf.__version__)
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
def model_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate, keep probability)
#
inputs = tf.placeholder(tf.int32,shape=(None,None),name="input")
targets = tf.placeholder(tf.int32,shape=(None,None),name="targets")
learning_rate = tf.placeholder(tf.float32,name="learning_rate")
keep_prob = tf.placeholder(tf.float32,name="keep_prob")
return inputs, targets, learning_rate, keep_prob
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_inputs(model_inputs)
def process_decoding_input(target_data, target_vocab_to_int, batch_size):
Preprocess target data for decoding
:param target_data: Target Placeholder
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param batch_size: Batch Size
:return: Preprocessed target data
#
int_GO = target_vocab_to_int['<GO>']
target_data =tf.reshape(target_data,[batch_size,-1])
#get data removing last column.
target_data_no_ending = tf.strided_slice(target_data,[0,0],[batch_size,-1],[1,1])
#create rist column with GO ID
target_data_head = tf.fill([batch_size, 1], int_GO)
#concatenate two parts
decoding_input = tf.concat([target_data_head, target_data_no_ending], 1)
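# Example (batch_size=2): targets [[10, 11, <EOS>], [12, 13, <EOS>]]
# become [[<GO>, 10, 11], [<GO>, 12, 13]] -- the decoder input is the
# target shifted right by one step, with <GO> prepended.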
return decoding_input
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_process_decoding_input(process_decoding_input)
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob):
Create encoding layer
:param rnn_inputs: Inputs for the RNN
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param keep_prob: Dropout keep probability
:return: RNN state
#
single_cell = tf.contrib.rnn.LSTMCell(rnn_size)
single_cell = tf.contrib.rnn.DropoutWrapper(single_cell, output_keep_prob=keep_prob)
cell = tf.contrib.rnn.MultiRNNCell([single_cell] * num_layers)
outputs,final_state = tf.nn.dynamic_rnn(cell,rnn_inputs,dtype = tf.float32)
return final_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_encoding_layer(encoding_layer)
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope,
output_fn, keep_prob):
Create a decoding layer for training
:param encoder_state: Encoder State
:param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded input
:param sequence_length: Sequence Length
:param decoding_scope: TenorFlow Variable Scope for decoding
:param output_fn: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: Train Logits
#with tf.variable_scope(decoding_scope,reuse=True):
# Training Decoder
train_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_train(encoder_state)
train_pred, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(
dec_cell, train_decoder_fn, dec_embed_input, sequence_length, scope=decoding_scope)
# Apply output function
train_logits = output_fn(train_pred)
return train_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_train(decoding_layer_train)
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id,
maximum_length, vocab_size, decoding_scope, output_fn, keep_prob):
Create a decoding layer for inference
:param encoder_state: Encoder state
:param dec_cell: Decoder RNN Cell
:param dec_embeddings: Decoder embeddings
:param start_of_sequence_id: GO ID
:param end_of_sequence_id: EOS Id
:param maximum_length: The maximum allowed time steps to decode
:param vocab_size: Size of vocabulary
:param decoding_scope: TensorFlow Variable Scope for decoding
:param output_fn: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: Inference Logits
#
#with tf.variable_scope(decoding_scope,reuse=True):
# Inference Decoder
decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_inference(output_fn, encoder_state,dec_embeddings,
start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size)
inference_logits, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell, decoder_fn, scope=decoding_scope)
return inference_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_infer(decoding_layer_infer)
def decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size,
num_layers, target_vocab_to_int, keep_prob):
Create decoding layer
:param dec_embed_input: Decoder embedded input
:param dec_embeddings: Decoder embeddings
:param encoder_state: The encoded state
:param vocab_size: Size of vocabulary
:param sequence_length: Sequence Length
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param keep_prob: Dropout keep probability
:return: Tuple of (Training Logits, Inference Logits)
#TODO: maximum length for decoder inference??
with tf.variable_scope("decoding") as decoding_scope:
# decoder cell; should it be created inside the scope?
single_cell = tf.contrib.rnn.LSTMCell(rnn_size)
# add dropout here
single_cell = tf.contrib.rnn.DropoutWrapper(single_cell, output_keep_prob=keep_prob)
dec_cell = tf.contrib.rnn.MultiRNNCell([single_cell] * num_layers)
# Output Layer
output_fn = lambda x: tf.contrib.layers.fully_connected(x, vocab_size, None, scope=decoding_scope)
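# This projection lives in the 'decoding' scope so the training and
# inference decoders below share the same output weights.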
# get decoding train logits
train_logits = decoding_layer_train(encoder_state,dec_cell,dec_embed_input, sequence_length, decoding_scope,
output_fn, keep_prob)
# share variables
decoding_scope.reuse_variables()
infer_logits = decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, target_vocab_to_int['<GO>'],
target_vocab_to_int['<EOS>'],sequence_length, vocab_size, decoding_scope, output_fn, keep_prob)
return train_logits, infer_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer(decoding_layer)
def seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length, source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int):
Build the Sequence-to-Sequence part of the neural network
:param input_data: Input placeholder
:param target_data: Target placeholder
:param keep_prob: Dropout keep probability placeholder
:param batch_size: Batch Size
:param sequence_length: Sequence Length
:param source_vocab_size: Source vocabulary size
:param target_vocab_size: Target vocabulary size
:param enc_embedding_size: Encoder embedding size
:param dec_embedding_size: Decoder embedding size
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: Tuple of (Training Logits, Inference Logits)
#
# Apply embedding to the input data for the encoder
enc_embed_input = tf.contrib.layers.embed_sequence(input_data, source_vocab_size, enc_embedding_size)
# Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob).
enc_state = encoding_layer(enc_embed_input, rnn_size, num_layers, keep_prob)
# Process target data using your process_decoding_input(target_data, target_vocab_to_int, batch_size)
target_input = process_decoding_input(target_data, target_vocab_to_int, batch_size)
# Apply embedding to the target data for the decoder.
# Decoder Embedding: different with encode embedding, the dec_embeddings is required
dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, dec_embedding_size], minval=-1, maxval=1))
dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, target_input)
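# Keep the embedding matrix itself (not just the looked-up inputs):
# the inference decoder embeds its own predictions at every step.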
# Decode the encoded input using your decoding_layer
train_logits, infer_logits = decoding_layer(dec_embed_input, dec_embeddings, enc_state, target_vocab_size,
sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob)
return train_logits, infer_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_seq2seq_model(seq2seq_model)
# Number of Epochs
epochs = 5
# Batch Size
batch_size = 128
# RNN Size
rnn_size = 128
# Number of Layers
num_layers = 2
# Embedding Size
encoding_embedding_size = 256
decoding_embedding_size = 256
# Learning Rate
learning_rate = 0.002
# Dropout Keep Probability
keep_probability = 0.5
DON'T MODIFY ANYTHING IN THIS CELL
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_source_sentence_length = max([len(sentence) for sentence in source_int_text])
train_graph = tf.Graph()
with train_graph.as_default():
input_data, targets, lr, keep_prob = model_inputs()
sequence_length = tf.placeholder_with_default(max_source_sentence_length, None, name='sequence_length')
input_shape = tf.shape(input_data)
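# Reversing the source sequence (tf.reverse below) is a common seq2seq
# trick: it shortens the distance between the start of the source and
# the start of the target, which can ease optimization.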
train_logits, inference_logits = seq2seq_model(
tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, sequence_length, len(source_vocab_to_int), len(target_vocab_to_int),
encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int)
tf.identity(inference_logits, 'logits')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
train_logits,
targets,
tf.ones([input_shape[0], sequence_length]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
DON'T MODIFY ANYTHING IN THIS CELL
import time
def get_accuracy(target, logits):
Calculate accuracy
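# Pad the shorter of (target, logits) along the time axis so the
# element-wise comparison below is well-defined.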
max_seq = max(target.shape[1], logits.shape[1])
if max_seq - target.shape[1]:
target = np.pad(
target,
[(0,0),(0,max_seq - target.shape[1])],
'constant')
if max_seq - logits.shape[1]:
logits = np.pad(
logits,
[(0,0),(0,max_seq - logits.shape[1]), (0,0)],
'constant')
return np.mean(np.equal(target, np.argmax(logits, 2)))
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = helper.pad_sentence_batch(source_int_text[:batch_size])
valid_target = helper.pad_sentence_batch(target_int_text[:batch_size])
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch) in enumerate(
helper.batch_data(train_source, train_target, batch_size)):
start_time = time.time()
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch,
targets: target_batch,
lr: learning_rate,
sequence_length: target_batch.shape[1],
keep_prob: keep_probability})
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch, keep_prob: 1.0})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_source, keep_prob: 1.0})
train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(np.array(valid_target), batch_valid_logits)
end_time = time.time()
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.3f}, Validation Accuracy: {:>6.3f}, Loss: {:>6.3f}'
.format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved')
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params(save_path)
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
def sentence_to_seq(sentence, vocab_to_int):
Convert a sentence to a sequence of ids
:param sentence: String
:param vocab_to_int: Dictionary to go from the words to an id
:return: List of word ids
#convert letters to lower-case
sentence_lower = sentence.lower()
sentence_int = [vocab_to_int.get(word,vocab_to_int['<UNK>']) for word in sentence_lower.split()]
return sentence_int
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_sentence_to_seq(sentence_to_seq)
translate_sentence = 'he saw a old yellow truck .'
DON'T MODIFY ANYTHING IN THIS CELL
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path + '.meta')
loader.restore(sess, load_path)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('logits:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
translate_logits = sess.run(logits, {input_data: [translate_sentence], keep_prob: 1.0})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in translate_sentence]))
print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in np.argmax(translate_logits, 1)]))
print(' French Words: {}'.format([target_int_to_vocab[i] for i in np.argmax(translate_logits, 1)]))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First we need to create an instance of Share. Using that instance we will get prices, volumes, ratios and all other company information
Step2: Price Information
Step3: Moving averages. Get a peek at what prices have been like in the past.
Step4: Volume Information
Step5: Ratios are important for fundamental analysis. The price-to-earnings (PE) ratio is the most important of them all; value investors like Warren Buffett use it in their analysis.
Step6: Book Value
Step7: Dividends
Step8: Historical Prices
Step9: More on historical prices coming soon
|
<ASSISTANT_TASK:>
Python Code:
from yahoo_finance import Share
import numpy as np
#for this example I will use Google's stock
#create an instance of Share
google = Share('GOOG')
#now that an instance of Share is created (google), we will call its functions to get the prices
#date and time of the trade
date = google.get_trade_datetime()
#opening price
opening_price = google.get_open()
#Price right now (Yahoo finance is delayed by 15 mins)
current_price = google.get_price()
#Day's high and low prices
day_high = google.get_days_high()
day_low = google.get_days_low()
#price changes from opening price
price_change = google.get_change()
print "trading date: ", date
print "current price: ", current_price
print "opening price: $" , opening_price
print "day high: $", day_high
print "day low: $", day_low
print "print price change: $", price_change
#Refresh to get a new price
# Note that after the market closes @ 4PM EST, the price will stay the same
google.refresh()
date = google.get_trade_datetime()
current_price = google.get_price()
price_change = google.get_change()
print "\n########## After refreshing ####################"
print "trading date: ", date
print "current price: ", current_price
print "opening price: $" , opening_price
print "print price change: $", price_change
#If current prices are higher than 50 or 200 days moving average, that means prices are going up
#200 days moving average
th_moving_avg = google.get_200day_moving_avg()
#50 days moving average
fifty_moving_avg = google.get_50day_moving_avg()
print "200 days moving average: $", th_moving_avg
print "50 days moving average: $", fifty_moving_avg
#Volume speaks (If more people are trading, there's gotta be something good or bad happening)
volume = google.get_volume()
#compare this days volume with average volume
average_daily_volume = google.get_avg_daily_volume()
print "Today's volume: ", volume
print "Average volume: ", average_daily_volume
#PE ratio ---> price per share divided by earnings per share
#Lower PE the better
PE = google.get_price_earnings_ratio()
#PEG ratio ---> PE ratio divided by the expected earnings growth rate
PEG = google.get_price_earnings_growth_ratio()
print "Price to earning (PE) ratio : ", PE
print "Price earning to growth (PEG) ratio: ", PEG
#book value -> what the numbers say this company is worth
print "book value", google.get_book_value()
div_per_share = google.get_dividend_share()
div_yield = google.get_dividend_yield()
#for some reason Google's dividend information was not available
print "dividend per share: $", div_per_share
print "divident yield: ", div_yield
historical = google.get_historical('2015-07-28', '2015-09-08')
print len(historical)
#To get the closing price for first day
print historical[0]['Close']
#opening price for first day
print historical[0]['Open']
#to get all opening prices together
opening = [] # a Python list (dynamic array)
for i in range(len(historical)):
x = historical[i]['Open']
opening.append(x)
closing = [] # a Python list (dynamic array)
for i in range(len(historical)):
x = historical[i]['Close']
closing.append(x)
x_axis = np.arange(1, len(historical)+1)
#print opening
#print closing
#print x_axis
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(x_axis,opening, 'b', x_axis, closing, 'r')
plt.xlabel('Day')
plt.ylabel('Price ($)')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load data
Step2: Timing Convolutions
Step3: The rotational convolution in eniric is ~10x faster than PyAstronomy's fast implementation (and orders of magnitude faster than its slow one).
Step4: Eniric's resolution convolution is around 500x slower than its rotational convolution (before the result is cached).
Step5: The slow PyAstronomy implementation and eniric give identical results (within 1e-13%), except for edge effects.
|
<ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
import numpy as np
import PyAstronomy.pyasl as pyasl
import eniric
from eniric import config
from eniric.broaden import rotational_convolution, resolution_convolution
from eniric.utilities import band_limits, load_aces_spectrum, wav_selector
from scripts.phoenix_precision import convolve_and_resample
# config.cache["location"] = None # Disable caching for these tests
config.cache["location"] = ".joblib" # Enable caching
wav1, flux1 = load_aces_spectrum([3900, 4.5, 0.0, 0])
# wav2, flux2 = load_aces_spectrum([2600, 4.5, 0.0, 0])
wav1, flux1 = wav_selector(wav1, flux1, *band_limits("K"))
# wav2, flux2 = wav_selector(wav2, flux2, *band_limits("K"))
# PyAstronomy requires an evenly spaced wavelength grid (eniric does not)
wav = np.linspace(wav1[0], wav1[-1], len(wav1))
flux1 = np.interp(wav, wav1, flux1)
#flux2 = np.interp(wav, wav2, flux2)
# Convolution settings
epsilon = 0.6
vsini = 10.0
R = 40000
%%time
rot_fast = pyasl.fastRotBroad(wav, flux1, epsilon, vsini)
## Wall time: 15.2 ms
%%time
rot_slow = pyasl.rotBroad(wav, flux1, epsilon, vsini)
## Wall time: 36 s
# Convolution settings
epsilon = 0.6
vsini = 10.0
R = 40000
%%time
# After caching
eniric_rot = rotational_convolution(wav, wav, flux1, vsini, epsilon=epsilon)
## Wall time: 4.2 ms
%%time
res_fast = pyasl.instrBroadGaussFast(wav, flux1, R, maxsig=5)
## Wall time: 19.2 ms
%%time
# Before caching
eniric_res = resolution_convolution(
wavelength=wav,
extended_wav=wav,
extended_flux=flux1,
R=R,
fwhm_lim=5,
num_procs=4,
normalize=True,
)
## Wall time: 3.07 s
%%time
# Same calculation with cached result.
eniric_res = resolution_convolution(
wavelength=wav,
extended_wav=wav,
extended_flux=flux1,
R=R,
fwhm_lim=5,
normalize=True,
)
## Wall time: 8.9 ms
plt.plot(wav, flux1, label="Original Flux")
plt.plot(wav[100:-100], eniric_res[100:-100], "-.", label="Eniric")
plt.plot(wav[100:-100], res_fast[100:-100], "--", label="PyAstronomy Fast")
plt.xlim([2.116, 2.118])
plt.xlabel("wavelength")
plt.title("Resolution convolution R={}".format(R))
plt.legend()
plt.show()
plt.plot(wav, flux1, label="Original")
plt.plot(wav, rot_fast, ":", label="PyAstronomy Fast")
plt.plot(wav, rot_slow, "--", label="PyAstronomy Slow")
plt.plot(wav, eniric_rot, "-.", label="Eniric")
plt.xlabel("Wavelength")
plt.title("Rotational Convolution vsini={}".format(vsini))
plt.xlim((2.116, 2.118))
plt.legend()
plt.show()
plt.plot(
wav[100:-100],
(eniric_rot[100:-100] - rot_fast[100:-100]) / eniric_rot[100:-100],
label="Eniric - PyA Fast",
)
plt.plot(
wav[100:-100],
(eniric_rot[100:-100] - rot_slow[100:-100]) / eniric_rot[100:-100],
"--",
label="Eniric - PyA Slow",
)
plt.xlabel("Wavelength")
plt.ylabel("Fractional difference")
plt.title("Rotational Convolution Differenes")
# plt.xlim((2.3, 2.31))
plt.legend()
plt.show()
plt.plot(
wav[50:-50],
(eniric_rot[50:-50] - rot_slow[50:-50]) / eniric_rot[50:-50],
"--",
label="Eniric - PyA Slow",
)
plt.xlabel("Wavelength")
plt.ylabel("Fractional difference")
plt.title("Rotational Convolution Differenes")
plt.legend()
plt.show()
assert np.allclose(eniric_rot[50:-50], rot_slow[50:-50])
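# np.allclose defaults to rtol=1e-05 and atol=1e-08, far looser than the
# ~1e-13 fractional differences seen above, so this passes comfortably
# away from the clipped edges.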
plt.plot(
wav[100:-100],
(eniric_res[100:-100] - res_fast[100:-100]) / eniric_res[100:-100],
label="(Eniric-PyA Fast)/Eniric",
)
plt.xlabel("Wavelength")
plt.ylabel("Fractional difference")
plt.title("Resolution Convolution Differenes, R={}".format(R))
# plt.xlim((2.3, 2.31))
plt.legend()
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Obtain data for normalizing labels and define a function to denormalize labels
Step2: Define functions to obtain test data
Step3: Load test data and model
Step4: Define a function that predicts on a test set by using batches
Step5: Predict on test set
Step6: Show residuals on test set label predictions
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import keras
import h5py
import time
from matplotlib import gridspec
datadir = ""
mean_and_std = np.load(datadir + 'mean_and_std.npy')
mean_labels = mean_and_std[0]
std_labels = mean_and_std[1]
num_labels = mean_and_std.shape[1]
def denormalize(lb_norm):
return ((lb_norm*std_labels) + mean_labels)
def get_data(filename):
f = h5py.File(datadir + filename, 'r')
spectra_array = f['spectrum']
ap_ids = f['Ap_ID'][:]
labels_array = np.column_stack((f['TEFF'][:],f['LOGG'][:],f['FE_H'][:]))
snr_array = f['combined_snr'][:]
return (ap_ids, snr_array, spectra_array, labels_array)
test_ap_ids, test_snr, test_spectra, test_labels = get_data('test_data.h5')
print('Test set contains ' + str(len(test_spectra))+' stars')
model = keras.models.load_model(datadir + 'starnet_cnn.h5')
def batch_predictions(model, spectra, batch_size, denormalize):
predictions = np.zeros((len(spectra),3))
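# Predict in fixed-size batches to bound memory use; any remainder is
# handled as a smaller final batch below.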
num_batches = int(len(spectra)/batch_size)
for i in range(num_batches):
inputs = spectra[i*batch_size:(i+1)*batch_size]
# Mask any nan values
indices_nan = np.where(np.isnan(inputs))
inputs[indices_nan]=0.
predictions[i*batch_size:(i+1)*batch_size] = denormalize(model.predict(inputs))
num_remainder = int(len(spectra)%batch_size)
if num_remainder>0:
inputs = spectra[-num_remainder:]
indices_nan = np.where(np.isnan(inputs))
inputs[indices_nan]=0.
predictions[-num_remainder:] = denormalize(model.predict(inputs))
return predictions
time1 = time.time()
test_predictions = batch_predictions(model, test_spectra, 500, denormalize)
print("{0:.2f}".format(time.time()-time1)+' seconds to make '+str(len(test_spectra))+' predictions')
# Some plotting variables for aesthetics
%matplotlib inline
# Label names
label_names = ['$T_{\mathrm{eff}}$', '$\log(g)$', '$[Fe/H]$']
# Pipeline names
x_lab = 'ASPCAP'
y_lab = 'StarNet'
plt.rcParams['axes.facecolor']='white'
sns.set_style("ticks")
plt.rcParams['axes.grid']=True
plt.rcParams['grid.color']='gray'
plt.rcParams['grid.alpha']='0.4'
plt.rc('text', usetex=True)
plt.rc('font', family='serif')
all_targets = test_labels
all_pred = test_predictions
z = test_snr
z[z>250]=250
resid = all_pred - all_targets
# Overplot high S/N
order = (z).reshape(z.shape[0],).argsort()
all_targets = all_targets[order]
resid = resid[order]
z = z[order,0]
bias = np.median(resid, axis=0)
scatter = np.std(resid, axis=0)
indices_a = np.where(z>=150)
indices_b = np.where(z<100)
resid_a = resid[indices_a,:]
resid_b = resid[indices_b,:]
cmap = sns.cubehelix_palette(8, start=2.8, rot=.1, dark=0, light=.95, as_cmap=True)
lims = [[(3800,5800),(-0.50,4.50),(-2.5,0.6)],[(-1000,1000),(-2.0,2.0),(-1.,1.)]]
ditribution_lims = [(-200,200),(-0.4,0.4),(-0.2,0.2)]
fig = plt.figure(figsize=(38, 30))
gs = gridspec.GridSpec(3, 2, width_ratios=[4., 1])
for i in range(num_labels):
ax0 = plt.subplot(gs[i,0])
points = ax0.scatter(all_targets[:,i], resid[:,i], c=z, s=100, cmap=cmap)
ax0.set_xlabel(x_lab + ' ' + label_names[i], fontsize=70)
ax0.set_ylabel(r'$\Delta$ %s ' % (label_names[i]) +
'\n' + r'(%s - %s)' % (y_lab, x_lab), fontsize=70)
ax0.tick_params(labelsize=50, width=1, length=10)
ax0.set_xlim(lims[0][i])
ax0.set_ylim(lims[1][i])
ax0.plot([lims[0][i][0],lims[0][i][1]], [0,0], 'k--', lw=2)
xmin, xmax = ditribution_lims[i]
y_a = resid_a[0,:,i][(resid_a[0,:,i]>=xmin)&(resid_a[0,:,i]<=xmax)]
y_b = resid_b[0,:,i][(resid_b[0,:,i]>=xmin)&(resid_b[0,:,i]<=xmax)]
ax1 = plt.subplot(gs[i,1])
a = sns.distplot(y_a, vertical=True,hist=False, rug=False, ax=ax1,kde_kws={"color": cmap(200), "lw": 10})
b = sns.distplot(y_b,vertical=True,hist=False, rug=False, ax=ax1,kde_kws={"color": cmap(100), "lw": 10})
a.set_ylim(ditribution_lims[i])
b.set_ylim(ditribution_lims[i])
ax1.tick_params(
axis='x',
which='both',
bottom='off',
top='off',
labelbottom='off',width=1,length=10)
ax1.tick_params(
axis='y',
which='both',
left='off',
right='on',
labelleft='off',
labelright='on',
labelsize=50,width=1,length=10)
bbox_props = dict(boxstyle="square,pad=0.3", fc="w", ec="k", lw=3)
if i==0:
plt.figtext(0.185, (1-((i*0.332)+0.24)),
'$\widetilde{m}$='+'{0:.2f}'.format(bias[i])+' $s$='+'{0:.2f}'.format(scatter[i]),
size=70, bbox=bbox_props)
else:
plt.figtext(0.185, (1-((i*0.332)+0.24)),
'$\widetilde{m}$='+'{0:.3f}'.format(bias[i])+' $s$='+'{0:.3f}'.format(scatter[i]),
size=70, bbox=bbox_props)
cbar_ax = fig.add_axes([0.9, 0.1, 0.02, 0.83])
fig.colorbar(points,cax=cbar_ax)
cbar = fig.colorbar(points, cax=cbar_ax, extend='neither', spacing='proportional', orientation='vertical', format="%.0f")
cbar.set_label('SNR', size=65)
cbar.ax.tick_params(labelsize=50,width=1,length=10)
cbar_ax.set_yticklabels(['','100','','150','','200','','$>$250'])
plt.tight_layout()
fig.subplots_adjust(right=0.8)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set up inline matplotlib
Step2: Import Game Modules From a Given Path
Step3: Setting Up Game Parameters
Step4: seed PRNG
Step5: Set up the state of the system
Step6: User-defined states and parameters can go in the following cell
Step7: Plot the experiment done above
Step8: Skew Uniqueness Tendency Driver
Step10: Initiate State
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
from pandas import Series, DataFrame
import matplotlib.pyplot as plt
import matplotlib.animation as animation
import matplotlib.image as mpimg
from matplotlib import rcParams
import seaborn as sb
%matplotlib inline
rcParams['figure.figsize'] = 5, 4
sb.set_style('whitegrid')
import sys
# search path for modules
sys.path.append('/Users/hn/Documents/GitHub/PyOpinionGame/')
import opiniongame.config as og_cfg
import opiniongame.IO as og_io
import opiniongame.coupling as og_coupling
import opiniongame.state as og_state
import opiniongame.adjacency as og_adj
import opiniongame.selection as og_select
import opiniongame.potentials as og_pot
import opiniongame.core as og_core
import opiniongame.stopping as og_stop
import opiniongame.opinions as og_opinions
config = og_cfg.staticParameters()
path = '/Users/hn/Documents/GitHub/PyOpinionGame/' # path to the 'staticParameters.cfg'
staticParameters = path + 'staticParameters.cfg'
config.readFromFile(staticParameters) # Read static parameters
config.threshold = 0.0001
config.Kthreshold = 0.00001
config.startingseed = 10
config.learning_rate = 0.1
tau = 0.62 #tip of the tent potential function
config.printOut()
print("SEEDING PRNG: "+str(config.startingseed))
np.random.seed(config.startingseed)
# These are the default matrices for the state of the system:
# If you want to change them, you can generate a new one in the following cell
default_weights = og_coupling.weights_no_coupling(config.popSize, config.ntopics)
default_initialOpinions = og_opinions.initialize_opinions(config.popSize, config.ntopics)
default_adj = og_adj.make_adj(config.popSize, 'full')
state = og_state.WorldState(adj=default_adj,
couplingWeights=default_weights,
initialOpinions=default_initialOpinions,
initialHistorySize=100,
historyGrowthScale=2)
state.validate()
numberOfCommunities = 3
communityPopSize = 25
config.popSize = numberOfCommunities * communityPopSize
# List of upper bound probability of interaction between communities
uppBound_list = [0.0]
# List of uniqueness Strength parameter
individStrength = [0.0]
config.learning_rate = 0.1
config.iterationMax = 10000
tau = 0.62
config.printOut()
#
# functions for use by the simulation engine
#
ufuncs = og_cfg.UserFunctions(og_select.PickTwoWeighted,
og_stop.iterationStop,
og_pot.createTent(tau))
# Number of different initial opinions,
# i.e. number of different games with different initials.
noInitials = np.arange(1)
noGames = np.arange(1) # Number of different game orders.
# Run experiments with different adjacencies, different initials, and different order of games.
for uniqForce in individStrength:
config.uniqstrength = uniqForce
for upperBound in uppBound_list:
# Generate different adjacency matrix with different prob. of interaction
# between different communities
state.adj = og_adj.CommunitiesMatrix(communityPopSize, numberOfCommunities, upperBound)
for countInitials in noInitials:
# Pick three communities with similar opinions to begin with!
state.initialOpinions = np.zeros((config.popSize, 1))
state.initialOpinions[0:25] = np.random.uniform(low=0.0, high=.25, size=(25,1))
state.initialOpinions[25:50] = np.random.uniform(low=0.41, high=.58, size=(25,1))
state.initialOpinions[50:75] = np.random.uniform(low=0.74, high= 1, size=(25,1))
state.couplingWeights = og_coupling.weights_no_coupling(config.popSize, config.ntopics)
all_experiments_history = {}
print "(uniqForce, upperBound) = ({}, {})".format(uniqForce, upperBound)
print "countInitials = {}".format(countInitials + 1)
for gameOrders in noGames:
#cProfile.run('og_core.run_until_convergence(config, state, ufuncs)')
state = og_core.run_until_convergence(config, state, ufuncs)
print("One Experiment Done" , "gameOrders = " , gameOrders+1)
all_experiments_history[ 'experiment' + str(gameOrders+1)] = state.history[0:state.nextHistoryIndex,:,:]
og_io.saveMatrix('uB' + str(upperBound) + '*uS' + str(config.uniqstrength) +
'*initCount' + str(countInitials+21) + '.mat', all_experiments_history)
print all_experiments_history.keys()
print all_experiments_history['experiment1'].shape
time, population_size, no_of_topics = all_experiments_history['experiment1'].shape
evolution = all_experiments_history['experiment1'].reshape(time, population_size)
fig = plt.figure()
plt.plot(evolution)
plt.xlabel('Time')
plt.ylabel('Opinionds')
plt.title('Evolution of Opinions')
fig.set_size_inches(10,5)
plt.show()
state = og_state.WorldState(adj=default_adj,
couplingWeights=default_weights,
initialOpinions=default_initialOpinions,
initialHistorySize=100,
historyGrowthScale=2)
state.validate()
#
# load configuration
#
config = og_cfg.staticParameters()
config.readFromFile('staticParameters.cfg')
config.threshold = 0.01
config.printOut()
#
# seed PRNG: must do this before any random numbers are
# ever sampled during default generation
#
print(("SEEDING PRNG: "+str(config.startingseed)))
np.random.seed(config.startingseed)
# These are the default matrices for the state of the system:
# If you want to change them, you can generate a new one in the following cell
default_weights = og_coupling.weights_no_coupling(config.popSize, config.ntopics)
default_initialOpinions = og_opinions.initialize_opinions(config.popSize, config.ntopics)
default_adj = og_adj.make_adj(config.popSize, 'full')
state = og_state.WorldState(adj=default_adj,
couplingWeights=default_weights,
initialOpinions=default_initialOpinions,
initialHistorySize=100,
historyGrowthScale=2)
state.validate()
#
# run
#
numberOfCommunities = 3
communityPopSize = 25
config.popSize = numberOfCommunities * communityPopSize
# List of upper bound probability of interaction between communities
uppBound_list = np.array([.001, 0.004, 0.007, 0.01, 0.013, 0.016, 0.019])
#
# List of uniqueness Strength parameter
#
individStrength = np.arange(0.00001, 0.000251, 0.00006)
individStrength = np.append(0, individStrength)
individStrength = np.array([0.0])
skewstrength = 2.0
tau = 0.62
config.iterationMax = 30000
config.printOut()
#
# functions for use by the simulation engine
#
ufuncs = og_cfg.UserFunctions(og_select.PickTwoWeighted,
og_stop.iterationStop,
og_pot.createTent(tau))
noInitials = np.arange(1) # Number of different initial opinions.
noGames = np.arange(1) # Number of different game orders.
# Run experiments with different adjacencies, different initials, and different order of games.
for uniqForce in individStrength:
config.uniqstrength = uniqForce
for upperBound in uppBound_list:
# Generate different adjacency matrix with different prob. of interaction
# between different communities
state.adj = og_adj.CommunitiesMatrix(communityPopSize, numberOfCommunities, upperBound)
print"(upperBound, uniqForce) = (", upperBound, "," , uniqForce , ")"
for countInitials in noInitials:
# Pick three communities with similar opinions (stable state) to begin with!
state.initialOpinions = np.zeros((config.popSize, 1))
state.initialOpinions[0:25] = np.random.uniform(low=0.08, high=.1, size=(25,1))
state.initialOpinions[25:50] = np.random.uniform(low=0.49, high=.51, size=(25,1))
state.initialOpinions[50:75] = np.random.uniform(low=0.9, high= .92, size=(25,1))
state.couplingWeights = og_coupling.weights_no_coupling(config.popSize, config.ntopics)
all_experiments_history = {}
print "countInitials=", countInitials + 1
for gameOrders in noGames:
#cProfile.run('og_core.run_until_convergence(config, state, ufuncs)')
state = og_core.run_until_convergence(config, state, ufuncs)
state.history = state.history[0:state.nextHistoryIndex,:,:]
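# Downsample the history: keep one snapshot every popSize game steps.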
idx_IN_columns = [i for i in xrange(np.shape(state.history)[0]) if (i % (config.popSize)) == 0]
state.history = state.history[idx_IN_columns,:,:]
all_experiments_history[ 'experiment' + str(gameOrders+1)] = state.history
og_io.saveMatrix('uB' + str(upperBound) + '*uS' + str(config.uniqstrength) +
'*initCount' + str(countInitials+1) + '.mat', all_experiments_history)
all_experiments_history.keys()
time, population_size, no_of_topics = all_experiments_history['experiment1'].shape
evolution = all_experiments_history['experiment1'].reshape(time, population_size)
fig = plt.figure()
plt.plot(evolution)
plt.xlabel('Time')
plt.ylabel('Opinionds')
plt.title('Evolution of Opinions of 3 communities')
fig.set_size_inches(10, 5)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Explore the Dataset
Step2: Plotting Using Seaborn and PyPlot
Step3: Pairplot
Step4: Radial Visualization
Step5: Vertical Barchart
Step6: Horizontal Barchart
Step7: Histogram
Step8: Andrews Curves
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
porsche = pd.read_csv("PorschePrice.csv")
porsche.shape
porsche.head(5)
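# The CSV's exported row index arrives as an unnamed column; rename it.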
porsche = porsche.rename(columns = {'Unnamed: 0':'Number'})
porsche.head(5)
porsche.describe()
import seaborn as sns
import matplotlib.pyplot as plt
sns.pairplot(porsche[["Price", "Age", "Mileage"]])
plt.show()
from pandas.tools.plotting import radviz
plt.figure()
radviz(porsche, 'Age')
plt.show()
plt.figure();
porsche.plot(kind = 'bar', stacked = True);
plt.show()
porsche.plot(kind='barh', stacked=True);
plt.show()
plt.figure();
porsche['Mileage'].diff().hist(bins = 7)
plt.show()
from pandas.tools.plotting import andrews_curves
plt.figure()
andrews_curves(porsche, 'Age', colormap = 'autumn')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: ~mne.Annotations in MNE-Python are a way of storing short strings of information about temporal spans of a ~mne.io.Raw object.
Step2: Notice that orig_time is None, because we haven't specified it. In that case, when the annotations are added to the ~mne.io.Raw object, orig_time is set to match the recording's measurement date (the comparison below prints True).
Step3: Since the example data comes from a Neuromag system that starts counting sample numbers before the recording begins, the stored annotation onsets are shifted by the time of the first sample (first_samp / sfreq).
Step4: If you know that your annotation onsets are relative to some other time, you can set orig_time accordingly when constructing the ~mne.Annotations object.
Step5: Note: if your annotations fall outside the range of data times in the ~mne.io.Raw object, the out-of-range annotations are omitted (MNE warns when this happens).
Step6: The three annotations appear as differently colored rectangles because they have different description values; each unique description gets its own color.
Step7: The drop-down menu on the left determines which existing label will be applied to the next annotation span you draw in annotation mode.
Step8: Notice that it is possible to create overlapping annotations, even when they share the same description.
Step9: You can also iterate over the annotations within an ~mne.Annotations object; each element behaves like a dict with onset, duration, and description keys.
Step10: Note that iterating, indexing and slicing ~mne.Annotations all return copies of the underlying data, so assigning to a field of a returned element does not change the original (modifying the onset array in place does).
Step11: Reading and writing Annotations to/from a file
|
<ASSISTANT_TASK:>
Python Code:
import os
from datetime import timedelta
import mne
sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(sample_data_raw_file, verbose=False)
raw.crop(tmax=60).load_data()
my_annot = mne.Annotations(onset=[3, 5, 7], # in seconds
duration=[1, 0.5, 0.25], # in seconds, too
description=['AAA', 'BBB', 'CCC'])
print(my_annot)
raw.set_annotations(my_annot)
print(raw.annotations)
# the measurement date is stored in raw.info; check that it matches orig_time:
meas_date = raw.info['meas_date']
orig_time = raw.annotations.orig_time
print(meas_date == orig_time)
time_of_first_sample = raw.first_samp / raw.info['sfreq']
print(my_annot.onset + time_of_first_sample)
print(raw.annotations.onset)
time_format = '%Y-%m-%d %H:%M:%S.%f'
new_orig_time = (meas_date + timedelta(seconds=50)).strftime(time_format)
print(new_orig_time)
later_annot = mne.Annotations(onset=[3, 5, 7],
duration=[1, 0.5, 0.25],
description=['DDD', 'EEE', 'FFF'],
orig_time=new_orig_time)
raw2 = raw.copy().set_annotations(later_annot)
print(later_annot.onset)
print(raw2.annotations.onset)
fig = raw.plot(start=2, duration=6)
fig = raw.plot(start=2, duration=6)
fig.fake_keypress('a')
new_annot = mne.Annotations(onset=3.75, duration=0.75, description='AAA')
raw.set_annotations(my_annot + new_annot)
raw.plot(start=2, duration=6)
print(raw.annotations[0]) # just the first annotation
print(raw.annotations[:2]) # the first two annotations
print(raw.annotations[(3, 2)]) # the fourth and third annotations
for ann in raw.annotations:
descr = ann['description']
start = ann['onset']
end = ann['onset'] + ann['duration']
print("'{}' goes from {} to {}".format(descr, start, end))
# later_annot WILL be changed, because we're modifying the first element of
# later_annot.onset directly:
later_annot.onset[0] = 99
# later_annot WILL NOT be changed, because later_annot[0] returns a copy
# before the 'onset' field is changed:
later_annot[0]['onset'] = 77
print(later_annot[0]['onset'])
raw.annotations.save('saved-annotations.csv', overwrite=True)
annot_from_file = mne.read_annotations('saved-annotations.csv')
print(annot_from_file)
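# Annotations can be written to and read from other formats as well; .txt and
# .fif filenames are also accepted by save() / mne.read_annotations(). For
# example (illustrative only, the filename is arbitrary):
# raw.annotations.save('saved-annotations.fif', overwrite=True)
# print(mne.read_annotations('saved-annotations.fif'))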
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Open the SPI rack connection and unlock the controller. This is necessary after bootup of the controller module. If not unlocked, no communication with the modules can take place. The virtual COM port baud rate is irrelevant as it doesn't change the actual speed. Timeout can be changed, but 1 second is a good value.
Step2: Read back the version of the microcontroller software. This should return 1.6 or higher to be able to use the B2b properly. Also read the temperature and the battery voltages through the C1b; this way we verify that the connection with the SPI Rack is working.
Step3: Create a new B2b module object at the correct module address using the SPI object. If we set calibrate=True, the module will run a calibration routine at initialisation. This takes about 2 seconds, during which the python code will stall all operations.
Step4: FFT
Step5: To get the B2b module to do anything, it needs to be triggered. There are three ways of triggering the module
Step6: We'll measure on channel one (zero in software), so we need to enable it. For the FFT we'll take 10000 measurements with filter setting 0 on the sinc5 filter. This will give a datarate of 50 kSPS and a resolution of 16.8 bit. For details on all the filter settings, see the excel sheet for the D4_filter.
Step7: Measurement and plotting
Step8: We use the periodogram from scipy, which will give the power spectral density. Before we do that we have to take the gain of the M1f module into account. It has a gain of 10 MV/A and a postgain of 10.
Step9: D5a sweep
Step10: To get nice equidistant voltage steps, we will use integer multiples of the smallest step the DAC can do in the current range setting.
Step11: We now have to tell the B2b module to look out for the controller trigger, with an amount equal to the sweep length. Additionally we will also set a holdoff time of 1ms. This is to compensate for any delays through the circuit (due to line length and/or filters).
Step12: We will keep the filter at sinc5, but the rate at 10
Step13: Here we see how we can synchronise the updating of the DAC with the triggering of the B2b module. Before we set the next output voltage, we arm the spi_rack controller. This means that it will send a trigger on the next SPI command it receives
Step14: Compensating for the gain of the M1 (a factor 10e6), we get the IV curve for our 'sample'. In this case the sample simulator was set to a series resistance of 10 MOhm with all capacitors at minimum value.
Step15: When done with this example, it is recommended to close the SPI Rack connection. This will allow other measurement scripts to access the device.
|
<ASSISTANT_TASK:>
Python Code:
from spirack import SPI_rack, B2b_module, D5a_module, D4b_module
import logging
from time import sleep
from tqdm import tqdm_notebook
import numpy as np
from scipy import signal
from plotly.offline import init_notebook_mode, iplot, plot
import plotly.graph_objs as go
init_notebook_mode(connected=True)
logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)
COM_port = 'COM4' # COM port of the SPI rack
COM_speed = 1e6 # Baud rate, not of much importance
timeout = 1 # Timeout value in seconds
spi_rack = SPI_rack(COM_port, COM_speed, timeout)
spi_rack.unlock() # Unlock the controller to be able to send data to the rack
print('Version: ' + spi_rack.get_firmware_version())
print('Temperature: {:.2f} C'.format(spi_rack.get_temperature()))
battery_v = spi_rack.get_battery()
print('Battery: {:.3f}V, {:.3f}V'.format(battery_v[0], battery_v[1]))
B2b = B2b_module(spi_rack, module=4, calibrate=False)
print("Firmware version: {}".format(B2b.get_firmware_version()))
B2b.set_clock_source('internal')
print("Clock source: {}".format(B2b.get_clock_source()))
B2b.set_trigger_input("None")
B2b.set_trigger_amount(1)
B2b.set_trigger_holdoff_time(0)
filter_type = 'sinc5'
filter_setting = 0
B2b.set_ADC_enable(0, True)
B2b.set_sample_amount(0, 10000)
B2b.set_filter_type(0, filter_type)
B2b.set_filter_rate(0, filter_setting)
B2b.software_trigger()
while B2b.is_running():
sleep(0.1)
ADC_data, _ = B2b.get_data()
#Calculate periodogram
T = B2b.sample_time[filter_type][filter_setting]
fs = 1/T
N = len(ADC_data)
gain = 10*10e6
f0, Pxx_den0 = signal.periodogram(ADC_data/gain, fs)
#Plot the FFT data
pldata0 = go.Scattergl(x=f0, y=np.sqrt(Pxx_den0), mode='lines+markers', name='ADC1')
plot_data = [pldata0]
layout = go.Layout(
title = dict(text='Spectral Density'),
xaxis = dict(title=r'$\text{Frequency [Hz]}$', type='log'),
yaxis = dict(title=r'$\text{PSD [} \text{A/}\sqrt{\text{Hz}} \text{]} $')
)
fig = go.Figure(data=plot_data, layout=layout)
iplot(fig)
D5a = D5a_module(spi_rack, module=2, reset_voltages=True)
smallest_step = D5a.get_stepsize(0)
sweep_voltages = np.arange(-3000*smallest_step, 3001*smallest_step, 100*smallest_step)
print('Smallest step: {0:.3f} uV'.format(smallest_step*1e6))
print('Start voltage: {0:.4f} V. Stop voltage: {0:.4f} V'.format(sweep_voltages[0], sweep_voltages[-1]))
print('Sweep length: {} steps'.format(len(sweep_voltages)))
B2b.set_trigger_input("Controller")
B2b.set_trigger_amount(len(sweep_voltages))
B2b.set_trigger_holdoff_time(10e-3)
filter_type = 'sinc5'
filter_setting = 10
B2b.set_ADC_enable(0, True)
B2b.set_sample_amount(0, 1)
B2b.set_filter_type(0, filter_type)
B2b.set_filter_rate(0, filter_setting)
for value in tqdm_notebook(sweep_voltages):
spi_rack.trigger_arm()
D5a.set_voltage(0, value)
while B2b.is_running():
sleep(1e-3)
ADC_data_sweep, _ = B2b.get_data()
gain = 10e6
pldata = go.Scattergl(x=sweep_voltages, y=ADC_data_sweep/gain, mode='lines+markers', name='ADC_data')
plot_data = [pldata]
layout = go.Layout(
title = dict(text='10 MOhm IV Curve'),
xaxis = dict(title='D5a voltage (V)'),
yaxis = dict(title='Current (A)')
)
fig = go.Figure(data=plot_data, layout=layout)
iplot(fig)
spi_rack.close()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import section specific modules
Step2: 4.5.2 $uv$ coverage
Step3: From the list above, you can select different configurations corresponding to real instrumental layouts.
Step4: Let's plot the distribution of the antennas from the selected (or customized) interferometer
Step5: <a id="fig
Step6: 4.5.2.1.2 The snapshot $\boldsymbol{uv}$ coverage
Step7: <a id="vis
Step8: <a id="fig
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.display import HTML
HTML('../style/course.css') #apply general CSS
from IPython.display import display
from ipywidgets import *
from mpl_toolkits.mplot3d import Axes3D
import plotBL
HTML('../style/code_toggle.html')
config = widgets.Dropdown(
options={'VLAa':'configs/vlaa.enu.txt',
'VLAb':'configs/vlab.enu.txt',
'VLAc':'configs/vlac.enu.txt',
'VLAd':'configs/vlad.enu.txt',
'WSRT':'configs/wsrt.enu.txt',
'kat7':'configs/kat-7.enu.txt',
'meerkat':'configs/meerkat.enu.txt'},
value="configs/vlaa.enu.txt",
    description="Antennas:")
display(config)
# you need to re-evaluate this box if you modify the array.
antennaPosition=np.genfromtxt(config.value)
# custom antenna distribution
custom=0
if (custom):
antennaPosition = np.zeros((10, 2), dtype=float)
antennaPosition[0,:] = [0,0]
antennaPosition[1,:] = [-4, 5]
antennaPosition[2,:] = [4, 5]
antennaPosition[3,:] = [-10,0]
antennaPosition[4,:] = [-8,-3]
antennaPosition[5,:] = [-4,-5]
antennaPosition[6,:] = [0,-6]
antennaPosition[7,:] = [4,-5]
antennaPosition[8,:] = [8,-3]
antennaPosition[9,:] = [10,0]
%matplotlib inline
mxabs = np.max(abs(antennaPosition[:]))*1.1;
# make use of pylab librery to plot
fig=plt.figure(figsize=(6,6))
plt.plot((antennaPosition[:,0]-np.mean(antennaPosition[:,0]))/1e3, \
(antennaPosition[:,1]-np.mean(antennaPosition[:,1]))/1e3, 'o')
plt.axes().set_aspect('equal')
plt.xlim(-mxabs/1e3, mxabs/1e3)
plt.ylim(-mxabs/1e3, (mxabs+5)/1e3)
plt.xlabel("E (km)")
plt.ylabel("N (km)")
plt.title("Antenna positions")
# Observation parameters
c=3e8 # Speed of light
f=1420e6 # Frequency
lam = c/f # Wavelength
time_steps = 1200 # time steps
h = np.linspace(-6,6,num=time_steps)*np.pi/12 # Hour angle window
# convert latitude and declination to radians
L = np.radians(34.0790) # Latitude of the VLA
dec = np.radians(34.)
%matplotlib inline
Ntimes=3
plotBL.plotuv(antennaPosition,L,dec,h,Ntimes,lam)
from ipywidgets import *
from IPython.display import display
def Interactplot(key,Ntimes):
print("Ntimes="+str(Ntimes))
plotBL.plotuv(antennaPosition,L,dec,h,Ntimes,lam)
slider=IntSlider(description="Ntimes",min=2,max=1200,step=100,continuous_update=False)
slider.on_trait_change(Interactplot,'value')
display(slider)
Interactplot("",2)
df=10e6 # frequency step
f0=c/lam # starting frequency
lamb0=lam # starting wavelength
def Interactplot(key,Nfreqs):
print("Nfreqs="+str(Nfreqs))
plotBL.plotuv_freq(antennaPosition,L,dec,h,Nfreqs,lamb0,df)
slider=IntSlider(description="Nfreqs",min=1,max=200,step=1,continuous_update=False)
slider.on_trait_change(Interactplot,'value')
display(slider)
Interactplot("",1)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Loading continuous data
Step2: As you can see above,
Step3: By default, the
Step4: Querying the Raw object
Step5: Note: Most of the fields of ``raw.info`` reflect metadata recorded at
Step6: Modifying Raw objects
Step7: Similar to the
Step8: If you want the channels in a specific order (e.g., for plotting),
Step9: Changing channel name and type
Step10: This next example replaces spaces in the channel names with underscores,
Step11: If for some reason the channel types in your
Step12: Selection in the time domain
Step13:
Step14: Remember that sample times don't always align exactly with requested tmin
Step15: Warning: Be careful when concatenating
Step16: You can see that it contains 2 arrays. This combination of data and times
Step17: Extracting channels by name
Step18: Extracting channels by type
Step19: Some of the parameters of
Step20: If you want the array of times,
Step21: The
Step22: Summary of ways to extract data from Raw objects
Step23: It is also possible to export the data to a
|
<ASSISTANT_TASK:>
Python Code:
import os
import numpy as np
import matplotlib.pyplot as plt
import mne
sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(sample_data_raw_file)
print(raw)
raw.crop(tmax=60).load_data()
n_time_samps = raw.n_times
time_secs = raw.times
ch_names = raw.ch_names
n_chan = len(ch_names) # note: there is no raw.n_channels attribute
print('the (cropped) sample data object has {} time samples and {} channels.'
''.format(n_time_samps, n_chan))
print('The last time sample is at {} seconds.'.format(time_secs[-1]))
print('The first few channel names are {}.'.format(', '.join(ch_names[:3])))
print() # insert a blank line in the output
# some examples of raw.info:
print('bad channels:', raw.info['bads']) # chs marked "bad" during acquisition
print(raw.info['sfreq'], 'Hz') # sampling frequency
print(raw.info['description'], '\n') # miscellaneous acquisition info
print(raw.info)
print(raw.time_as_index(20))
print(raw.time_as_index([20, 30, 40]), '\n')
print(np.diff(raw.time_as_index([1, 2, 3])))
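# The differences printed above are all approximately equal to the sampling
# frequency, since the requested times are exactly one second apart.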
eeg_and_eog = raw.copy().pick_types(meg=False, eeg=True, eog=True)
print(len(raw.ch_names), '→', len(eeg_and_eog.ch_names))
raw_temp = raw.copy()
print('Number of channels in raw_temp:')
print(len(raw_temp.ch_names), end=' → drop two → ')
raw_temp.drop_channels(['EEG 037', 'EEG 059'])
print(len(raw_temp.ch_names), end=' → pick three → ')
raw_temp.pick_channels(['MEG 1811', 'EEG 017', 'EOG 061'])
print(len(raw_temp.ch_names))
channel_names = ['EOG 061', 'EEG 003', 'EEG 002', 'EEG 001']
eog_and_frontal_eeg = raw.copy().reorder_channels(channel_names)
print(eog_and_frontal_eeg.ch_names)
raw.rename_channels({'EOG 061': 'blink detector'})
print(raw.ch_names[-3:])
channel_renaming_dict = {name: name.replace(' ', '_') for name in raw.ch_names}
raw.rename_channels(channel_renaming_dict)
print(raw.ch_names[-3:])
raw.set_channel_types({'EEG_001': 'eog'})
print(raw.copy().pick_types(meg=False, eog=True).ch_names)
raw_selection = raw.copy().crop(tmin=10, tmax=12.5)
print(raw_selection)
print(raw_selection.times.min(), raw_selection.times.max())
raw_selection.crop(tmin=1)
print(raw_selection.times.min(), raw_selection.times.max())
raw_selection1 = raw.copy().crop(tmin=30, tmax=30.1) # 0.1 seconds
raw_selection2 = raw.copy().crop(tmin=40, tmax=41.1) # 1.1 seconds
raw_selection3 = raw.copy().crop(tmin=50, tmax=51.3) # 1.3 seconds
raw_selection1.append([raw_selection2, raw_selection3]) # 2.5 seconds total
print(raw_selection1.times.min(), raw_selection1.times.max())
sampling_freq = raw.info['sfreq']
start_stop_seconds = np.array([11, 13])
start_sample, stop_sample = (start_stop_seconds * sampling_freq).astype(int)
channel_index = 0
raw_selection = raw[channel_index, start_sample:stop_sample]
print(raw_selection)
x = raw_selection[1]
y = raw_selection[0].T
plt.plot(x, y)
channel_names = ['MEG_0712', 'MEG_1022']
two_meg_chans = raw[channel_names, start_sample:stop_sample]
y_offset = np.array([5e-11, 0]) # just enough to separate the channel traces
x = two_meg_chans[1]
y = two_meg_chans[0].T + y_offset
lines = plt.plot(x, y)
plt.legend(lines, channel_names)
eeg_channel_indices = mne.pick_types(raw.info, meg=False, eeg=True)
eeg_data, times = raw[eeg_channel_indices]
print(eeg_data.shape)
data = raw.get_data()
print(data.shape)
data, times = raw.get_data(return_times=True)
print(data.shape)
print(times.shape)
first_channel_data = raw.get_data(picks=0)
eeg_and_eog_data = raw.get_data(picks=['eeg', 'eog'])
two_meg_chans_data = raw.get_data(picks=['MEG_0712', 'MEG_1022'],
start=1000, stop=2000)
print(first_channel_data.shape)
print(eeg_and_eog_data.shape)
print(two_meg_chans_data.shape)
data = raw.get_data()
np.save(file='my_data.npy', arr=data)
sampling_freq = raw.info['sfreq']
start_end_secs = np.array([10, 13])
start_sample, stop_sample = (start_end_secs * sampling_freq).astype(int)
df = raw.to_data_frame(picks=['eeg'], start=start_sample, stop=stop_sample)
# then save using df.to_csv(...), df.to_hdf(...), etc
print(df.head())
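# For example, the extracted segment could be written to disk like this
# (illustrative only, the filename is arbitrary):
# df.to_csv('eeg_segment_10s_to_13s.csv', index=False)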
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: ..that was returned from a function called at subscribe-time
Step2: ..that was returned from an Action, Callable, Runnable, or something of that sort, called at subscribe-time
Step3: ...after a specified delay
Step4: ...that emits a sequence of items repeatedly
Step6: ...from scratch, with custom logic and cleanup (calling a function again and again)
Step7: ...for each observer that subscribes OR according to a condition at subscription time
Step8: ...that emits a sequence of integers
Step9: ...at particular intervals of time
Step10: ...after a specified delay (see timer)
Step11: ...that does nothing at all
Step12: ...that excepts
|
<ASSISTANT_TASK:>
Python Code:
reset_start_time(O.just)
stream = O.just({'answer': rand()})
disposable = subs(stream)
sleep(0.5)
disposable = subs(stream) # same answer
# all stream ops work, its a real stream:
disposable = subs(stream.map(lambda x: x.get('answer', 0) * 2))
print('There is a little API difference to RxJS, see Remarks:\n')
rst(O.start)
def f():
log('function called')
return rand()
stream = O.start(func=f)
d = subs(stream)
d = subs(stream)
header("Exceptions are handled correctly (an observable should never except):")
def breaking_f():
return 1 / 0
stream = O.start(func=breaking_f)
d = subs(stream)
d = subs(stream)
# startasync: only in python3 and possibly here(?) http://www.tornadoweb.org/en/stable/concurrent.html#tornado.concurrent.Future
#stream = O.start_async(f)
#d = subs(stream)
rst(O.from_iterable)
def f():
log('function called')
return rand()
# aliases: O.from_, O.from_list
# 1.: From a tuple:
stream = O.from_iterable((1,2,rand()))
d = subs(stream)
# d = subs(stream) # same result
# 2. from a generator
gen = (rand() for j in range(3))
stream = O.from_iterable(gen)
d = subs(stream)
rst(O.from_callback)
# in my words: In the on_next of the subscriber you'll have the original arguments,
# potentially objects, e.g. user original http requests.
# i.e. you could merge those with the result stream of a backend call to
# a webservice or db and send the request.response back to the user then.
def g(f, a, b):
f(a, b)
log('called f')
stream = O.from_callback(lambda a, b, f: g(f, a, b))('fu', 'bar')
d = subs(stream.delay(200))
# d = subs(stream.delay(200)) # does NOT work
rst()
# start a stream of 0, 1, 2, .. after 200 ms, with a delay of 100 ms:
stream = O.timer(200, 100).time_interval()\
.map(lambda x: 'val:%s dt:%s' % (x.value, x.interval))\
.take(3)
d = subs(stream, name='observer1')
# intermix directly with another one
d = subs(stream, name='observer2')
rst(O.repeat)
# repeat is over *values*, not function calls. Use generate or create for function calls!
subs(O.repeat({'rand': time.time()}, 3))
header('do while:')
l = []
def condition(x):
l.append(1)
return True if len(l) < 2 else False
stream = O.just(42).do_while(condition)
d = subs(stream)
rx = O.create
rst(rx)
def f(obs):
# this function is called for every observer
obs.on_next(rand())
obs.on_next(rand())
obs.on_completed()
def cleanup():
log('cleaning up...')
return cleanup
stream = O.create(f).delay(200) # the delay causes the cleanup called before the subs gets the vals
d = subs(stream)
d = subs(stream)
sleep(0.5)
rst(title='Exceptions are handled nicely')
l = []
def excepting_f(obs):
for i in range(3):
l.append(1)
obs.on_next('%s %s (observer hash: %s)' % (i, 1. / (3 - len(l)), hash(obs) ))
obs.on_completed()
stream = O.create(excepting_f)
d = subs(stream)
d = subs(stream)
rst(title='Feature or Bug?')
print('(where are the first two values?)')
l = []
def excepting_f(obs):
for i in range(3):
l.append(1)
obs.on_next('%s %s (observer hash: %s)' % (i, 1. / (3 - len(l)), hash(obs) ))
obs.on_completed()
stream = O.create(excepting_f).delay(100)
d = subs(stream)
d = subs(stream)
# I think its an (amazing) feature, preventing to process functions results of later(!) failing functions
rx = O.generate
rst(rx)
"""
The basic form of generate takes four parameters:
  - the first item to emit
  - a function to test an item to determine whether to emit it (true) or terminate the Observable (false)
  - a function to generate the next item to test and emit based on the value of the previous item
  - a function to transform items before emitting them
"""
def generator_based_on_previous(x): return x + 1.1
def doubler(x): return 2 * x
d = subs(rx(0, lambda x: x < 4, generator_based_on_previous, doubler))
rx = O.generate_with_relative_time
rst(rx)
stream = rx(1, lambda x: x < 4, lambda x: x + 1, lambda x: x, lambda t: 100)
d = subs(stream)
rst(O.defer)
# plural! (unique per subscription)
streams = O.defer(lambda: O.just(rand()))
d = subs(streams)
d = subs(streams) # gets other values - created by subscription!
# evaluating a condition at subscription time in order to decide which of two streams to take.
rst(O.if_then)
cond = True
def should_run():
return cond
streams = O.if_then(should_run, O.return_value(43), O.return_value(56))
d = subs(streams)
log('condition will now evaluate falsy:')
cond = False
streams = O.if_then(should_run, O.return_value(43), O.return_value(rand()))
d = subs(streams)
d = subs(streams)
rst(O.range)
d = subs(O.range(0, 3))
rst(O.interval)
d = subs(O.interval(100).time_interval()\
.map(lambda x, v: '%(interval)s %(value)s' \
% ItemGetter(x)).take(3))
rst(O.empty)
d = subs(O.empty())
rst(O.never)
d = subs(O.never())
rst(O.on_error)
d = subs(O.on_error(ZeroDivisionError))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now it's time to get our hands dirty! Let's start from the beginning
Step2: Now let's make an HTTP request to Github to download the contents of this course's repository (through the URL https
Step3: 200 is the response code indicating that the request was successful (or OK).
Step4: The headers attribute of response (response.headers) gave us the headers in the form of a dictionary (or dict). The dict is a Python data structure used to store information as key/value pairs enclosed in braces {}. Dicts will be explained further ahead.
Step5: To access a specific header by its key we do
Step6: As seen earlier, the requests library makes sending HTTP requests extremely easy. For more information about requests, see the official documentation.
Step7: The response returned all repositories of the user lamenezes as a string in JSON format.
Step8: The response gave us a list of dictionaries with the information of the user lamenezes' repositories. Now the question is
Step9: Lists can store any type of data
Step10: This is how we check whether an element is part of a list
Step11: To find out the size of a list just use the len() function
Step12: To remove elements from a list there is the del keyword, which is used like this
Step13: Iterating over a list is simple
Step14: Python's for loop, unlike other languages such as C and Java, handles the index bookkeeping internally.
Step15: The range(start, end) function creates lists of values in the interval from start to end - 1. When dealing with intervals in Python, the first number is always included and the last one excluded. Here are some examples of using the range() function
Step16: Exercise
Step17: Exercise
Step18: Dictionaries
Step19: To access the elements just use their key
Step20: To change a value we do
Step21: We check whether a key exists in the dictionary as follows
Step22: To access only the keys we do
Step23: To access only the values
Step24: To get a list containing the keys and values
Step25: Care is needed when iterating over dictionaries. By default, the dictionary keys are iterated
Step26: If you want to print the values you need to use notas.values()
Step27: To iterate over both the key and the value, use the notas.items() function as shown below
Step28: To make it clearer we can rename the variables and, to make it more readable, format the output
Step29: As seen above, notas.items() returns a list of keys and values. Because of this, on each iteration we have access to each key and value of the notas dictionary, making this simpler and more semantic iteration possible.
Step30: We also have the data of the repository owner
Step31: 'owner' is a dictionary inside the repository dictionary (yes, it is possible to store dictionaries inside dictionaries)
Step32: Exercises
Step33: Print the URLs of all repositories
Step34: For the next exercises you will need to fetch the repositories of the user gvanrossum. Use the requests library and make a request to the github API (https
Step35: Now get the response content in JSON format and assign it to the variable repos
Step36: Print the name and description of all of gvanrossum's repositories
Step37: How many forks does the gvanrossum/asyncio repository have?
Step38: What is the profile link of the owner of the repositories?
|
<ASSISTANT_TASK:>
Python Code:
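# Importing the built-in "this" module prints "The Zen of Python", Tim Peters'
# collection of guiding principles for writing idiomatic Python code.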
import this
import requests
url = 'https://github.com/lamenezes/python-intro' # não é necessário declarar variáveis em python
response = requests.get(url) # e nem especificar seu tipo
response.status_code
print(response.headers)
dict(response.headers)
response.headers['content-type']
response.headers['date']
response.text[:1000] # retorna os 1000 primeiros caracteres do conteúdo da resposta
response = requests.get('https://api.github.com/users/lamenezes/repos')
response.status_code
response.text[:1000] # pegando os primeiros 1000 caracters do conteúdo da resposta
repositorios = response.json()
repo = repositorios[0] # pegando apenas o primeiro repositório por brevidade
repo
numeros = [1, 2.5, 3, 4.5, 5]
numeros
numeros[0]
numeros[3]
numeros[-1] # -1 acessa o último elemento da lista!
lista = ['foobar', False, ['a', 'b', 'c'], {'foo': 'bar'}, 10, -0.5]
lista
'foobar' in lista
'abc' in lista
-0.5 in lista
len(lista)
len(lista[2]) # o segundo elemento da lista é uma lista com 3 elementos!
len(numeros)
lista
del lista[3]
lista
del lista[-1] # remove último elemento
lista
for numero in numeros:
print(numero)
numeros = range(1, 11) # cria uma lista de números de 1 a 10
list(numeros)
for numero in numeros:
print(numero ** 2) # numero elevado a segunda potência
list(range(10)) # números de 0 a 9
list(range(10, 20)) # números de 10 a 19
list(range(10, 21)) # números de 10 a 20
from math import pi
numeros = ... # crie a lista de números de 2 a 8
for numero in numeros:
print( ... ) # imprime o número vezes pi
taxa_dolar = 3.53 # mude esse valor caso o valor do dólar tenha mudado
preços = ... # python 3 permite declarar variáveis com acentos
for preço in preços:
print(...)
notas = {'joao': 5, 'maria': 9, 'ana': 7}
notas
notas['ana']
notas['joao']
notas['joao'] = 6.5
notas
notas['ana'] = 7.5
notas
'joao' in notas # verifica se o valor é uma chave do dicionário
'joana' in notas
'ana' in notas
list(notas.keys())
list(notas.values())
list(notas.items())
for chave in notas:
print(chave)
for valor in notas.values():
print(valor)
for chave, valor in notas.items():
print(chave, valor)
for nome, nota in notas.items():
print('{0} tirou {1}.'.format(nome.capitalize(), nota))
list(notas.items())
repositorios = response.json() # lista de dicionários com dados de cada repositório
repo = repositorios[11] # vamos analisar o décimo primeiro repositório
repo
repo['full_name']
repo['description']
repo['created_at'] # data de criação
repo['html_url'] # URL da página principal do repositório
dono = repo['owner']
dono['login']
repo['owner']['login']
# digite o código aqui
# digite o código aqui
response = ...
response.status_code # status_code deve ser 200
repos = ...
len(repos) # esta linha deve retornar 5
import requests
response = requests.get('https://api.github.com/users/gvanrossum/repos')
repositorios = response.json()
for repositorio in repositorios:
print(repositorio['full_name'], repositorio['description'])
# digite o código aqui
# digite o código aqui
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Variational Inference on Probabilistic Graphical Models with Joint Distributions
Step2: The regression model is specified as follows
Step3: Expressive surrogate posteriors
Step4: Construct a JointDistribution with vector-valued standard Normal components, with sizes determined by the corresponding prior components. The components should be vector-valued so they can be transformed by the linear operator.
Step5: Build a trainable blockwise lower-triangular linear operator. We'll apply it to the standard Normal distribution to implement a (trainable) blockwise matrix transformation and induce the correlation structure of the posterior.
Step6: After applying the linear operator to the standard Normal distribution, apply a multipart Shift bijector to allow the mean to take nonzero values.
Step7: The resulting multivariate Normal distribution, obtained by transforming the standard Normal distribution with the scale and location bijectors, must be reshaped and restructured to match the prior, and finally constrained to the support of the prior.
Step8: Now, put it all together -- chain the trainable bijectors together and apply them to the base standard Normal distribution to construct the surrogate posterior.
Step9: Train the multivariate Normal surrogate posterior.
Step10: Since the trained surrogate posterior is a TFP distribution, we can take samples from it and process them to produce posterior credible intervals for the parameters.
Step11: Inverse Autoregressive Flow surrogate posterior
Step12: Train the IAF surrogate posterior.
Step13: The credible intervals for the IAF surrogate posterior appear similar to those of the constrained multivariate Normal.
Step14: Baseline
Step15: In this case, the mean field surrogate posterior gives similar results to the more expressive surrogate posteriors, indicating that this simpler model may be adequate for the inference task.
Step16: Ground truth
Step17: Plot sample traces to sanity-check HMC results.
Step18: All three surrogate posteriors produced credible intervals that are visually similar to the HMC samples, though sometimes under-dispersed due to the effect of the ELBO loss, as is common in VI.
Step19: Additional results
Step20: Evidence Lower Bound (ELBO)
Step21: Posterior samples
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
!pip3 install -q tf-nightly tfp-nightly
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
import tensorflow as tf
import tensorflow_datasets as tfds
import tensorflow_probability as tfp
import warnings
tfd = tfp.distributions
tfb = tfp.bijectors
plt.rcParams['figure.facecolor'] = '1.'
# Load the Radon dataset from `tensorflow_datasets` and filter to data from
# Minnesota.
dataset = tfds.as_numpy(
tfds.load('radon', split='train').filter(
lambda x: x['features']['state'] == 'MN').batch(10**9))
# Dependent variable: Radon measurements by house.
dataset = next(iter(dataset))
radon_measurement = dataset['activity'].astype(np.float32)
radon_measurement[radon_measurement <= 0.] = 0.1
log_radon = np.log(radon_measurement)
# Measured uranium concentrations in surrounding soil.
uranium_measurement = dataset['features']['Uppm'].astype(np.float32)
log_uranium = np.log(uranium_measurement)
# County indicator.
county_strings = dataset['features']['county'].astype('U13')
unique_counties, county = np.unique(county_strings, return_inverse=True)
county = county.astype(np.int32)
num_counties = unique_counties.size
# Floor on which the measurement was taken.
floor_of_house = dataset['features']['floor'].astype(np.int32)
# Average floor by county (contextual effect).
county_mean_floor = []
for i in range(num_counties):
county_mean_floor.append(floor_of_house[county == i].mean())
county_mean_floor = np.array(county_mean_floor, dtype=log_radon.dtype)
floor_by_county = county_mean_floor[county]
# Create variables for fixed effects.
floor_weight = tf.Variable(0.)
bias = tf.Variable(0.)
# Variables for scale parameters.
log_radon_scale = tfp.util.TransformedVariable(1., tfb.Exp())
county_effect_scale = tfp.util.TransformedVariable(1., tfb.Exp())
# Define the probabilistic graphical model as a JointDistribution.
@tfd.JointDistributionCoroutineAutoBatched
def model():
uranium_weight = yield tfd.Normal(0., scale=1., name='uranium_weight')
county_floor_weight = yield tfd.Normal(
0., scale=1., name='county_floor_weight')
county_effect = yield tfd.Sample(
tfd.Normal(0., scale=county_effect_scale),
sample_shape=[num_counties], name='county_effect')
yield tfd.Normal(
loc=(log_uranium * uranium_weight + floor_of_house* floor_weight
+ floor_by_county * county_floor_weight
+ tf.gather(county_effect, county, axis=-1)
+ bias),
scale=log_radon_scale[..., tf.newaxis],
name='log_radon')
# Pin the observed `log_radon` values to model the un-normalized posterior.
target_model = model.experimental_pin(log_radon=log_radon)
# Determine the `event_shape` of the posterior, and calculate the size of each
# `event_shape` component. These determine the sizes of the components of the
# underlying standard Normal distribution, and the dimensions of the blocks in
# the blockwise matrix transformation.
event_shape = target_model.event_shape_tensor()
flat_event_shape = tf.nest.flatten(event_shape)
flat_event_size = tf.nest.map_structure(tf.reduce_prod, flat_event_shape)
# The `event_space_bijector` maps unconstrained values (in R^n) to the support
# of the prior -- we'll need this at the end to constrain Multivariate Normal
# samples to the prior's support.
event_space_bijector = target_model.experimental_default_event_space_bijector()
base_standard_dist = tfd.JointDistributionSequential(
[tfd.Sample(tfd.Normal(0., 1.), s) for s in flat_event_size])
operators = (
(tf.linalg.LinearOperatorDiag,), # Variance of uranium weight (scalar).
(tf.linalg.LinearOperatorFullMatrix, # Covariance between uranium and floor-by-county weights.
tf.linalg.LinearOperatorDiag), # Variance of floor-by-county weight (scalar).
(None, # Independence between uranium weight and county effects.
None, # Independence between floor-by-county and county effects.
tf.linalg.LinearOperatorDiag) # Independence among the 85 county effects.
)
block_tril_linop = (
tfp.experimental.vi.util.build_trainable_linear_operator_block(
operators, flat_event_size))
scale_bijector = tfb.ScaleMatvecLinearOperatorBlock(block_tril_linop)
loc_bijector = tfb.JointMap(
tf.nest.map_structure(
lambda s: tfb.Shift(
tf.Variable(tf.random.uniform(
(s,), minval=-2., maxval=2., dtype=tf.float32))),
flat_event_size))
# Reshape each component to match the prior, using a nested structure of
# `Reshape` bijectors wrapped in `JointMap` to form a multipart bijector.
reshape_bijector = tfb.JointMap(
tf.nest.map_structure(tfb.Reshape, flat_event_shape))
# Restructure the flat list of components to match the prior's structure
unflatten_bijector = tfb.Restructure(
tf.nest.pack_sequence_as(
event_shape, range(len(flat_event_shape))))
surrogate_posterior = tfd.TransformedDistribution(
base_standard_dist,
bijector = tfb.Chain( # Note that the chained bijectors are applied in reverse order
[
event_space_bijector, # constrain the surrogate to the support of the prior
unflatten_bijector, # pack the reshaped components into the `event_shape` structure of the posterior
reshape_bijector, # reshape the vector-valued components to match the shapes of the posterior components
loc_bijector, # allow for nonzero mean
scale_bijector # apply the block matrix transformation to the standard Normal distribution
]))
optimizer = tf.optimizers.Adam(learning_rate=1e-2)
mvn_loss = tfp.vi.fit_surrogate_posterior(
target_model.unnormalized_log_prob,
surrogate_posterior,
optimizer=optimizer,
num_steps=10**4,
sample_size=16,
jit_compile=True)
mvn_samples = surrogate_posterior.sample(1000)
mvn_final_elbo = tf.reduce_mean(
target_model.unnormalized_log_prob(*mvn_samples)
- surrogate_posterior.log_prob(mvn_samples))
print('Multivariate Normal surrogate posterior ELBO: {}'.format(mvn_final_elbo))
plt.plot(mvn_loss)
plt.xlabel('Training step')
_ = plt.ylabel('Loss value')
st_louis_co = 69 # Index of St. Louis, the county with the most observations.
hennepin_co = 25 # Index of Hennepin, with the second-most observations.
def pack_samples(samples):
return {'County effect (St. Louis)': samples.county_effect[..., st_louis_co],
'County effect (Hennepin)': samples.county_effect[..., hennepin_co],
'Uranium weight': samples.uranium_weight,
'Floor-by-county weight': samples.county_floor_weight}
def plot_boxplot(posterior_samples):
fig, axes = plt.subplots(1, 4, figsize=(16, 4))
# Invert the results dict for easier plotting.
k = list(posterior_samples.values())[0].keys()
plot_results = {
v: {p: posterior_samples[p][v] for p in posterior_samples} for v in k}
for i, (var, var_results) in enumerate(plot_results.items()):
sns.boxplot(data=list(var_results.values()), ax=axes[i],
width=0.18*len(var_results), whis=(2.5, 97.5))
# axes[i].boxplot(list(var_results.values()), whis=(2.5, 97.5))
axes[i].title.set_text(var)
fs = 10 if len(var_results) < 4 else 8
axes[i].set_xticklabels(list(var_results.keys()), fontsize=fs)
results = {'Multivariate Normal': pack_samples(mvn_samples)}
print('Bias is: {:.2f}'.format(bias.numpy()))
print('Floor fixed effect is: {:.2f}'.format(floor_weight.numpy()))
plot_boxplot(results)
# Build a standard Normal with a vector `event_shape`, with length equal to the
# total number of degrees of freedom in the posterior.
base_distribution = tfd.Sample(
tfd.Normal(0., 1.), sample_shape=[tf.reduce_sum(flat_event_size)])
# Apply an IAF to the base distribution.
num_iafs = 2
iaf_bijectors = [
tfb.Invert(tfb.MaskedAutoregressiveFlow(
shift_and_log_scale_fn=tfb.AutoregressiveNetwork(
params=2, hidden_units=[256, 256], activation='relu')))
for _ in range(num_iafs)
]
# Split the base distribution's `event_shape` into components that are equal
# in size to the prior's components.
split = tfb.Split(flat_event_size)
# Chain these bijectors and apply them to the standard Normal base distribution
# to build the surrogate posterior. `event_space_bijector`,
# `unflatten_bijector`, and `reshape_bijector` are the same as in the
# multivariate Normal surrogate posterior.
iaf_surrogate_posterior = tfd.TransformedDistribution(
base_distribution,
bijector=tfb.Chain([
event_space_bijector, # constrain the surrogate to the support of the prior
unflatten_bijector, # pack the reshaped components into the `event_shape` structure of the prior
reshape_bijector, # reshape the vector-valued components to match the shapes of the prior components
split] + # Split the samples into components of the same size as the prior components
iaf_bijectors # Apply a flow model to the Tensor-valued standard Normal distribution
))
optimizer=tf.optimizers.Adam(learning_rate=1e-2)
iaf_loss = tfp.vi.fit_surrogate_posterior(
target_model.unnormalized_log_prob,
iaf_surrogate_posterior,
optimizer=optimizer,
num_steps=10**4,
sample_size=4,
jit_compile=True)
iaf_samples = iaf_surrogate_posterior.sample(1000)
iaf_final_elbo = tf.reduce_mean(
target_model.unnormalized_log_prob(*iaf_samples)
- iaf_surrogate_posterior.log_prob(iaf_samples))
print('IAF surrogate posterior ELBO: {}'.format(iaf_final_elbo))
plt.plot(iaf_loss)
plt.xlabel('Training step')
_ = plt.ylabel('Loss value')
results['IAF'] = pack_samples(iaf_samples)
plot_boxplot(results)
# A block-diagonal linear operator, in which each block is a diagonal operator,
# transforms the standard Normal base distribution to produce a mean-field
# surrogate posterior.
operators = (tf.linalg.LinearOperatorDiag,
tf.linalg.LinearOperatorDiag,
tf.linalg.LinearOperatorDiag)
block_diag_linop = (
tfp.experimental.vi.util.build_trainable_linear_operator_block(
operators, flat_event_size))
mean_field_scale = tfb.ScaleMatvecLinearOperatorBlock(block_diag_linop)
mean_field_loc = tfb.JointMap(
tf.nest.map_structure(
lambda s: tfb.Shift(
tf.Variable(tf.random.uniform(
(s,), minval=-2., maxval=2., dtype=tf.float32))),
flat_event_size))
mean_field_surrogate_posterior = tfd.TransformedDistribution(
base_standard_dist,
bijector = tfb.Chain( # Note that the chained bijectors are applied in reverse order
[
event_space_bijector, # constrain the surrogate to the support of the prior
unflatten_bijector, # pack the reshaped components into the `event_shape` structure of the posterior
reshape_bijector, # reshape the vector-valued components to match the shapes of the posterior components
mean_field_loc, # allow for nonzero mean
mean_field_scale # apply the block matrix transformation to the standard Normal distribution
]))
optimizer=tf.optimizers.Adam(learning_rate=1e-2)
mean_field_loss = tfp.vi.fit_surrogate_posterior(
target_model.unnormalized_log_prob,
mean_field_surrogate_posterior,
optimizer=optimizer,
num_steps=10**4,
sample_size=16,
jit_compile=True)
mean_field_samples = mean_field_surrogate_posterior.sample(1000)
mean_field_final_elbo = tf.reduce_mean(
target_model.unnormalized_log_prob(*mean_field_samples)
- mean_field_surrogate_posterior.log_prob(mean_field_samples))
print('Mean-field surrogate posterior ELBO: {}'.format(mean_field_final_elbo))
plt.plot(mean_field_loss)
plt.xlabel('Training step')
_ = plt.ylabel('Loss value')
results['Mean Field'] = pack_samples(mean_field_samples)
plot_boxplot(results)
num_chains = 8
num_leapfrog_steps = 3
step_size = 0.4
num_steps=20000
flat_event_shape = tf.nest.flatten(target_model.event_shape)
enum_components = list(range(len(flat_event_shape)))
bijector = tfb.Restructure(
enum_components,
tf.nest.pack_sequence_as(target_model.event_shape, enum_components))(
target_model.experimental_default_event_space_bijector())
current_state = bijector(
tf.nest.map_structure(
lambda e: tf.zeros([num_chains] + list(e), dtype=tf.float32),
target_model.event_shape))
hmc = tfp.mcmc.HamiltonianMonteCarlo(
target_log_prob_fn=target_model.unnormalized_log_prob,
num_leapfrog_steps=num_leapfrog_steps,
step_size=[tf.fill(s.shape, step_size) for s in current_state])
hmc = tfp.mcmc.TransformedTransitionKernel(
hmc, bijector)
hmc = tfp.mcmc.DualAveragingStepSizeAdaptation(
hmc,
num_adaptation_steps=int(num_steps // 2 * 0.8),
target_accept_prob=0.9)
chain, is_accepted = tf.function(
lambda current_state: tfp.mcmc.sample_chain(
current_state=current_state,
kernel=hmc,
num_results=num_steps // 2,
num_burnin_steps=num_steps // 2,
trace_fn=lambda _, pkr:
(pkr.inner_results.inner_results.is_accepted),
),
autograph=False,
jit_compile=True)(current_state)
accept_rate = tf.reduce_mean(tf.cast(is_accepted, tf.float32))
ess = tf.nest.map_structure(
lambda c: tfp.mcmc.effective_sample_size(
c,
cross_chain_dims=1,
filter_beyond_positive_pairs=True),
chain)
r_hat = tf.nest.map_structure(tfp.mcmc.potential_scale_reduction, chain)
hmc_samples = pack_samples(
tf.nest.pack_sequence_as(target_model.event_shape, chain))
print('Acceptance rate is {}'.format(accept_rate))
def plot_traces(var_name, samples):
fig, axes = plt.subplots(1, 2, figsize=(14, 1.5), sharex='col', sharey='col')
for chain in range(num_chains):
s = samples.numpy()[:, chain]
axes[0].plot(s, alpha=0.7)
sns.kdeplot(s, ax=axes[1], shade=False)
axes[0].title.set_text("'{}' trace".format(var_name))
axes[1].title.set_text("'{}' distribution".format(var_name))
axes[0].set_xlabel('Iteration')
warnings.filterwarnings('ignore')
for var, var_samples in hmc_samples.items():
plot_traces(var, var_samples)
results['HMC'] = hmc_samples
plot_boxplot(results)
#@title Plotting functions
plt.rcParams.update({'axes.titlesize': 'medium', 'xtick.labelsize': 'medium'})
def plot_loss_and_elbo():
fig, axes = plt.subplots(1, 2, figsize=(12, 4))
axes[0].scatter([0, 1, 2],
[mvn_final_elbo.numpy(),
iaf_final_elbo.numpy(),
mean_field_final_elbo.numpy()])
axes[0].set_xticks(ticks=[0, 1, 2])
axes[0].set_xticklabels(labels=[
'Multivariate Normal', 'IAF', 'Mean Field'])
axes[0].title.set_text('Evidence Lower Bound (ELBO)')
axes[1].plot(mvn_loss, label='Multivariate Normal')
axes[1].plot(iaf_loss, label='IAF')
axes[1].plot(mean_field_loss, label='Mean Field')
axes[1].set_ylim([1000, 4000])
axes[1].set_xlabel('Training step')
axes[1].set_ylabel('Loss (negative ELBO)')
axes[1].title.set_text('Loss')
plt.legend()
plt.show()
plt.rcParams.update({'axes.titlesize': 'medium', 'xtick.labelsize': 'small'})
def plot_kdes(num_chains=8):
fig, axes = plt.subplots(2, 2, figsize=(12, 8))
k = list(results.values())[0].keys()
plot_results = {
v: {p: results[p][v] for p in results} for v in k}
for i, (var, var_results) in enumerate(plot_results.items()):
ax = axes[i % 2, i // 2]
for posterior, posterior_results in var_results.items():
if posterior == 'HMC':
label = posterior
for chain in range(num_chains):
sns.kdeplot(
posterior_results[:, chain],
ax=ax, shade=False, color='k', linestyle=':', label=label)
label=None
else:
sns.kdeplot(
posterior_results, ax=ax, shade=False, label=posterior)
ax.title.set_text('{}'.format(var))
ax.legend()
plot_loss_and_elbo()
plot_kdes()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load the data
Step2: As an alternative, the data can be loaded directly from the UCI repository using the urllib library.
Step3: Extract the target variable from the data. The classes in this problem are imbalanced
Step4: A two-layer neural network
Step5: Initialize the main parameters of the problem
Step6: Initialize the ClassificationDataSet data structure used by the pybrain library. For initialization the structure takes two arguments
Step7: Initialize the two-layer network and optimize its parameters. The initialization arguments are
Step8: Optimize the network parameters. The plot below shows the convergence of the error function on the training/validation parts.
Step9: Compute the misclassification rate on the training and validation sets.
Step10: Exercise. Determining the optimal number of neurons.
|
<ASSISTANT_TASK:>
Python Code:
# Выполним инициализацию основных используемых модулей
%matplotlib inline
import random
import matplotlib.pyplot as plt
from sklearn.preprocessing import normalize
import numpy as np
with open('winequality-red.csv') as f:
f.readline() # пропуск заголовочной строки
data = np.loadtxt(f, delimiter=';')
import urllib
# URL for the Wine Quality Data Set (UCI Machine Learning Repository)
url = "https://archive.ics.uci.edu/ml/machine-learning-databases/wine-quality/winequality-red.csv"
# загрузка файла
f = urllib.urlopen(url)
f.readline() # пропуск заголовочной строки
data = np.loadtxt(f, delimiter=';')
TRAIN_SIZE = 0.7 # Разделение данных на обучающую и контрольную части в пропорции 70/30%
from sklearn.model_selection import train_test_split
y = data[:, -1]
np.place(y, y < 5, 5)
np.place(y, y > 7, 7)
y -= min(y)
X = data[:, :-1]
X = normalize(X)
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=TRAIN_SIZE, random_state=0)
from pybrain.datasets import ClassificationDataSet # Структура данных pybrain
from pybrain.tools.shortcuts import buildNetwork
from pybrain.supervised.trainers import BackpropTrainer
from pybrain.structure.modules import SoftmaxLayer
from pybrain.utilities import percentError
# Определение основных констант
HIDDEN_NEURONS_NUM = 100 # Количество нейронов, содержащееся в скрытом слое сети
MAX_EPOCHS = 100 # Максимальное число итераций алгоритма оптимизации параметров сети
# Конвертация данных в структуру ClassificationDataSet
# Обучающая часть
ds_train = ClassificationDataSet(np.shape(X)[1], nb_classes=len(np.unique(y_train)))
# Первый аргумент -- количество признаков np.shape(X)[1], второй аргумент -- количество меток классов len(np.unique(y_train)))
ds_train.setField('input', X_train) # Инициализация объектов
ds_train.setField('target', y_train[:, np.newaxis]) # Инициализация ответов; np.newaxis создает вектор-столбец
ds_train._convertToOneOfMany( ) # Бинаризация вектора ответов
# Контрольная часть
ds_test = ClassificationDataSet(np.shape(X)[1], nb_classes=len(np.unique(y_train)))
ds_test.setField('input', X_test)
ds_test.setField('target', y_test[:, np.newaxis])
ds_test._convertToOneOfMany( )
np.random.seed(0) # Зафиксируем seed для получения воспроизводимого результата
# Построение сети прямого распространения (Feedforward network)
net = buildNetwork(ds_train.indim, HIDDEN_NEURONS_NUM, ds_train.outdim, outclass=SoftmaxLayer)
# ds.indim -- количество нейронов входного слоя, равное количеству признаков
# ds.outdim -- количество нейронов выходного слоя, равное количеству меток классов
# SoftmaxLayer -- функция активации, пригодная для решения задачи многоклассовой классификации
init_params = np.random.random((len(net.params))) # Инициализируем веса сети для получения воспроизводимого результата
net._setParameters(init_params)
random.seed(0)
# Модуль настройки параметров pybrain использует модуль random; зафиксируем seed для получения воспроизводимого результата
trainer = BackpropTrainer(net, dataset=ds_train) # Инициализируем модуль оптимизации
err_train, err_val = trainer.trainUntilConvergence(maxEpochs=MAX_EPOCHS)
line_train = plt.plot(err_train, 'b', err_val, 'r') # Построение графика
xlab = plt.xlabel('Iterations')
ylab = plt.ylabel('Error')
res_train = net.activateOnDataset(ds_train).argmax(axis=1) # Подсчет результата на обучающей выборке
print 'Error on train: ', percentError(res_train, ds_train['target'].argmax(axis=1)), '%' # Подсчет ошибки
res_test = net.activateOnDataset(ds_test).argmax(axis=1) # Подсчет результата на тестовой выборке
print 'Error on test: ', percentError(res_test, ds_test['target'].argmax(axis=1)), '%' # Подсчет ошибки
random.seed(0) # Зафиксируем seed для получния воспроизводимого результата
np.random.seed(0)
def plot_classification_error(hidden_neurons_num, res_train_vec, res_test_vec):
# hidden_neurons_num -- массив размера h, содержащий количество нейронов, по которому предполагается провести перебор,
# hidden_neurons_num = [50, 100, 200, 500, 700, 1000];
# res_train_vec -- массив размера h, содержащий значения доли неправильных ответов классификации на обучении;
# res_train_vec -- массив размера h, содержащий значения доли неправильных ответов классификации на контроле
plt.figure()
plt.plot(hidden_neurons_num, res_train_vec)
plt.plot(hidden_neurons_num, res_test_vec, '-r')
def write_answer_nn(optimal_neurons_num):
with open("nnets_answer1.txt", "w") as fout:
fout.write(str(optimal_neurons_num))
hidden_neurons_num = [50, 100, 200, 500, 700, 1000]
res_train_vec = list()
res_test_vec = list()
for nnum in hidden_neurons_num:
net = buildNetwork(ds_train.indim, nnum, ds_train.outdim, outclass=SoftmaxLayer)
init_params = np.random.random((len(net.params))) # Инициализируем веса сети для получения воспроизводимого результата
net._setParameters(init_params)
trainer = BackpropTrainer(net, dataset=ds_train)
trainer.trainUntilConvergence(maxEpochs=MAX_EPOCHS)
res_train = net.activateOnDataset(ds_train).argmax(axis=1) # Подсчет результата на обучающей выборке
res_test = net.activateOnDataset(ds_test).argmax(axis=1) # Подсчет результата на тестовой выборке
train_err = percentError(res_train, ds_train['target'].argmax(axis=1))
test_err = percentError(res_test, ds_test['target'].argmax(axis=1))
res_train_vec.append(train_err)
res_test_vec.append(test_err)
# Постройте график зависимости ошибок на обучении и контроле в зависимости от количества нейронов
plot_classification_error(hidden_neurons_num, res_train_vec, res_test_vec)
# Запишите в файл количество нейронов, при котором достигается минимум ошибки на контроле
write_answer_nn(hidden_neurons_num[res_test_vec.index(min(res_test_vec))])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Use this to automate the process. Be careful, it can overwrite current results
Step2: Now we will obtain the data from the calculated empirical variogram.
Step3: Instantiating the variogram object
Step4: Instantiating theoretical variogram model
|
<ASSISTANT_TASK:>
Python Code:
# Load Biospytial modules and etc.
%matplotlib inline
import sys
sys.path.append('/apps')
sys.path.append('..')
sys.path.append('../spystats')
import django
django.setup()
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
## Use the ggplot style
plt.style.use('ggplot')
import tools
from HEC_runs.fit_fia_logbiomass_logspp_GLS import initAnalysis
from HEC_runs.fit_fia_logbiomass_logspp_GLS import prepareDataFrame,loadVariogramFromData,buildSpatialStructure, calculateGLS, initAnalysis, fitGLSRobust
section = initAnalysis("/RawDataCSV/idiv_share/FIA_Plots_Biomass_11092017.csv",
"/apps/external_plugins/spystats/HEC_runs/results/variogram/data_envelope.csv",
-130,-60,30,40)
#section = initAnalysis("/RawDataCSV/idiv_share/plotsClimateData_11092017.csv",
# "/apps/external_plugins/spystats/HEC_runs/results/variogram/data_envelope.csv",
# -85,-80,30,35)
# IN HEC
#section = initAnalysis("/home/hpc/28/escamill/csv_data/idiv/FIA_Plots_Biomass_11092017.csv","/home/hpc/28/escamill/spystats/HEC_runs/results/variogram/data_envelope.csv",-85,-80,30,35)
section.shape
gvg,tt = loadVariogramFromData("/apps/external_plugins/spystats/HEC_runs/results/variogram/data_envelope.csv",section)
gvg.plot(refresh=False,with_envelope=True)
resum,gvgn,resultspd,results = fitGLSRobust(section,gvg,num_iterations=10,distance_threshold=1000000)
resum.as_text
plt.plot(resultspd.rsq)
plt.title("GLS feedback algorithm")
plt.xlabel("Number of iterations")
plt.ylabel("R-sq fitness estimator")
resultspd.columns
a = map(lambda x : x.to_dict(), resultspd['params'])
paramsd = pd.DataFrame(a)
paramsd
plt.plot(paramsd.Intercept.loc[1:])
plt.get_yaxis().get_major_formatter().set_useOffset(False)
fig = plt.figure(figsize=(10,10))
plt.plot(paramsd.logSppN.iloc[1:])
variogram_data_path = "/apps/external_plugins/spystats/HEC_runs/results/variogram/data_envelope.csv"
thrs_dist = 100000
emp_var_log_log = pd.read_csv(variogram_data_path)
gvg = tools.Variogram(section,'logBiomass',using_distance_threshold=thrs_dist)
gvg.envelope = emp_var_log_log
gvg.empirical = emp_var_log_log.variogram
gvg.lags = emp_var_log_log.lags
#emp_var_log_log = emp_var_log_log.dropna()
#vdata = gvg.envelope.dropna()
matern_model = tools.MaternVariogram(sill=0.34,range_a=100000,nugget=0.33,kappa=4)
whittle_model = tools.WhittleVariogram(sill=0.340246718396,range_a=41188.0234423,nugget=0.329937603763,alpha=1.12143687914)
exp_model = tools.ExponentialVariogram(sill=0.34,range_a=100000,nugget=0.33)
gaussian_model = tools.GaussianVariogram(sill=0.34,range_a=100000,nugget=0.33)
spherical_model = tools.SphericalVariogram(sill=0.34,range_a=100000,nugget=0.33)
gvg.model = whittle_model
#gvg.model = matern_model
#models = map(lambda model : gvg.fitVariogramModel(model),[matern_model,whittle_model,exp_model,gaussian_model,spherical_model])
gvg.fitVariogramModel(whittle_model)
import numpy as np
xx = np.linspace(0,1000000,1000)
gvg.plot(refresh=False,with_envelope=True)
plt.plot(xx,gvg.model.f(xx),lw=2.0,c='k')
plt.title("Empirical Variogram with fitted Whittle Model")
expdat = pd.DataFrame({'x':xx,'tvar':gvg.model.f(xx)})
expdat.to_csv('/outputs/theoretical_var.csv')
def randomSelection(n,p):
    # Draw p row indices without replacement; 'new_data' is assumed to be
    # defined earlier in the notebook (e.g. the cleaned section frame).
    idxs = np.random.choice(n,p,replace=False)
    random_sample = new_data.iloc[idxs]
    return random_sample
#################
n = len(new_data)
p = 3000 # The amount of samples taken (let's do it without replacement)
random_sample = randomSelection(n,p)
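# Sanity check on the sample (a sketch; assumes 'new_data' carries the
# 'logBiomass' column used above): an unbiased sample mean should sit
# close to the full-data mean.
print(random_sample.logBiomass.mean(), new_data.logBiomass.mean())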
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: But, this is a painful way to construct request packets. Hence, other high level abstractions are available.
Step2: build_http_request ensures a User-Agent header. You can provide your own too
Step3: Or, if you don't want a User-Agent header at all
Step4: To add a connection close header, simply pass conn_close=True
Step5: For POST requests, provide the body attribute
Step6: For chunked data, simply include a Transfer-Encoding header. This will omit the Content-Length header then
|
<ASSISTANT_TASK:>
Python Code:
from proxy.http.parser import HttpParser, httpParserTypes
from proxy.http import httpMethods
from proxy.common.utils import HTTP_1_1
request = HttpParser(httpParserTypes.REQUEST_PARSER)
request.path, request.method, request.version = b'/', httpMethods.GET, HTTP_1_1
request.add_header(b'Host', b'jaxl.com')
print(request.build())
from proxy.common.utils import build_http_request
build_http_request(
method=httpMethods.GET,
url=b'/',
headers={b'Host': b'jaxl.com'},
)
build_http_request(
method=httpMethods.GET,
url=b'/',
headers={
b'Host': b'jaxl.com',
b'User-Agent': b'my app v1'
},
)
build_http_request(
method=httpMethods.GET,
url=b'/',
headers={b'Host': b'jaxl.com'},
no_ua=True,
)
build_http_request(
method=httpMethods.GET,
url=b'/',
headers={b'Host': b'jaxl.com'},
conn_close=True,
)
build_http_request(
method=httpMethods.POST,
url=b'/',
headers={b'Host': b'jaxl.com'},
body=b'key=value&hello=world',
content_type=b'application/x-www-form-urlencoded',
conn_close=True,
)
from proxy.http.parser import ChunkParser
build_http_request(
method=httpMethods.POST,
url=b'/',
headers={
b'Host': b'jaxl.com',
b'Transfer-Encoding': b'chunked',
},
body=ChunkParser.to_chunks(b'key=value&hello=world', chunk_size=5),
content_type=b'application/x-www-form-urlencoded',
conn_close=True,
)
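# Round-trip check (a sketch, not part of the tutorial above): feed the built
# bytes back through HttpParser. Assumption: parse() accepts the raw request
# as a memoryview in this proxy.py version.
raw = build_http_request(
    method=httpMethods.GET,
    url=b'/',
    headers={b'Host': b'jaxl.com'},
)
parser = HttpParser(httpParserTypes.REQUEST_PARSER)
parser.parse(memoryview(raw))
print(parser.method, parser.path)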
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's make sure we install the necessary version of tensorflow. After doing the pip install above, click Restart the kernel on the notebook so that the Python environment picks up the new packages.
Step2: Locating the CSV files
Step3: Use tf.data to read the CSV files
Step4: Note that this is a prefetched dataset. If you loop over the dataset, you'll get the rows one-by-one. Let's convert each row into a Python dictionary
Step5: What we really need is a dictionary of features + a label. So, we have to do two things to the above dictionary. (1) remove the unwanted column "key" and (2) keep the label separate from the features.
Step6: Batching
Step7: Shuffling
|
<ASSISTANT_TASK:>
Python Code:
%%bash
export PROJECT=$(gcloud config list project --format "value(core.project)")
echo "Your current GCP Project Name is: "$PROJECT
!pip install tensorflow==2.1.0 --user
import os, json, math
import numpy as np
import shutil
import logging
# SET TF ERROR LOG VERBOSITY
logging.getLogger("tensorflow").setLevel(logging.ERROR)
import tensorflow as tf
print("TensorFlow version: ",tf.version.VERSION)
PROJECT = "your-gcp-project-here" # REPLACE WITH YOUR PROJECT NAME
REGION = "us-central1" # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
# Do not change these
os.environ["PROJECT"] = PROJECT
os.environ["REGION"] = REGION
os.environ["BUCKET"] = PROJECT # DEFAULT BUCKET WILL BE PROJECT ID
if PROJECT == "your-gcp-project-here":
print("Don't forget to update your PROJECT name! Currently:", PROJECT)
# If you're not using TF 2.0+, let's enable eager execution
if tf.version.VERSION < '2.0':
print('Enabling v2 behavior and eager execution; if necessary restart kernel, and rerun notebook')
tf.enable_v2_behavior()
!ls -l ../../data/*.csv
CSV_COLUMNS = ['fare_amount', 'pickup_datetime',
'pickup_longitude', 'pickup_latitude',
'dropoff_longitude', 'dropoff_latitude',
'passenger_count', 'key']
LABEL_COLUMN = 'fare_amount'
DEFAULTS = [[0.0],['na'],[0.0],[0.0],[0.0],[0.0],[0.0],['na']]
# load the training data
def load_dataset(pattern):
# TODO 1: Use tf.data to read CSV files
# Tip: Refer to: https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/data/experimental/make_csv_dataset
    return tf.data.experimental.make_csv_dataset(pattern, 1, CSV_COLUMNS, DEFAULTS) # completed using the columns/defaults above
# TODO 2: Load the training data into memory
tempds = load_dataset('../../data/taxi-train*') # load the taxi-train* files into memory
print(tempds)
# print a few of the rows
for n, data in enumerate(tempds):
row_data = {k: v.numpy() for k,v in data.items()}
print(n, row_data)
if n > 2:
break
# get features, label
def features_and_labels(row_data):
# TODO 3: Prune the data by removing column named 'key'
    for unwanted_col in ['pickup_datetime', 'key']: # remove the unwanted 'key' column
row_data.pop(unwanted_col)
label = row_data.pop(LABEL_COLUMN)
return row_data, label # features, label
# print a few rows to make it sure works
for n, data in enumerate(tempds):
row_data = {k: v.numpy() for k,v in data.items()}
features, label = features_and_labels(row_data)
print(n, label, features)
if n > 2:
break
def load_dataset(pattern, batch_size):
return (
# TODO 4: Use tf.data to map features and labels
        tf.data.experimental.make_csv_dataset(pattern, batch_size, CSV_COLUMNS, DEFAULTS)
        .map(features_and_labels) # features, label
)
# TODO 5: Experiment by adjusting batch size
# try changing the batch size and watch what happens.
tempds = load_dataset('../../data/taxi-train*', batch_size=2)
print(list(tempds.take(3))) # truncate and print as a list
def load_dataset(pattern, batch_size=1, mode=tf.estimator.ModeKeys.EVAL):
dataset = (tf.data.experimental.make_csv_dataset(pattern, batch_size, CSV_COLUMNS, DEFAULTS)
.map(features_and_labels) # features, label
.cache())
if mode == tf.estimator.ModeKeys.TRAIN:
# TODO 6: Add dataset.shuffle 1000 to our dataset and have it repeat
# Tip: Refer to https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/data/Dataset#shuffle
dataset = dataset.shuffle(1000).repeat()
dataset = dataset.prefetch(1) # take advantage of multi-threading; 1=AUTOTUNE
return dataset
tempds = load_dataset('../../data/taxi-train*', 2, tf.estimator.ModeKeys.TRAIN)
print(list(tempds.take(1)))
tempds = load_dataset('../../data/taxi-valid*', 2, tf.estimator.ModeKeys.EVAL)
print(list(tempds.take(1)))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Activation Functions
Step2: Q2. Apply sigmoid and tanh to x.
Step3: Q3. Apply softmax to x.
Step4: Q4. Apply dropout with keep_prob=.5 to x.
Step5: Fully Connected
Step6: Convolution
Step7: Q7. Apply 3 kernels of width-height (2, 2), stride 1, dilation_rate 2 and valid padding to x.
Step8: Q8. Apply 4 kernels of width-height (3, 3), stride 2, and same padding to x.
Step9: Q9. Apply 4 times of kernels of width-height (3, 3), stride 2, and same padding to x, depth-wise.
Step10: Q10. Apply 5 kernels of height 3, stride 2, and valid padding to x.
Step11: Q11. Apply conv2d transpose with 5 kernels of width-height (3, 3), stride 2, and same padding to x.
Step12: Q12. Apply conv2d transpose with 5 kernels of width-height (3, 3), stride 2, and valid padding to x.
Step13: Q13. Apply max pooling and average pooling of window size 2, stride 1, and valid padding to x.
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import print_function
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
%matplotlib inline
from datetime import date
date.today()
author = "kyubyong. https://github.com/Kyubyong/tensorflow-exercises"
tf.__version__
np.__version__
_x = np.linspace(-10., 10., 1000)
x = tf.convert_to_tensor(_x)
relu = tf.nn.relu(x)
elu = tf.nn.elu(x)
softplus = tf.nn.softplus(x)
with tf.Session() as sess:
_relu, _elu, _softplus = sess.run([relu, elu, softplus])
plt.plot(_x, _relu, label='relu')
plt.plot(_x, _elu, label='elu')
plt.plot(_x, _softplus, label='softplus')
plt.legend(bbox_to_anchor=(0.5, 1.0))
plt.show()
_x = np.linspace(-10., 10., 1000)
x = tf.convert_to_tensor(_x)
sigmoid = tf.sigmoid(x)
tanh = tf.tanh(x)
with tf.Session() as sess:
_sigmoid, _tanh = sess.run([sigmoid, tanh])
plt.plot(_x, _sigmoid, label='sigmoid')
plt.plot(_x, _tanh, label='tanh')
plt.legend(bbox_to_anchor=(0.5, 1.0))
plt.grid()
plt.show()
_x = np.array([[1, 2, 4, 8], [2, 4, 6, 8]], dtype=np.float32)
x = tf.convert_to_tensor(_x)
out = tf.nn.softmax(x)
with tf.Session() as sess:
_out = sess.run(out)
print(_out)
assert np.allclose(np.sum(_out, axis=-1), 1)
_x = np.array([[1, 2, 4, 8], [2, 4, 6, 8]], dtype=np.float32)
print("_x =\n" , _x)
x = tf.convert_to_tensor(_x)
out = tf.nn.dropout(x, keep_prob=0.5)
with tf.Session() as sess:
_out = sess.run(out)
print("_out =\n", _out)
x = tf.random_normal([8, 10])
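# Q5's prompt text is not shown in this excerpt; a minimal fully connected
# sketch (assumption): project the (8, 10) batch onto 2 output units.
out = tf.layers.dense(x, units=2, activation=tf.nn.sigmoid)
print(out)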
tf.reset_default_graph()
x = tf.random_uniform(shape=(2, 3, 3, 3), dtype=tf.float32)
# Q7: 3 kernels of (2, 2), stride 1, dilation_rate 2, valid padding.
filter = tf.get_variable("filter", shape=(2, 2, 3, 3), dtype=tf.float32,
initializer=tf.random_uniform_initializer())
out = tf.nn.atrous_conv2d(x, filter, rate=2, padding='VALID')
init = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(init)
_out = sess.run(out)
print(_out.shape)
tf.reset_default_graph()
x = tf.random_uniform(shape=(4, 10, 10, 3), dtype=tf.float32)
filter = tf.get_variable("filter", shape=(3, 3, 3, 4), dtype=tf.float32,
initializer=tf.random_uniform_initializer())
out = tf.nn.conv2d(x, filter, strides=[1, 2, 2, 1], padding='SAME')
init = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(init)
_out = sess.run(out)
print(_out.shape)
tf.reset_default_graph()
x = tf.random_uniform(shape=(4, 10, 10, 5), dtype=tf.float32)
# Q9: depth-wise convolution with channel multiplier 4.
filter = tf.get_variable("filter", shape=(3, 3, 5, 4), dtype=tf.float32,
initializer=tf.random_uniform_initializer())
out = tf.nn.depthwise_conv2d(x, filter, strides=[1, 2, 2, 1], padding='SAME')
init = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(init)
_out = sess.run(out)
print(_out.shape)
tf.reset_default_graph()
x = tf.random_uniform(shape=(4, 10, 10, 5), dtype=tf.float32)
# The matching question text is missing from this excerpt; as an assumption,
# apply a plain conv2d with 4 kernels of (3, 3), stride 2, and same padding.
filter = tf.get_variable("filter", shape=(3, 3, 5, 4), dtype=tf.float32,
initializer=tf.random_uniform_initializer())
out = tf.nn.conv2d(x, filter, strides=[1, 2, 2, 1], padding='SAME')
init = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(init)
_out = sess.run(out)
print(_out.shape)
tf.reset_default_graph()
x = tf.random_uniform(shape=(4, 10, 5), dtype=tf.float32)
# Q10: conv1d filter layout is (width, in_channels, out_channels).
filter = tf.get_variable("filter", shape=(3, 5, 5), dtype=tf.float32,
initializer=tf.random_uniform_initializer())
out = tf.nn.conv1d(x, filter, stride=2, padding='VALID')
init = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(init)
_out = sess.run(out)
print(_out.shape)
tf.reset_default_graph()
x = tf.random_uniform(shape=(4, 5, 5, 4), dtype=tf.float32)
filter = tf.get_variable("filter", shape=(3, 3, 5, 4), dtype=tf.float32,
initializer=tf.random_uniform_initializer())
shp = x.get_shape().as_list()
output_shape = [shp[0], shp[1] * 2, shp[2] * 2, 5]
out = tf.nn.conv2d_transpose(x, filter, output_shape=output_shape, strides=[1, 2, 2, 1], padding='SAME')
init = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(init)
_out = sess.run(out)
print(_out.shape)
tf.reset_default_graph()
x = tf.random_uniform(shape=(4, 5, 5, 4), dtype=tf.float32)
filter = tf.get_variable("filter", shape=(3, 3, 5, 4), dtype=tf.float32,
initializer=tf.random_uniform_initializer())
shp = x.get_shape().as_list()
output_shape = [shp[0], shp[1] * 2 + 1, shp[2] * 2 + 1, 5]
out = tf.nn.conv2d_transpose(x, filter, output_shape=output_shape, strides=[1, 2, 2, 1], padding='VALID')
init = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(init)
_out = sess.run(out)
print(_out.shape)
_x = np.zeros((1, 3, 3, 3), dtype=np.float32)
_x[0, :, :, 0] = np.arange(1, 10, dtype=np.float32).reshape(3, 3)
_x[0, :, :, 1] = np.arange(10, 19, dtype=np.float32).reshape(3, 3)
_x[0, :, :, 2] = np.arange(19, 28, dtype=np.float32).reshape(3, 3)
print("1st channel of x =\n", _x[:, :, :, 0])
print("\n2nd channel of x =\n", _x[:, :, :, 1])
print("\n3rd channel of x =\n", _x[:, :, :, 2])
x = tf.constant(_x)
maxpool = tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 1, 1, 1], padding='VALID')
avgpool = tf.nn.avg_pool(x, ksize=[1, 2, 2, 1], strides=[1, 1, 1, 1], padding='VALID')
with tf.Session() as sess:
_maxpool, _avgpool = sess.run([maxpool, avgpool])
print("\n1st channel of max pooling =\n", _maxpool[:, :, :, 0])
print("\n2nd channel of max pooling =\n", _maxpool[:, :, :, 1])
print("\n3rd channel of max pooling =\n", _maxpool[:, :, :, 2])
print("\n1st channel of avg pooling =\n", _avgpool[:, :, :, 0])
print("\n2nd channel of avg pooling =\n", _avgpool[:, :, :, 1])
print("\n3rd channel of avg pooling =\n", _avgpool[:, :, :, 2])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1:
Step2: Stand-in for a value
Step3: Solution
Step4: Stand-in for column functions
Step5: Exercise
Step6: Note
Step7: Exercise
Step8: Stand-in for a table
Step9: A grouping compiles all years together with the average claim amounts (part 1 of the solution).
Step10: Exercises
|
<ASSISTANT_TASK:>
Python Code:
%load_ext sql
%sql mysql://steinam:steinam@localhost/versicherung_complete
%%sql
select Personalnummer, Name, Vorname
from Mitarbeiter
where Abteilung_ID =
( select ID from Abteilung
where Kuerzel = 'Schadensabwicklung' );
%%sql
select Personalnummer, Name, Vorname
from Mitarbeiter
where Abteilung_ID =
( select ID from Abteilung
where Kuerzel = 'ScAb' );
%%sql
SELECT ID, Datum, Ort, Schadenshoehe
from Schadensfall
where Schadenshoehe < (
select AVG(Schadenshoehe) from Schadensfall
);
%%sql
select sf.ID, sf.Datum, sf.Schadenshoehe, EXTRACT(YEAR from
sf.Datum) AS Jahr
from Schadensfall sf
where ABS(Schadenshoehe - (
select AVG(sf2.Schadenshoehe)
from Schadensfall sf2
where YEAR(sf2.Datum) = YEAR(sf.Datum)
)
) <= 300;
%%sql
select ID, Kennzeichen, Fahrzeugtyp_ID as TypID
from Fahrzeug
where Fahrzeugtyp_ID in(
select ID
from Fahrzeugtyp
where Hersteller_ID = (
select ID
from Fahrzeughersteller
where Name = 'Volkswagen' ) );
%%sql
select *
from Schadensfall
where ID in ( SELECT ID
from Schadensfall
where ( ABS(Schadenshoehe - (
select AVG(sf2.Schadenshoehe)
from Schadensfall sf2
where YEAR(sf2.Datum) = 2008
)
) <= 300 )
and ( YEAR(Datum) = 2008 )
);
%%sql
SELECT sf.ID, sf.Datum, sf.Schadenshoehe, temp.Jahr,
temp.Durchschnitt
FROM Schadensfall sf,
( SELECT AVG(sf2.Schadenshoehe) AS Durchschnitt,
EXTRACT(YEAR FROM sf2.Datum) as Jahr
FROM Schadensfall sf2
group by EXTRACT(YEAR FROM sf2.Datum)
) temp
WHERE temp.Jahr = EXTRACT(YEAR FROM sf.Datum)
and ABS(Schadenshoehe - temp.Durchschnitt) <= 300;
%%sql
SELECT Fahrzeug.ID, Kennzeichen, Typen.ID As TYP, Typen.Bezeichnung
FROM Fahrzeug,
(SELECT ID, Bezeichnung
FROM Fahrzeugtyp
WHERE Hersteller_ID =
(SELECT ID
FROM Fahrzeughersteller
WHERE Name = 'Volkswagen' )
) Typen
WHERE Fahrzeugtyp_ID = Typen.ID;
%sql mysql://steinam:steinam@localhost/so_2016
%%sql
-- Original Roth
Select Kurs.KursID, Kursart.Bezeichnung,
Kurs.DatumUhrzeitBeginn,
((count(KundeKurs.KundenID)/Kursart.TeilnehmerMax) * 100) as Auslastung
from Kurs, Kursart, Kundekurs
where KundeKurs.KursID = Kurs.KursID and Kursart.KursartID = Kurs.KursartID
group by Kurs.KursID, Kurs.DatumUhrzeitBeginn, Kursart.Bezeichnung
having Auslastung < 50;
%%sql
select kursid from kurs
where
((select teilnehmerMax from kursart where kursart.kursartId = kurs.kursartId) * 0.5)
>
(select count(KundeKurs.kundenid) from KundeKurs where KundeKurs.KursID = kurs.KursID);
%%sql
Select Kurs.KursID, Kursart.Bezeichnung,
Kurs.DatumUhrzeitBeginn,
((count(KundeKurs.KundenID)/Kursart.TeilnehmerMax) * 100) as Auslastung
from Kurs, Kursart, Kundekurs
where KundeKurs.KursID = Kurs.KursID and Kursart.KursartID = Kurs.KursartID
group by Kurs.KursID, Kurs.DatumUhrzeitBeginn, Kursart.Bezeichnung
having Auslastung < 50
%%sql
Select Kurs.KursID, Kursart.Bezeichnung,
Kurs.DatumUhrzeitBeginn,
((count(KundeKurs.KundenID)/Kursart.TeilnehmerMax) * 100) as Auslastung
from kurs left join kundekurs
on kurs.`kursid` = kundekurs.`Kursid`
inner join kursart
on `kurs`.`kursartid` = `kursart`.`kursartid`
group by Kurs.KursID, Kurs.DatumUhrzeitBeginn, Kursart.Bezeichnung
having Auslastung < 50
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: MarkerCluster
Step2: Terminator
Step3: BoatMarker
Step4: BeautifyIcon
Step5: Fullscreen
Step7: Timestamped GeoJSON
Step8: FeatureGroupSubGroup
Step9: Marker clusters across groups
Step10: Minimap
Step11: DualMap
Step12: Locate control
Step13: SemiCircle
Step14: Geocoder
|
<ASSISTANT_TASK:>
Python Code:
import folium
from folium import plugins
m = folium.Map([45, 3], zoom_start=4)
plugins.ScrollZoomToggler().add_to(m)
m
import numpy as np
N = 100
data = np.array(
[
np.random.uniform(low=35, high=60, size=N), # Random latitudes in Europe.
np.random.uniform(low=-12, high=30, size=N), # Random longitudes in Europe.
]
).T
popups = [str(i) for i in range(N)] # Popups texts are simple numbers.
m = folium.Map([45, 3], zoom_start=4)
plugins.MarkerCluster(data, popups=popups).add_to(m)
m
m = folium.Map([45, 3], zoom_start=1)
plugins.Terminator().add_to(m)
m
m = folium.Map([30, 0], zoom_start=3)
plugins.BoatMarker(
location=(34, -43), heading=45, wind_heading=150, wind_speed=45, color="#8f8"
).add_to(m)
plugins.BoatMarker(
location=(46, -30), heading=-20, wind_heading=46, wind_speed=25, color="#88f"
).add_to(m)
m
m = folium.Map([45.5, -122], zoom_start=3)
icon_plane = plugins.BeautifyIcon(
icon="plane", border_color="#b3334f", text_color="#b3334f", icon_shape="triangle"
)
icon_number = plugins.BeautifyIcon(
border_color="#00ABDC",
text_color="#00ABDC",
number=10,
inner_icon_style="margin-top:0;",
)
folium.Marker(location=[46, -122], popup="Portland, OR", icon=icon_plane).add_to(m)
folium.Marker(location=[50, -122], popup="Portland, OR", icon=icon_number).add_to(m)
m
m = folium.Map(location=[41.9, -97.3], zoom_start=4)
plugins.Fullscreen(
position="topright",
title="Expand me",
title_cancel="Exit me",
force_separate_button=True,
).add_to(m)
m
m = folium.Map(location=[35.68159659061569, 139.76451516151428], zoom_start=16)
# Lon, Lat order.
lines = [
{
"coordinates": [
[139.76451516151428, 35.68159659061569],
[139.75964426994324, 35.682590062684206],
],
"dates": ["2017-06-02T00:00:00", "2017-06-02T00:10:00"],
"color": "red",
},
{
"coordinates": [
[139.75964426994324, 35.682590062684206],
[139.7575843334198, 35.679505030038506],
],
"dates": ["2017-06-02T00:10:00", "2017-06-02T00:20:00"],
"color": "blue",
},
{
"coordinates": [
[139.7575843334198, 35.679505030038506],
[139.76337790489197, 35.678040905014065],
],
"dates": ["2017-06-02T00:20:00", "2017-06-02T00:30:00"],
"color": "green",
"weight": 15,
},
{
"coordinates": [
[139.76337790489197, 35.678040905014065],
[139.76451516151428, 35.68159659061569],
],
"dates": ["2017-06-02T00:30:00", "2017-06-02T00:40:00"],
"color": "#FFFFFF",
},
]
features = [
{
"type": "Feature",
"geometry": {
"type": "LineString",
"coordinates": line["coordinates"],
},
"properties": {
"times": line["dates"],
"style": {
"color": line["color"],
"weight": line["weight"] if "weight" in line else 5,
},
},
}
for line in lines
]
plugins.TimestampedGeoJson(
{
"type": "FeatureCollection",
"features": features,
},
period="PT1M",
add_last_point=True,
).add_to(m)
m
table = """
<table style='width:100%'>
<tr>
<th>Firstname</th>
<th>Lastname</th>
<th>Age</th>
</tr>
<tr>
<td>Jill</td>
<td>Smith</td>
<td>50</td>
</tr>
<tr>
<td>Eve</td>
<td>Jackson</td>
<td>94</td>
</tr>
</table>
"""
points = [
{
"time": "2017-06-02",
"popup": "<h1>address1</h1>",
"coordinates": [-2.548828, 51.467697],
},
{
"time": "2017-07-02",
"popup": "<h2 style='color:blue;'>address2<h2>",
"coordinates": [-0.087891, 51.536086],
},
{
"time": "2017-08-02",
"popup": "<h2 style='color:orange;'>address3<h2>",
"coordinates": [-6.240234, 53.383328],
},
{
"time": "2017-09-02",
"popup": "<h2 style='color:green;'>address4<h2>",
"coordinates": [-1.40625, 60.261617],
},
{"time": "2017-10-02", "popup": table, "coordinates": [-1.516113, 53.800651]},
]
features = [
{
"type": "Feature",
"geometry": {
"type": "Point",
"coordinates": point["coordinates"],
},
"properties": {
"time": point["time"],
"popup": point["popup"],
"id": "house",
"icon": "marker",
"iconstyle": {
"iconUrl": "https://leafletjs.com/examples/geojson/baseball-marker.png",
"iconSize": [20, 20],
},
},
}
for point in points
]
features.append(
{
"type": "Feature",
"geometry": {
"type": "LineString",
"coordinates": [
[-2.548828, 51.467697],
[-0.087891, 51.536086],
[-6.240234, 53.383328],
[-1.40625, 60.261617],
[-1.516113, 53.800651],
],
},
"properties": {
"popup": "Current address",
"times": [
"2017-06-02",
"2017-07-02",
"2017-08-02",
"2017-09-02",
"2017-10-02",
],
"icon": "circle",
"iconstyle": {
"fillColor": "green",
"fillOpacity": 0.6,
"stroke": "false",
"radius": 13,
},
"style": {"weight": 0},
"id": "man",
},
}
)
m = folium.Map(
location=[56.096555, -3.64746],
tiles="cartodbpositron",
zoom_start=5,
)
plugins.TimestampedGeoJson(
{"type": "FeatureCollection", "features": features},
period="P1M",
add_last_point=True,
auto_play=False,
loop=False,
max_speed=1,
loop_button=True,
date_options="YYYY/MM/DD",
time_slider_drag_update=True,
duration="P2M",
).add_to(m)
m
m = folium.Map(location=[0, 0], zoom_start=6)
fg = folium.FeatureGroup(name="groups")
m.add_child(fg)
g1 = plugins.FeatureGroupSubGroup(fg, "group1")
m.add_child(g1)
g2 = plugins.FeatureGroupSubGroup(fg, "group2")
m.add_child(g2)
folium.Marker([-1, -1]).add_to(g1)
folium.Marker([1, 1]).add_to(g1)
folium.Marker([-1, 1]).add_to(g2)
folium.Marker([1, -1]).add_to(g2)
folium.LayerControl(collapsed=False).add_to(m)
m
m = folium.Map(location=[0, 0], zoom_start=6)
mcg = folium.plugins.MarkerCluster(control=False)
m.add_child(mcg)
g1 = folium.plugins.FeatureGroupSubGroup(mcg, "group1")
m.add_child(g1)
g2 = folium.plugins.FeatureGroupSubGroup(mcg, "group2")
m.add_child(g2)
folium.Marker([-1, -1]).add_to(g1)
folium.Marker([1, 1]).add_to(g1)
folium.Marker([-1, 1]).add_to(g2)
folium.Marker([1, -1]).add_to(g2)
folium.LayerControl(collapsed=False).add_to(m)
m
m = folium.Map(location=(30, 20), zoom_start=4)
minimap = plugins.MiniMap()
m.add_child(minimap)
m
m = plugins.DualMap(location=(52.1, 5.1), tiles=None, zoom_start=8)
folium.TileLayer("cartodbpositron").add_to(m.m2)
folium.TileLayer("openstreetmap").add_to(m)
fg_both = folium.FeatureGroup(name="markers_both").add_to(m)
fg_1 = folium.FeatureGroup(name="markers_1").add_to(m.m1)
fg_2 = folium.FeatureGroup(name="markers_2").add_to(m.m2)
icon_red = folium.Icon(color="red")
folium.Marker((52, 5), tooltip="both", icon=icon_red).add_to(fg_both)
folium.Marker((52.4, 5), tooltip="left").add_to(fg_1)
folium.Marker((52, 5.4), tooltip="right").add_to(fg_2)
folium.LayerControl(collapsed=False).add_to(m)
m
m = folium.Map([41.97, 2.81])
plugins.LocateControl().add_to(m)
# If you want get the user device position after load the map, set auto_start=True
plugins.LocateControl(auto_start=True).add_to(m)
m
m = folium.Map([45, 3], zoom_start=5)
plugins.SemiCircle(
(45, 3),
radius=400000,
start_angle=50,
stop_angle=200,
color="green",
fill_color="green",
opacity=0,
popup="start angle - 50 degrees, stop angle - 200 degrees",
).add_to(m)
plugins.SemiCircle(
(46.5, 9.5),
radius=200000,
direction=360,
arc=90,
color="red",
fill_color="red",
opacity=0,
popup="Direction - 0 degrees, arc 90 degrees",
).add_to(m)
m
m = folium.Map()
plugins.Geocoder().add_to(m)
m
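# To persist any of these maps as a standalone HTML page (hypothetical filename):
m.save("plugins_demo.html")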
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load the Dataset
Step2: Let's check for any null values.
Step3: Let's take a peek at the first and last five rows of the data for all columns.
Step4: Exploratory Data Analysis (EDA)
Step5: Lab Task 2
Step6: Training a Linear Regression Model
Step7: Train - Test - Split
Step8: Creating and Training the Model
Step9: Lab Task 3
Step10: Model Evaluation
Step11: Interpreting the coefficients
Step12: Residual Histogram
Step13: Regression Evaluation Metrics
|
<ASSISTANT_TASK:>
Python Code:
import os
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns # Seaborn is a Python data visualization library based on matplotlib.
%matplotlib inline
df_USAhousing = pd.read_csv('../USA_Housing_toy.csv')
# Show the first five row.
df_USAhousing.head()
# The isnull() method is used to check and manage NULL values in a data frame.
df_USAhousing.isnull().sum()
# Pandas describe() is used to view some basic statistical details of a data frame or a series of numeric values.
df_USAhousing.describe()
# Pandas info() function is used to get a concise summary of the dataframe.
df_USAhousing.info()
# TODO 1 -- your code goes here
sns.pairplot(df_USAhousing)
sns.displot(df_USAhousing['Price'])
# TODO 2 -- your code goes here
X = df_USAhousing[['Avg. Area Income', 'Avg. Area House Age', 'Avg. Area Number of Rooms',
'Avg. Area Number of Bedrooms', 'Area Population']]
y = df_USAhousing['Price']
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=101)
from sklearn.linear_model import LinearRegression
lm = LinearRegression()
# TODO 3 -- your code goes here
lm.fit(X_train, y_train)
# print the intercept
print(lm.intercept_)
coeff_df = pd.DataFrame(lm.coef_,X.columns,columns=['Coefficient'])
coeff_df
predictions = lm.predict(X_test)
plt.scatter(y_test,predictions)
sns.displot((y_test-predictions),bins=50);
from sklearn import metrics
print('MAE:', metrics.mean_absolute_error(y_test, predictions))
print('MSE:', metrics.mean_squared_error(y_test, predictions))
print('RMSE:', np.sqrt(metrics.mean_squared_error(y_test, predictions)))
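# A complementary goodness-of-fit measure (a sketch): the R^2 score.
print('R2:', metrics.r2_score(y_test, predictions))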
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load the Crisis Mapping Toolkit
Step2: Load a domain
Step3: Display the domain
Step4: A GUI should appear in a seperate window displaying the domain location. If the GUI does not appear, try restarting the IPython kernel and trying again. This is the default GUI used by the CMT. It is an enhanced version of the GUI provided with the Earth Engine Python API and behaves similarly to the Earth Engine online "playground" interface.
Step5: Classifier output
Step6: Interpreting results
|
<ASSISTANT_TASK:>
Python Code:
import sys
import os
import ee
# This script assumes your authentification credentials are stored as operatoring system
# environment variables.
__MY_SERVICE_ACCOUNT = os.environ.get('MY_SERVICE_ACCOUNT')
__MY_PRIVATE_KEY_FILE = os.environ.get('MY_PRIVATE_KEY_FILE')
# Initialize the Earth Engine object, using your authentication credentials.
ee.Initialize()
# Make sure that Python can find the CMT source files
CMT_INSTALL_FOLDER = '/home/smcmich1/repo/earthEngine/CrisisMappingToolkit/'
sys.path.append(CMT_INSTALL_FOLDER)
import cmt.util.evaluation
from cmt.mapclient_qt import centerMap, addToMap
import cmt.domain
domainPath = os.path.join(CMT_INSTALL_FOLDER, 'config/domains/modis/kashmore_2010_8.xml')
kashmore_domain = cmt.domain.Domain(domainPath)
import cmt.util.gui_util
cmt.util.gui_util.visualizeDomain(kashmore_domain)
from cmt.modis.flood_algorithms import *
# Select the algorithm to use and then call it
algorithm = DIFFERENCE
(alg, result) = detect_flood(kashmore_domain, algorithm)
# Get a color pre-associated with the algorithm, then draw it on the map
color = get_algorithm_color(algorithm)
addToMap(result.mask(result), {'min': 0, 'max': 1, 'opacity': 0.5, 'palette': '000000, ' + color}, alg, False)
precision, recall, eval_count, quality = cmt.util.evaluation.evaluate_approach(result, kashmore_domain.ground_truth, kashmore_domain.bounds, is_algorithm_fractional(algorithm))
print('For algorithm "%s", precision = %f and recall = %f' % (alg, precision, recall) )
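# Derived summary metric (a sketch): the F1 score combining precision and recall.
f1 = 2 * precision * recall / (precision + recall)
print('F1 score = %f' % f1)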
# Access a specific parameter listed in the domain file
kashmore_domain.algorithm_parameters['modis_diff_threshold']
# Call this function to get whatever digital elevation map is available.
dem = kashmore_domain.get_dem()
# All the sensors included in the domain are stored as a list
first_sensor = kashmore_domain.sensor_list[0]
# If you know the name of a sensor you can access it like this
modis_sensor = kashmore_domain.modis
# Then you can access individual sensor bands like this
one_band = modis_sensor.sur_refl_b03
# To get the EE image object containing all the bands, do this
all_bands = modis_sensor.image
# The sensor contains some other information,
# but only if the information is present in the XML files
# assumption: the sensor object exposes band_names alongside band_resolutions
first_band_name = modis_sensor.band_names[0]
first_band_resolution = modis_sensor.band_resolutions[first_band_name]
# Related domains have the same structure as the main domain
# and can be accessed like this
kashmore_domain.training_domain
kashmore_domain.unflooded_domain
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We can change the separator by passing the argument sep='<<separator>>'.
Step2: By default, print appends an end-of-line character after writing the given data.
Step3: String formatting
|
<ASSISTANT_TASK:>
Python Code:
print(2)
print("is even.")
print(2, "is even.")
print(1, 2, 3)
print(1, 2, 3, sep='|')
print(1)
print(2)
print(3)
print(1, end=' ')
print(2, end=' ')
print(3, end=' ')
print('%d is odd, %d is even' % (3, 4))
print('Hello %s!' % 'Pesho')
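# A modern equivalent using str.format:
print('{} is odd, {} is even'.format(3, 4))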
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now we'll start by invoking the GPIO class, which will identify our board and initialize the pins. We will use two input pins for scrolling through the slideshow. We default to the spidev device at <code>/dev/spidev0.0</code> for the MinnowBoard.
Step2: The following functions collect all the images in the specified directory and place them into a list. It will filter out all the non-image files in the directory. It will fail if no images are found.
Step3: Now we'll initialize the TFT LCD display and clear it.
Step4: This long infinite loop will work like so
|
<ASSISTANT_TASK:>
Python Code:
import time
import sys
import os
from PIL import Image
from pyDrivers.ada_lcd import *
import pyDrivers.ILI9341 as TFT
import Adafruit_GPIO as GPIO
import Adafruit_GPIO.SPI as SPI
myGPIO = GPIO.get_platform_gpio()
myGPIO.setup(12,GPIO.IN)
myGPIO.setup(16,GPIO.IN)
lcd = ADA_LCD()
lcd.clear()
SPI_PORT = 0
SPI_DEVICE = 0
SPEED = 16000000
DC = 10
RST = 14
imageList = []
rawList = os.listdir("/notebooks")
for i in range(0,len(rawList)):
if rawList[i].lower().endswith(('.png', '.jpg', '.jpeg', '.gif')):
imageList.append("/notebooks" + "/" + rawList[i])
if len(imageList)==0:
print "No images found!"
exit(1)
count = 0
print imageList
disp = TFT.ILI9341(DC, rst=RST, spi=SPI.SpiDev(SPI_PORT,SPI_DEVICE,SPEED))
disp.begin()
while True:
lcd.clear()
time.sleep(0.25)
message = " Image " + str(count+1) + " of " + str(len(imageList)) + "\n" + imageList[count][len(sys.argv[1]):]
lcd.message(message)
lcd.scroll()
try:
image = Image.open(imageList[count])
except(IOError):
lcd.clear()
time.sleep(0.25)
message = " ERR: " + str(count+1) + " of " + str(len(imageList)) + "\n" + imageList[count][len(sys.argv[1]):]
lcd.scroll()
lcd.message(message)
if(count == len(imageList)-1):
image = Image.open(imageList[0])
else:
image = Image.open(imageList[count+1])
image = image.rotate(90).resize((240, 320))
disp.display(image)
try:
while True:
if (myGPIO.input(12) != 1 and count != 0):
count = count - 1
break
if (myGPIO.input(16) != 1 and count != len(imageList)-1):
count = count + 1
break
except (KeyboardInterrupt):
lcd.clear()
lcd.message("Terminated")
print
exit(0)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This was developed using Python 3.6 (Anaconda) and TensorFlow version
Step2: Load Data
Step3: The MNIST data-set has now been loaded and consists of 70.000 images and class-numbers for the images. The data-set is split into 3 mutually exclusive sub-sets. We will only use the training and test-sets in this tutorial.
Step4: Copy some of the data-dimensions for convenience.
Step5: Helper-function for plotting images
Step6: Plot a few images to see if data is correct
Step7: Input Functions for the Estimator
Step8: This actually returns a function
Step9: Calling this function returns a tuple with TensorFlow ops for returning the input and output data
Step10: Similarly we need to create a function for reading the data for the test-set. Note that we only want to process these images once so num_epochs=1 and we do not want the images shuffled so shuffle=False.
Step11: An input-function is also needed for predicting the class of new data. As an example we just use a few images from the test-set.
Step12: The class-numbers are actually not used in the input-function as it is not needed for prediction. However, the true class-number is needed when we plot the images further below.
Step13: Pre-Made / Canned Estimator
Step14: You can have several input features which would then be combined in a list
Step15: In this example we want to use a 3-layer DNN with 512, 256 and 128 units respectively.
Step16: The DNNClassifier then constructs the neural network for us. We can also specify the activation function and various other parameters (see the docs). Here we just specify the number of classes and the directory where the checkpoints will be saved.
Step17: Training
Step18: Evaluation
Step19: Predictions
Step20: New Estimator
Step21: Create an Instance of the Estimator
Step22: We can then create an instance of the new Estimator.
Step23: Training
Step24: Evaluation
Step25: Predictions
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import tensorflow as tf
import numpy as np
tf.__version__
from mnist import MNIST
data = MNIST(data_dir="data/MNIST/")
print("Size of:")
print("- Training-set:\t\t{}".format(data.num_train))
print("- Validation-set:\t{}".format(data.num_val))
print("- Test-set:\t\t{}".format(data.num_test))
# The number of pixels in each dimension of an image.
img_size = data.img_size
# The images are stored in one-dimensional arrays of this length.
img_size_flat = data.img_size_flat
# Tuple with height and width of images used to reshape arrays.
img_shape = data.img_shape
# Number of classes, one class for each of 10 digits.
num_classes = data.num_classes
# Number of colour channels for the images: 1 channel for gray-scale.
num_channels = data.num_channels
def plot_images(images, cls_true, cls_pred=None):
assert len(images) == len(cls_true) == 9
# Create figure with 3x3 sub-plots.
fig, axes = plt.subplots(3, 3)
fig.subplots_adjust(hspace=0.3, wspace=0.3)
for i, ax in enumerate(axes.flat):
# Plot image.
ax.imshow(images[i].reshape(img_shape), cmap='binary')
# Show true and predicted classes.
if cls_pred is None:
xlabel = "True: {0}".format(cls_true[i])
else:
xlabel = "True: {0}, Pred: {1}".format(cls_true[i], cls_pred[i])
# Show the classes as the label on the x-axis.
ax.set_xlabel(xlabel)
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
# Get the first images from the test-set.
images = data.x_test[0:9]
# Get the true classes for those images.
cls_true = data.y_test_cls[0:9]
# Plot the images and labels using our helper-function above.
plot_images(images=images, cls_true=cls_true)
train_input_fn = tf.estimator.inputs.numpy_input_fn(
x={"x": np.array(data.x_train)},
y=np.array(data.y_train_cls),
num_epochs=None,
shuffle=True)
train_input_fn
train_input_fn()
test_input_fn = tf.estimator.inputs.numpy_input_fn(
x={"x": np.array(data.x_test)},
y=np.array(data.y_test_cls),
num_epochs=1,
shuffle=False)
some_images = data.x_test[0:9]
predict_input_fn = tf.estimator.inputs.numpy_input_fn(
x={"x": some_images},
num_epochs=1,
shuffle=False)
some_images_cls = data.y_test_cls[0:9]
feature_x = tf.feature_column.numeric_column("x", shape=img_shape)
feature_columns = [feature_x]
num_hidden_units = [512, 256, 128]
model = tf.estimator.DNNClassifier(feature_columns=feature_columns,
hidden_units=num_hidden_units,
activation_fn=tf.nn.relu,
n_classes=num_classes,
model_dir="./checkpoints_tutorial17-1/")
model.train(input_fn=train_input_fn, steps=2000)
result = model.evaluate(input_fn=test_input_fn)
result
print("Classification accuracy: {0:.2%}".format(result["accuracy"]))
predictions = model.predict(input_fn=predict_input_fn)
cls = [p['classes'] for p in predictions]
cls_pred = np.array(cls, dtype='int').squeeze()
cls_pred
plot_images(images=some_images,
cls_true=some_images_cls,
cls_pred=cls_pred)
def model_fn(features, labels, mode, params):
# Args:
#
# features: This is the x-arg from the input_fn.
# labels: This is the y-arg from the input_fn,
# see e.g. train_input_fn for these two.
# mode: Either TRAIN, EVAL, or PREDICT
# params: User-defined hyper-parameters, e.g. learning-rate.
# Reference to the tensor named "x" in the input-function.
x = features["x"]
# The convolutional layers expect 4-rank tensors
# but x is a 2-rank tensor, so reshape it.
net = tf.reshape(x, [-1, img_size, img_size, num_channels])
# First convolutional layer.
net = tf.layers.conv2d(inputs=net, name='layer_conv1',
filters=16, kernel_size=5,
padding='same', activation=tf.nn.relu)
net = tf.layers.max_pooling2d(inputs=net, pool_size=2, strides=2)
# Second convolutional layer.
net = tf.layers.conv2d(inputs=net, name='layer_conv2',
filters=36, kernel_size=5,
padding='same', activation=tf.nn.relu)
net = tf.layers.max_pooling2d(inputs=net, pool_size=2, strides=2)
# Flatten to a 2-rank tensor.
net = tf.contrib.layers.flatten(net)
# Eventually this should be replaced with:
# net = tf.layers.flatten(net)
# First fully-connected / dense layer.
# This uses the ReLU activation function.
net = tf.layers.dense(inputs=net, name='layer_fc1',
units=128, activation=tf.nn.relu)
# Second fully-connected / dense layer.
# This is the last layer so it does not use an activation function.
net = tf.layers.dense(inputs=net, name='layer_fc2',
units=10)
# Logits output of the neural network.
logits = net
# Softmax output of the neural network.
y_pred = tf.nn.softmax(logits=logits)
# Classification output of the neural network.
y_pred_cls = tf.argmax(y_pred, axis=1)
if mode == tf.estimator.ModeKeys.PREDICT:
# If the estimator is supposed to be in prediction-mode
# then use the predicted class-number that is output by
# the neural network. Optimization etc. is not needed.
spec = tf.estimator.EstimatorSpec(mode=mode,
predictions=y_pred_cls)
else:
# Otherwise the estimator is supposed to be in either
# training or evaluation-mode. Note that the loss-function
# is also required in Evaluation mode.
# Define the loss-function to be optimized, by first
# calculating the cross-entropy between the output of
# the neural network and the true labels for the input data.
# This gives the cross-entropy for each image in the batch.
cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels,
logits=logits)
# Reduce the cross-entropy batch-tensor to a single number
# which can be used in optimization of the neural network.
loss = tf.reduce_mean(cross_entropy)
# Define the optimizer for improving the neural network.
optimizer = tf.train.AdamOptimizer(learning_rate=params["learning_rate"])
# Get the TensorFlow op for doing a single optimization step.
train_op = optimizer.minimize(
loss=loss, global_step=tf.train.get_global_step())
# Define the evaluation metrics,
# in this case the classification accuracy.
metrics = \
{
"accuracy": tf.metrics.accuracy(labels, y_pred_cls)
}
# Wrap all of this in an EstimatorSpec.
spec = tf.estimator.EstimatorSpec(
mode=mode,
loss=loss,
train_op=train_op,
eval_metric_ops=metrics)
return spec
params = {"learning_rate": 1e-4}
model = tf.estimator.Estimator(model_fn=model_fn,
params=params,
model_dir="./checkpoints_tutorial17-2/")
model.train(input_fn=train_input_fn, steps=2000)
result = model.evaluate(input_fn=test_input_fn)
result
print("Classification accuracy: {0:.2%}".format(result["accuracy"]))
predictions = model.predict(input_fn=predict_input_fn)
cls_pred = np.array(list(predictions))
cls_pred
plot_images(images=some_images,
cls_true=some_images_cls,
cls_pred=cls_pred)
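# Optional check (a sketch): fraction of the 9 sample images that the
# custom estimator classified correctly.
print(np.mean(cls_pred == some_images_cls))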
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load the data.
Step2: Simple neural network with a couple of dense layers. The hidden_sizes list defines the hidden layers, in this case 2 hidden layers of 200 nodes.
Step3: Train and predict
|
<ASSISTANT_TASK:>
Python Code:
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation, Flatten
from keras.layers.convolutional import Convolution2D, MaxPooling2D
from keras.layers.normalization import BatchNormalization
from keras.layers.advanced_activations import PReLU
from keras.layers.recurrent import LSTM
from keras.utils import np_utils
from sklearn import preprocessing
import numpy as np
import csv
import pandas as pd
import sys
np.random.seed(1919)
train = pd.read_csv('data/train.csv')
test = pd.read_csv('data/test.csv')
y = train.Survived.values
train = train.drop(['Survived'], axis=1)
def modify_data(base_df):
new_df = pd.DataFrame()
new_df['Gender'] = base_df.Sex.map(lambda x:1 if x.lower() == 'female' else 0)
# apply functions to dataframe => Fare NaN
fares_by_class = base_df.groupby('Pclass').Fare.median()
def getFare(example):
if pd.isnull(example):
example['Fare'] = fares_by_class[example['Pclass']]
return example
new_df['Fare'] = base_df['Fare']
new_df['Family'] = (base_df.Parch + base_df.SibSp) > 0
new_df['Family'] = new_df['Family'].map(lambda x:1 if x else 0)
new_df['GenderFam'] = new_df['Gender']+new_df['Family']
new_df['Title'] = base_df.Name.map(lambda x:x.split(' ')[0])
new_df['Rich'] = base_df.Pclass == 1
return new_df
train = modify_data(train)
# TEST DATA
#test = pd.read_csv('titanic_test.csv', header=0) # Load the test file into a dataframe
ids = test['PassengerId'].values
test = modify_data(test)
train = train.fillna(-1)
test = test.fillna(-1)
for f in train.columns:
if train[f].dtype=='object':
lbl = preprocessing.LabelEncoder()
lbl.fit(list(train[f].values) + list(test[f].values))
train[f] = lbl.transform(list(train[f].values))
test[f] = lbl.transform(list(test[f].values))
X = train.values
dimof_input = X.shape[1]
dimof_output = len(set(y.flat))
y = np_utils.to_categorical(y, dimof_output)
test_x = test.values
batch_size = 20
hidden_sizes = [200, 200]
dropout = 0.5
countof_epoch = 200
verbose = 0
model = Sequential()
for i, s in enumerate(hidden_sizes):
if i:
model.add(Dense(s))
else:
model.add(Dense(s, input_shape=(dimof_input,)))
model.add(Activation('tanh'))
model.add(BatchNormalization())
model.add(Dropout(dropout))
model.add(Dense(dimof_output))
model.add(Activation('softmax'))
model.compile(loss='binary_crossentropy', optimizer="rmsprop")
model.fit(
X, y,
show_accuracy=True, #validation_split=0.2,
batch_size=batch_size, nb_epoch=countof_epoch, verbose=verbose)
# Evaluate
loss, accuracy = model.evaluate(X, y, show_accuracy=True, verbose=verbose)
print('loss: ', loss)
print('accuracy: ', accuracy)
print()
predict_x = test_x
predict_df = test
preds = model.predict(predict_x, batch_size=batch_size)
pred_arr = [p[0] for p in preds]
results = pd.DataFrame({"PassengerId":ids, 'Survived': pred_arr})
results['PassengerId'] = results['PassengerId'].astype('int')
# p[0] is the predicted probability of class 0 (did not survive),
# so a probability >= 0.5 maps to Survived = 0.
results.Survived = results.Survived.map(lambda x: 0 if x >= 0.5 else 1)
results.set_index("PassengerId")
print(results.Survived.sum())
results.to_csv('results_nn.csv', index=False)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Universal edges are handled as if they were many distinct existential edges from the point of view of scc_info, so the acceptance / rejection status is not always meaningful.
Step2: A corner case for the dot printer
|
<ASSISTANT_TASK:>
Python Code:
from IPython.display import display
import spot
spot.setup(show_default='.bas')
spot.automaton('''
HOA: v1
States: 2
Start: 0&1
AP: 2 "a" "b"
acc-name: Buchi
Acceptance: 1 Inf(0)
--BODY--
State: 0
[0] 0
[!0] 1
State: 1
[1] 1 {0}
--END--
''')
spot.automaton('''
HOA: v1
States: 2
Start: 0&1
AP: 2 "a" "b"
Acceptance: 1 Fin(0)
--BODY--
State: 0
[0] 0&1 {0}
State: 1
[1] 1
--END--
''')
spot.automaton('''
HOA: v1
States: 2
Start: 0&1
AP: 2 "a" "b"
Acceptance: 1 Fin(0)
--BODY--
State: 0
[0] 0 {0}
[!0] 1
State: 1
[1] 1&0
--END--
''')
spot.automaton('''
HOA: v1
States: 2
Start: 0
AP: 2 "a" "b"
Acceptance: 1 Fin(0)
--BODY--
State: 0
[0] 0
[!0] 1 {0}
State: 1
[1] 1&0
--END--
''')
spot.automaton('''
HOA: v1
States: 2
Start: 0
AP: 2 "a" "b"
Acceptance: 1 Fin(0)
--BODY--
State: 0
[0] 0 {0}
[!0] 1
State: 1
[1] 1&0 {0}
--END--
''')
for a in spot.automata('''
HOA: v1
States: 3
Start: 0
AP: 2 "a" "b"
Acceptance: 1 Fin(0)
--BODY--
State: 0
[0] 1&2
State: 1
[1] 1&2 {0}
State: 2
[1] 2
--END--
HOA: v1
States: 3
Start: 0
AP: 2 "a" "b"
Acceptance: 1 Fin(0)
--BODY--
State: 0
[0] 1&2
State: 1
[1] 1 {0}
State: 2
[1] 2
--END--
'''):
display(a)
a = spot.automaton('''
HOA: v1
States: 3
Start: 0&2
AP: 2 "a" "b"
Acceptance: 1 Fin(0)
spot.highlight.edges: 2 2
--BODY--
State: 0
[0] 1&2
State: 1
[1] 1&2 {0}
State: 2
[1] 1&2
--END--
''')
display(a, a.show('.basy'))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Training
Step2: The most common embarkation port is S, so let's assume everyone got on there.
Step3: From the Kaggle competition description, the error metric is percentage of correct predictions.
Step4: One good way to think of logistic regression is that it takes the output of a linear regression, and maps it to a probability value between 0 and 1. The mapping is done using the logit function. Passing any value through the logit function will map it to a value between 0 and 1 by "squeezing" the extreme values. This is perfect for us, because we only care about two outcomes.
Step5: Testing
Step6: The most common embarkation port is S, so let's assume everyone got on there.
Step7: We'll also need to replace a missing value in the Fare column.
Step8: generate a submission for the competition!
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
from matplotlib import pyplot
%matplotlib inline
titanic = pd.read_csv("train.csv")
titanic_test = pd.read_csv("test.csv")
titanic.shape
titanic.describe()
titanic.info()
titanic.head(3)
titanic["Age"] = titanic["Age"].fillna(titanic["Age"].median())
titanic.loc[titanic["Sex"] == "male", "Sex"] = 0
titanic.loc[titanic["Sex"] == "female", "Sex"] = 1
titanic.head(3)
titanic["Embarked"].unique()
titanic["Embarked"] = titanic["Embarked"].fillna('S')
x = titanic["Embarked"]
x = pd.get_dummies(titanic["Embarked"]).astype('int')
t = pd.concat([titanic, x], axis=1)
t.head(3)
# Import the linear regression class
from sklearn.linear_model import LinearRegression
# Sklearn also has a helper that makes it easy to do cross validation
from sklearn.cross_validation import KFold
# The columns we'll use to predict the target
predictors = ["Pclass", "Sex", "Age", "SibSp", "Parch", "Fare", "C", "Q", "S"]
t.shape[0]
# Initialize our algorithm class
alg = LinearRegression()
nfolds = 9
# Generate cross validation folds for the titanic dataset. It returns the row indices corresponding to train and test.
# We set random_state to ensure we get the same splits every time we run this.
kf = KFold(t.shape[0], n_folds=nfolds, random_state=1)
predictions = []
for train, test in kf:
# The predictors we're using the train the algorithm. Note how we only take the rows in the train folds.
train_predictors = (t[predictors].iloc[train,:])
# The target we're using to train the algorithm.
train_target = t["Survived"].iloc[train]
# Training the algorithm using the predictors and target.
alg.fit(train_predictors, train_target)
# We can now make predictions on the test fold
test_predictions = alg.predict(t[predictors].iloc[test,:])
predictions.append(test_predictions)
len(predictions)
predictions[2]
# The predictions are in three separate numpy arrays. Concatenate them into one.
# We concatenate them on axis 0, as they only have one axis.
predictions = np.concatenate(predictions, axis=0)
# Map predictions to outcomes (only possible outcomes are 1 and 0)
predictions[predictions > .5] = 1
predictions[predictions <=.5] = 0
accuracy = sum(predictions == titanic["Survived"]) / len(predictions)
accuracy
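# Equivalent check (a sketch) using scikit-learn's accuracy_score:
from sklearn.metrics import accuracy_score
print(accuracy_score(titanic["Survived"], predictions))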
from sklearn import cross_validation
from sklearn.linear_model import LogisticRegression
# Initialize our algorithm
alg = LogisticRegression(random_state=1)
# Compute the accuracy score for all the cross validation folds. (much simpler than what we did before!)
scores = cross_validation.cross_val_score(alg, t[predictors], t["Survived"], cv=nfolds)
# Take the mean of the scores (because we have one for each fold)
print(scores.mean())
titanic_test["Age"] = titanic_test["Age"].fillna(titanic["Age"].median())
titanic_test.loc[titanic_test["Sex"] == "male", "Sex"] = 0
titanic_test.loc[titanic_test["Sex"] == "female", "Sex"] = 1
titanic_test.head(3)
titanic_test["Embarked"].unique()
titanic_test["Embarked"] = titanic_test["Embarked"].fillna('S')
x = titanic_test["Embarked"]
x = pd.get_dummies(titanic_test["Embarked"]).astype('int')
t_test = pd.concat([titanic_test, x], axis=1)
t_test.head(3)
t_test["Fare"] = t_test["Fare"].fillna(np.mean(t_test["Fare"]))
t_test.head(3)
# Import the linear regression class
from sklearn.linear_model import LinearRegression
# Sklearn also has a helper that makes it easy to do cross validation
from sklearn.cross_validation import KFold
# The columns we'll use to predict the target
predictors = ["Pclass", "Sex", "Age", "SibSp", "Parch", "Fare", "C", "Q", "S"]
t.shape[0]
from sklearn.linear_model import LogisticRegression
# Initialize the algorithm class
alg = LogisticRegression(random_state=1)
# Train the algorithm using all the training data
alg.fit(t[predictors], t["Survived"])
# Make predictions using the test set.
predictions = alg.predict(t_test[predictors])
# Create a new dataframe with only the columns Kaggle wants from the dataset.
submission = pd.DataFrame({
"PassengerId": titanic_test["PassengerId"],
"Survived": predictions
})
submission.to_csv("submission.csv", index=False)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Get epochs
|
<ASSISTANT_TASK:>
Python Code:
# Author: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
#
# License: BSD (3-clause)
import matplotlib.pyplot as plt
import numpy as np
import mne
from mne.datasets import sample
from mne.beamformer import lcmv
print(__doc__)
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_raw-eve.fif'
fname_fwd = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif'
label_name = 'Aud-lh'
fname_label = data_path + '/MEG/sample/labels/%s.label' % label_name
subjects_dir = data_path + '/subjects'
event_id, tmin, tmax = 1, -0.2, 0.5
# Setup for reading the raw data
raw = mne.io.read_raw_fif(raw_fname, preload=True, proj=True)
raw.info['bads'] = ['MEG 2443', 'EEG 053'] # 2 bads channels
events = mne.read_events(event_fname)
# Set up pick list: EEG + MEG - bad channels (modify to your needs)
left_temporal_channels = mne.read_selection('Left-temporal')
picks = mne.pick_types(raw.info, meg=True, eeg=False, stim=True, eog=True,
exclude='bads', selection=left_temporal_channels)
# Pick the channels of interest
raw.pick_channels([raw.ch_names[pick] for pick in picks])
# Re-normalize our empty-room projectors, so they are fine after subselection
raw.info.normalize_proj()
# Read epochs
proj = False # already applied
epochs = mne.Epochs(raw, events, event_id, tmin, tmax,
baseline=(None, 0), preload=True, proj=proj,
reject=dict(grad=4000e-13, mag=4e-12, eog=150e-6))
evoked = epochs.average()
forward = mne.read_forward_solution(fname_fwd, surf_ori=True)
# Compute regularized noise and data covariances
noise_cov = mne.compute_covariance(epochs, tmin=tmin, tmax=0, method='shrunk')
data_cov = mne.compute_covariance(epochs, tmin=0.04, tmax=0.15,
method='shrunk')
plt.close('all')
pick_oris = [None, 'normal', 'max-power']
names = ['free', 'normal', 'max-power']
descriptions = ['Free orientation', 'Normal orientation', 'Max-power '
'orientation']
colors = ['b', 'k', 'r']
for pick_ori, name, desc, color in zip(pick_oris, names, descriptions, colors):
stc = lcmv(evoked, forward, noise_cov, data_cov, reg=0.01,
pick_ori=pick_ori)
# View activation time-series
label = mne.read_label(fname_label)
stc_label = stc.in_label(label)
plt.plot(1e3 * stc_label.times, np.mean(stc_label.data, axis=0), color,
hold=True, label=desc)
plt.xlabel('Time (ms)')
plt.ylabel('LCMV value')
plt.ylim(-0.8, 2.2)
plt.title('LCMV in %s' % label_name)
plt.legend()
plt.show()
# Plot last stc in the brain in 3D with PySurfer if available
brain = stc.plot(hemi='lh', subjects_dir=subjects_dir)
brain.set_data_time_index(180)
brain.show_view('lateral')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We will show an application of the SVD to image compression and noise reduction.
Step2: Question
Step3: Write a function that solves a system of equations Ax=b
Step4: A is a matrix that represents the system of equations $X_{1}+X_{2}=b_{1}$ and $b_{2}=0$; its image is $b = \begin{bmatrix}b_{1}\\0 \end{bmatrix}$. The solution is not unique, since there is a free variable.
Step5: In this case the matrix has a unique solution, since $b_{2}=\exp(-32)$, and the x's differ from those of the previous case.
Step6: By fitting an approximation of the form $sat\_score = \alpha + \beta \, study\_hours + \epsilon$, we can pose the minimization of squared errors as $$\min \epsilon^{2} = \min_{\hat{\alpha},\hat{\beta}} \sum_i \left(sat\_score_i - \hat{\alpha} - \hat{\beta}\, study\_hours_i\right)^2$$
|
<ASSISTANT_TASK:>
Python Code:
# Part two: applications in Python
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import time
from PIL import Image
## Open the image and convert it to grayscale
im = Image.open("/Users/usuario/Documents/MaestriaCD/Propedeutico/PropedeuticoDataScience2017/Tarea/Simpsons.png")
gris = im.convert("1")
plt.figure(figsize=(9, 6))
## Convert the image data into a matrix of values
mat_val = np.array(gris)
plt.title("Original image")
plt.imshow(mat_val, cmap='gray');
### Compute the SVD
U, D, V = np.linalg.svd(mat_val)
## Reconstruct and plot the full image
img_rec = np.matrix(U) * np.diag(D) * np.matrix(V)
plt.title("Image reconstructed with all singular values")
plt.imshow(img_rec, cmap='gray');
## Choosing k=100 to reconstruct the image with only k components of the SVD
k=100
rec = np.matrix(U[:,:k]) * np.diag(D[:k]) * np.matrix(V[:k,:])
plt.title("Image reconstructed with 100 singular values")
plt.imshow(rec, cmap='gray')
## Choosing k=150 to reconstruct the image with only k components of the SVD
k=150
rec = np.matrix(U[:,:k]) * np.diag(D[:k]) * np.matrix(V[:k,:])
plt.title("Image reconstructed with 150 singular values")
plt.imshow(rec, cmap='gray')
## Choosing k=200 to reconstruct the image with only k components of the SVD
k=200
rec = np.matrix(U[:,:k]) * np.diag(D[:k]) * np.matrix(V[:k,:])
plt.title("Image reconstructed with 200 singular values")
plt.imshow(rec, cmap='gray')
import numpy as np
def pseudo (A):
X =np.array(A)
U, D, V = np.linalg.svd(X, full_matrices=False)
V_t = np.transpose(V)
U_t = np.transpose(U)
D_diag=np.diag(D)
rows, col =D_diag.shape
D_inv = np.zeros((rows,col))
## Compute the inverse of D by inverting the values, using 0 instead of 1/0
for i in range(0,max(rows,col)):
if D_diag[i,i]!= 0 :
D_inv[i,i]=1/D_diag[i,i]
else :
D_inv[i,i]= 0
## Reconstruct the pseudoinverse of A
pseudo = np.dot(np.dot(V_t, D_inv), U_t)
return pseudo
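## Sanity check (a sketch): compare our implementation against NumPy's
## built-in pseudoinverse on a small demo matrix.
A_demo = [[1, 1], [0, 0]]
print(np.allclose(pseudo(A_demo), np.linalg.pinv(A_demo)))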
def solve (A,y):
A= np.array(A)
Y=np.array(y)
## Check the dimensions
rows, col =A.shape
vec_rows, vec_col = Y.shape
if rows != vec_rows:
raise Exception ("El tamaño de la matriz y el vector no coinciden")
else:
inv=pseudo(A)
solve= np.dot(inv, Y)
return (solve)
A=[[1,1],[0,0]]
b=[[1],[1]]
solve(A,b)
pseudo(A)
import math
A=[[1,1],[0,1*math.exp(-32)]]
print(pseudo(A))
solve(A,b)
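## Sanity check (an aside, not part of the original assignment): the hand-rolled
## pseudoinverse should match NumPy's built-in Moore-Penrose implementation on a
## well-conditioned rectangular matrix.
B = np.array([[1., 2.], [3., 4.], [5., 6.]])
print(np.allclose(pseudo(B), np.linalg.pinv(B)))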
##Script to download the .csv file from GitHub and convert it into a data frame
import numpy as np
import pandas as pd
import statsmodels.formula.api as sm
import matplotlib.pyplot as plt
#Script to download the file and convert it into a DataFrame with pandas
#url="/Users/usuario/Documents/MaestriaCD/Propedeutico/PropedeuticoDataScience2017/study_vs_sat.csv"
url="https://raw.githubusercontent.com/mauriciogtec/PropedeuticoDataScience2017/master/Tarea/study_vs_sat.csv"
data = pd.read_csv(url)  # read_csv already returns a DataFrame
##Define a function that returns a predicted sat_score for each value of study_hours
##
def prediction(S, alpha, beta):
    pred = np.zeros(len(S))
    for i in range(len(S)):
        pred[i] = alpha + beta * S[i]
    return pred
##We can start with arbitrary guesses for the parameters
alpha=-353.164879499
beta= 25.3264677779
S=data["study_hours"]
##Then use these values to make predictions with the function defined above
score_pred=prediction(S,alpha,beta)
print(score_pred)
##Build the design matrix: a column of ones for the intercept
##and study_hours as the second column
Origen= data['Origen'] = np.ones(( len(data), ))
X=data[["Origen","study_hours"]]
print(X)
###Compute X^+ * sat_score to obtain alpha and beta
X_inv=pseudo(X)
alpha_aprox,beta_aprox=np.dot(X_inv,data["sat_score"])
print(alpha_aprox, beta_aprox)
##Now apply the textbook OLS estimator formula (X'X)^-1 X'y to compute alpha and beta
X_t=np.transpose(X)
Sxx=np.dot(X_t,X)
Sxy=np.dot(X_t,data["sat_score"])
Sxx_inv= pseudo(Sxx)
alpha,beta= np.dot(Sxx_inv,Sxy)
print(alpha,beta)
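##Cross-check (an aside, not part of the original assignment): NumPy's least-squares
##solver should recover the same alpha and beta up to numerical precision.
coef, res, rank, sv = np.linalg.lstsq(X, data["sat_score"], rcond=None)
print(coef)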
##Visualize the actual data vs. the fitted predictions
alpha=353.164879499
beta= 25.3264677779
S=data["study_hours"]
prediccion=prediction(S, alpha,beta)
colors = ['red', 'blue']
pred = plt.plot(prediccion, 'bo', markersize=10)  # blue circles, size 10
val = plt.plot(data["sat_score"], 'ro', ms=10)
plt.legend((pred[0], val[0]),
           ('Predicted values', "Actual values"),
scatterpoints=1,
loc='lower left',
ncol=3,
fontsize=8)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load the report
Step2: Inspect the report on all the floating parameters
Step3: Inspect the report on a subset of the floating parameters
Step4: Inspect the report on another subset of floating parameters
|
<ASSISTANT_TASK:>
Python Code:
# coding: utf-8
import pickle
from pprint import pformat
import numpy as np
np.set_printoptions(precision=4)
np.set_printoptions(edgeitems=1)
with open("report.pkl", 'r') as pfile:
report = pickle.load(pfile)
print("Unconstraint")
print("-" * 10)
print(" - total of {0} studies".format(len(report.recons_im)))
print(" - fixed parameters are : {0}".format(report.fixed_params))
print(" - floatting parameters are : {0}".format(report.floatting_params))
for param in report.floatting_params:
print(" -- '{0}' takes values {1}".format(param,
report.get_list_params(param)))
for metric in report.metrics_funcs:
print(" - best score for {0} is {1}"
"obtained with parameters #{2}".format(metric.func_name,
report.best_score(metric),
report.best_index(metric)))
print(" - {0}".format(pformat(report.best_params(metric))))
print("Constraint #1")
print("-" * 10)
print("Now we fix those floatting parameters:")
key_fixed = report.floatting_params[:-1]
values_fixed = [[report.get_list_params(param)[0]]
for param in report.floatting_params[:-1]]
filter_ = dict(zip(key_fixed, values_fixed))
print(" -- {0}".format(filter_))
print("The remaning floatting parameters is:")
key_floatting = report.floatting_params[-1]
values_floatting = report.get_list_params(report.floatting_params[-1])
print(" -- {0}".format({key_floatting:values_floatting}))
for metric in report.metrics_funcs:
print(" - best score for "
"{0} is {1} obtained with:".format(metric.func_name,
report.best_score(metric, filter_)))
print(" - {0}".format(pformat(report.best_params(metric, filter_))))
print("Constraint #2")
print("-" * 10)
print("Now we fix those floatting parameters:")
key_fixed = report.floatting_params[1:]
values_fixed = [[report.get_list_params(param)[0]]
for param in report.floatting_params[1:]]
filter_ = dict(zip(key_fixed, values_fixed))
print(" -- {0}".format(filter_))
print("The remaning floatting parameters is:")
key_floatting = report.floatting_params[0]
values_floatting = report.get_list_params(report.floatting_params[0])
print(" -- {0}".format({key_floatting:values_floatting}))
for metric in report.metrics_funcs:
print(" - best score for "
"{0} is {1} obtained with:".format(metric.func_name,
report.best_score(metric, filter_)))
print(" - {0}".format(pformat(report.best_params(metric, filter_))))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Show versions for any diagnostics
Step2: Load dataset
Step3: Period of interest: 4 days during a normal week
Step4: Training
Step5: Set the period of interest for disaggregation
Step6: Disaggregate using Hart (Active data only)
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import pandas as pd
from os.path import join
from pylab import rcParams
import matplotlib.pyplot as plt
rcParams['figure.figsize'] = (13, 6)
plt.style.use('ggplot')
#import nilmtk
from nilmtk import DataSet, TimeFrame, MeterGroup, HDFDataStore
from nilmtk.disaggregate.hart_85 import Hart85
from nilmtk.disaggregate import CombinatorialOptimisation
from nilmtk.utils import print_dict, show_versions
from nilmtk.metrics import f1_score
#import seaborn as sns
#sns.set_palette("Set3", n_colors=12)
import warnings
warnings.filterwarnings("ignore") #suppress warnings, comment out if warnings required
#uncomment if required
#show_versions()
data_dir = '/Users/GJWood/nilm_gjw_data/HDF5/'
gjw = DataSet(join(data_dir, 'nilm_gjw_data.hdf5'))
print('loaded ' + str(len(gjw.buildings)) + ' buildings')
building_number=1
gjw.set_window('2015-06-01 00:00:00', '2015-06-05 00:00:00')
elec = gjw.buildings[building_number].elec
mains = elec.mains()
house = elec['fridge'] #only one meter so any selection will do
df = next(house.load()) #load the first chunk of data into a dataframe (Python 3 idiom)
df.info() #check that the data is what we want (optional)
#note the data has two columns and a time index
plotdata = df.loc['2015-06-01 00:00:00':'2015-06-05 00:00:00']
plotdata.plot()
plt.title("Raw Mains Usage")
plt.ylabel("Power (W)")
plt.xlabel("Time");
plt.scatter(plotdata[('power','active')],plotdata[('power','reactive')])
plt.title("Raw Mains Usage Signature Space")
plt.ylabel("Reactive Power (VAR)")
plt.xlabel("Active Power (W)");
h = Hart85()
h.train(mains,cols=[('power','active'),('power','reactive')],min_tolerance=100,noise_level=70,buffer_size=20,state_threshold=15)
h.centroids
plt.scatter(h.steady_states[('active average')],h.steady_states[('reactive average')])
plt.scatter(h.centroids[('power','active')],h.centroids[('power','reactive')],marker='x',c=(1.0, 0.0, 0.0))
plt.legend(['Steady states','Centroids'],loc=4)
plt.title("Training steady states Signature space")
plt.ylabel("Reactive average (VAR)")
plt.xlabel("Active average (W)");
labels = ['Centroid {0}'.format(i) for i in range(len(h.centroids))]
for label, x, y in zip(labels, h.centroids[('power','active')], h.centroids[('power','reactive')]):
plt.annotate(
label,
xy = (x, y), xytext = (-5, 5),
textcoords = 'offset points', ha = 'right', va = 'bottom',
bbox = dict(boxstyle = 'round,pad=0.5', fc = 'yellow', alpha = 0.5))
h.steady_states.head()
h.steady_states.tail()
h.model
ax = mains.plot()
h.steady_states['active average'].plot(style='o', ax = ax);
plt.ylabel("Power (W)")
plt.xlabel("Time");
#plt.show()
h.pair_df
gjw.set_window('2015-07-13 00:00:00','2015-07-14 00:00:00')
elec = gjw.buildings[building_number].elec
mains = elec.mains()
mains.plot()
ax = mains.plot()
h.steady_states['active average'].plot(style='o', ax = ax);
plt.ylabel("Power (W)")
plt.xlabel("Time");
disag_filename = join(data_dir, 'disag_gjw_hart.hdf5')
output = HDFDataStore(disag_filename, 'w')
h.disaggregate(mains,output,sample_period=1)
output.close()
ax = mains.plot()
h.steady_states['active average'].plot(style='o', ax = ax);
plt.ylabel("Power (W)")
plt.xlabel("Time");
disag_hart = DataSet(disag_filename)
disag_hart
disag_hart_elec = disag_hart.buildings[building_number].elec
disag_hart_elec
disag_hart_elec.mains()
h.centroids
h.model
h.steady_states
from nilmtk.metrics import f1_score
f1_hart = f1_score(disag_hart_elec, elec)
f1_hart.index = disag_hart_elec.get_labels(f1_hart.index)
f1_hart.plot(kind='barh')
plt.ylabel('appliance');
plt.xlabel('f-score');
plt.title("Hart");
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Univariate example
Step2: A confusion matrix shows that the predicted level $\tilde y$ aligns closely with the true label. Furthermore, there is never a two-level error, meaning $\tilde y=1$ when $y=3$ or vice versa.
Step3: Figure 1 shows the distribution of x, with the density of the different classes represented by a colored histogram and the underlying marginal probabilities learned from $\hat\theta_1, \hat\theta_2$, and $\hat\beta$. The predicted marginal probabilities align nicely with the true underlying densities, showing that appropriate slope and intercept thresholds have been learned.
Step4: As Figure 2 shows, the ordinal regression model has once again calculated reasonable slope and intercept coefficients, such that the model's decision boundaries align with our intuition of the underlying data distribution.
Step5: The table above shows that, on average, the ordinal regression model is only off by 0.2 of a quantile, whereas the multinomial model is off by 0.4 and the least-squares model is off by a whole level. Interestingly, the raw accuracy of the ordinal and multinomial models is virtually identical in these simulations. However, because the loss function does not penalize the multinomial model for being significantly off as opposed to just modestly off, it is more likely to make larger errors. Figure 3 shows that the ordinal regression model outperforms both the multinomial and linear regression models for every simulation iteration in terms of MAE.
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
from scipy.optimize import minimize
from sklearn.preprocessing import StandardScaler
def sig1(z): # sigma
return(1/(1+np.exp(-z)))
def sig2(z): # sigma'(z)
phat = sig1(z)
return(phat*(1-phat))
class y2ord(): # Convert ordinal to 1, 2, ... K
def __init__(self):
self.di = {}
def fit(self,y):
self.uy = np.sort(np.unique(y))
self.di = dict(zip(self.uy, np.arange(len(self.uy))+1))
def transform(self,y):
return(np.array([self.di[z] for z in y]))
def alpha2theta(alpha,K): # theta[t] = theta[t-1] + exp(alpha[t])
return(np.cumsum(np.append(alpha[0], np.exp(alpha[1:]))))
def theta2alpha(theta,K): # alpha[t] = log(theta[t] - theta[t-1])
return(np.append(theta[0],np.log(theta[1:] - theta[:-1])))
def alpha_beta_wrapper(alpha_beta, X, lb=20, ub=20):
K = len(alpha_beta) + 1
if X is not None:
K -= X.shape[1]
beta = alpha_beta[K - 1:]
else:
beta = np.array([0])
alpha = alpha_beta[:K - 1]
theta = alpha2theta(alpha, K)
theta = np.append(np.append(theta[0] - lb, theta), theta[-1] + ub)
return(alpha, theta, beta, K)
# Likelihood function
def nll_ordinal(alpha_beta, X, idx_y, lb=20, ub=20):
alpha, theta, beta, K = alpha_beta_wrapper(alpha_beta, X, lb, ub)
score = np.dot(X,beta)
ll = 0
for kk, idx in enumerate(idx_y):
ll += sum(np.log(sig1(theta[kk+1]-score[idx])-sig1(theta[kk]-score[idx])))
nll = -1*(ll / X.shape[0])
return(nll)
# Gradient wrapper
def gll_ordinal(alpha_beta, X, idx_y, lb=20, ub=20):
grad_alpha = gll_alpha(alpha_beta, X, idx_y)
grad_X = gll_beta(alpha_beta, X, idx_y)
return(np.append(grad_alpha,grad_X))
# gradient function for beta
def gll_beta(alpha_beta, X, idx_y, lb=20, ub=20):
alpha, theta, beta, K = alpha_beta_wrapper(alpha_beta, X, lb, ub)
score = np.dot(X, beta)
grad_X = np.zeros(X.shape[1])
for kk, idx in enumerate(idx_y): # kk = 0; idx=idx_y[kk]
den = sig1(theta[kk + 1] - score[idx]) - sig1(theta[kk] - score[idx])
num = -sig2(theta[kk + 1] - score[idx]) + sig2(theta[kk] - score[idx])
grad_X += np.dot(X[idx].T, num / den)
grad_X = -1 * grad_X / X.shape[0] # negative average of gradient
return(grad_X)
# gradient function for theta=exp(alpha)
def gll_alpha(alpha_beta, X, idx_y, lb=20, ub=20):
alpha, theta, beta, K = alpha_beta_wrapper(alpha_beta, X, lb, ub)
score = np.dot(X, beta)
grad_alpha = np.zeros(K - 1)
for kk in range(K-1):
idx_p, idx_n = idx_y[kk], idx_y[kk+1]
den_p = sig1(theta[kk + 1] - score[idx_p]) - sig1(theta[kk] - score[idx_p])
den_n = sig1(theta[kk + 2] - score[idx_n]) - sig1(theta[kk+1] - score[idx_n])
num_p, num_n = sig2(theta[kk + 1] - score[idx_p]), sig2(theta[kk + 1] - score[idx_n])
grad_alpha[kk] += sum(num_p/den_p) - sum(num_n/den_n)
grad_alpha = -1* grad_alpha / X.shape[0] # negative average of gradient
grad_alpha *= np.append(1, np.exp(alpha[1:])) # apply chain rule
return(grad_alpha)
# inference probabilities
def prob_ordinal(alpha_beta, X, lb=20, ub=20):
alpha, theta, beta, K = alpha_beta_wrapper(alpha_beta, X, lb, ub)
score = np.dot(X, beta)
phat = (np.atleast_2d(theta) - np.atleast_2d(score).T)
phat = sig1(phat[:, 1:]) - sig1(phat[:, :-1])
return(phat)
# Wrapper for training/prediction
class ordinal_reg():
def __init__(self,standardize=True):
self.standardize = standardize
def fit(self,data,lbls):
self.p = data.shape[1]
self.Xenc = StandardScaler().fit(data)
self.yenc = y2ord()
self.yenc.fit(y=lbls)
ytil = self.yenc.transform(lbls)
idx_y = [np.where(ytil == yy)[0] for yy in list(self.yenc.di.values())]
self.K = len(idx_y)
theta_init = np.array([(z + 1) / self.K for z in range(self.K - 1)])
theta_init = np.log(theta_init / (1 - theta_init))
alpha_init = theta2alpha(theta_init, self.K)
param_init = np.append(alpha_init, np.repeat(0, self.p))
self.alpha_beta = minimize(fun=nll_ordinal, x0=param_init, method='L-BFGS-B', jac=gll_ordinal,
args=(self.Xenc.transform(data), idx_y)).x
def predict(self,data):
phat = prob_ordinal(self.alpha_beta,self.Xenc.transform(data))
return(np.argmax(phat,axis=1)+1)
def predict_proba(self,data):
phat = prob_ordinal(self.alpha_beta,self.Xenc.transform(data))
return(phat)
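# Optional sanity check (an aside, not part of the original post): compare the
# analytic gradient from gll_ordinal with a central finite-difference estimate
# of nll_ordinal on a tiny synthetic problem; the gap should be ~1e-8 or smaller
# if the two implementations are consistent.
np.random.seed(0)
Xc = np.random.randn(30, 2)
yc = np.random.randint(1, 4, 30)                      # three ordinal levels
idx_c = [np.where(yc == k)[0] for k in (1, 2, 3)]
ab = np.append(theta2alpha(np.array([-0.5, 0.5]), 3), np.zeros(2))
g_analytic = gll_ordinal(ab, Xc, idx_c)
eps = 1e-6
g_numeric = np.array([(nll_ordinal(ab + eps * e, Xc, idx_c) -
                       nll_ordinal(ab - eps * e, Xc, idx_c)) / (2 * eps)
                      for e in np.eye(len(ab))])
print(np.max(np.abs(g_analytic - g_numeric)))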
import seaborn as sns
from matplotlib import pyplot as plt
from matplotlib import colors
n = 50
np.random.seed(1234)
mus_1d = np.arange(-2,2+1,2)
df_1d = pd.concat([pd.DataFrame({'x':0.75*np.random.randn(25)+mu,'mu':mu})
for mu in mus_1d]).reset_index(drop=True)
di_mu_1d = dict(zip(mus_1d,['lvls'+str(x+1) for x in range(len(mus_1d))]))
df_1d['lvl'] = df_1d.mu.map(di_mu_1d)
# Fit model
mdl_1d = ordinal_reg()
mdl_1d.fit(df_1d[['x']],df_1d.mu)
# Calculate accuracy
df_1d_prob = pd.DataFrame(mdl_1d.predict_proba(df_1d[['x']]),columns=list(di_mu_1d.values()))
df_1d_prob = df_1d_prob.assign(x=df_1d.x).melt('x',var_name='lvl',value_name='prob')
df_1d_acc = pd.crosstab(index=df_1d.lvl,columns=mdl_1d.predict(df_1d[['x']]))
df_1d_acc.index.name='Actual'
df_1d_acc.columns.name='Predicted'
print(df_1d_acc)
# Function to map prob over histogram
def dens_mapper(*args ,**kwargs):
ax = plt.gca()
frame = kwargs.pop('data')
sns.distplot(a=frame['x'].values, hist=True, kde=False, rug=True, ax=ax, **kwargs)
if frame.lvl.unique()=='lvls'+str(len(mus_1d)):
ax2 = ax.twinx()
sns.lineplot(data=df_1d_prob, x='x', y='prob',hue='lvl', legend=False, ax=ax2, **kwargs)
ax2.set_ylabel('Estimated probability',size=12)
ax2.yaxis.set_label_coords(1.1, 0.5)
g_1d = sns.FacetGrid(df_1d.sort_values('lvl'),hue='lvl',height=5,aspect=1.5,sharey=False)
g_1d.map_dataframe(dens_mapper)
g_1d.add_legend()
g_1d.fig.suptitle('Figure 1: Simulated ordinal data',size=14,weight='bold')
g_1d._legend.set_title('Levels')
g_1d.fig.subplots_adjust(top=0.9,right=0.8,left=0.1)
[z.set_text(t) for z,t in zip(g_1d._legend.texts,
pd.Series(list(di_mu_1d.values())).astype(str).str.replace('lvls',''))]
g_1d.set_ylabels('Density',size=12)
g_1d.set_xlabels('x')
np.random.seed(1234)
cov = np.array([[1,0.5],[0.5,1]])
mus_2d = np.arange(-3,3+1,2)
df_2d = pd.concat([pd.DataFrame(np.random.multivariate_normal(np.array([mu,mu]),cov,n),
                                columns=['x1','x2']).assign(mu=mu) for mu in mus_2d])
di_mu_2d = dict(zip(mus_2d,['lvls'+str(x+1) for x in range(len(mus_2d))]))
df_2d['lvl'] = df_2d.mu.map(di_mu_2d)
# Fit model
mdl_2d = ordinal_reg()
mdl_2d.fit(df_2d[['x1','x2']],df_2d.mu)
# Get accuracy
df_2d_acc = pd.crosstab(index=df_2d.lvl,columns=mdl_2d.predict(df_2d[['x1','x2']]))
df_2d_acc.index.name='Actual'
df_2d_acc.columns.name='Predicted'
print(df_2d_acc)
# Plot decision boundaries
x1, x2 = np.meshgrid(np.arange(df_2d.x1.min()-0.5, df_2d.x1.max()+0.5,0.1),
np.arange(df_2d.x2.min()-0.5, df_2d.x2.max()+0.5,0.1))
yhat_2d = mdl_2d.predict(np.c_[x1.ravel(), x2.ravel()]).reshape(x1.shape)
fig, ax = plt.subplots(1, figsize=(7, 5))
plt.pcolormesh(x1, x2, yhat_2d, cmap=colors.ListedColormap(sns.color_palette(n_colors=4)), alpha=0.5)
sns.scatterplot('x1','x2','lvl',data=df_2d, ax=ax)
fig.axes[0].legend(loc='right',bbox_to_anchor=(1.25,0.5))
fig.subplots_adjust(right=0.8)
[z.set_text(t) for z,t in zip(fig.axes[0].legend_.texts[1:],
pd.Series(list(di_mu_2d.values())).astype(str).str.replace('lvls',''))]
fig.axes[0].legend_.get_texts()[0].set_text('Levels')
fig.suptitle('Figure 2: Simulated 2D ordinal data',size=14,weight='bold')
from sklearn.datasets import load_boston
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.linear_model import LogisticRegression
X, y = load_boston(True)
y = np.log(y)
np.random.seed(1234)
y = y + np.random.randn(len(y))*(0.2*y.std())
nq = 10
thresholds = np.append(np.append(y.min()-1,np.quantile(y,np.arange(0,1,1/nq)[1:])),y.max()+1)
yq = pd.cut(x=y,bins=thresholds,right=True,labels=['q'+str(z+1) for z in range(nq)])
yord = yq.astype('category').codes+1
np.seterr(divide='ignore', invalid='ignore')
holder = []
nsim = 125
for ii in range(nsim):
# Do a train/test split (80/20)
ytrain, ytest, Xtrain, Xtest = train_test_split(yord, X, stratify=yord,test_size=0.2,random_state=ii)
# Ordinal regression model
mdl_ord = ordinal_reg()
mdl_ord.fit(Xtrain, ytrain)
# Linear regression
mdl_linreg = LinearRegression().fit(Xtrain, ytrain)
# Multinomial regression
mdl_multi = LogisticRegression(penalty='none',solver='lbfgs',max_iter=1000)
mdl_multi.fit(mdl_ord.Xenc.transform(Xtrain),ytrain)
# Make predictions
yhat_linreg = np.round(mdl_linreg.predict(Xtest)).astype(int)
yhat_multi = mdl_multi.predict(mdl_ord.Xenc.transform(Xtest))
yhat_ord = mdl_ord.predict(data=pd.DataFrame(Xtest))
    # Compute mean absolute error for each model (the analysis below reports MAE)
    mae_linreg = np.abs(yhat_linreg - ytest).mean()
    mae_multi = np.abs(yhat_multi - ytest).mean()
    mae_ord = np.abs(yhat_ord - ytest).mean()
    holder.append(pd.DataFrame({'ord':mae_ord,'multi':mae_multi,'linreg':mae_linreg},index=[ii]))
df_mae = pd.concat(holder).mean(axis=0).reset_index().rename(columns={'index':'mdl',0:'MAE'})
di_lbls = {'ord':'Ordinal','multi':'Multinomial','linreg':'Linear Regression'}
df_mae = df_mae.assign(mdl=lambda x: x.mdl.map(di_lbls))
print(np.round(df_mae,1))
df_diff = pd.concat(holder).melt('ord').assign(d_ord = lambda x: x.ord - x.value).rename(columns={'variable':'Model'})
df_diff = df_diff.assign(Model = lambda x: x.Model.map(di_lbls))
g = sns.FacetGrid(df_diff,hue='Model',col='Model',height=4,aspect=1.5,sharex=False)
g.map(sns.distplot,'d_ord',rug=True)
for ax in g.axes.flat:
ax.axvline(x=0,linestyle='--',c='black')
g.add_legend()
g.fig.suptitle(t='Figure 3: Distribution of difference in MAE across simulations',size=14,weight='bold')
g.fig.subplots_adjust(top=0.8)
g.set_xlabels('MAE(Ordinal)-MAE(Model)')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Note
Step2: Lesson
Step3: Project 1
Step4: We'll create three Counter objects, one for words from positive reviews, one for words from negative reviews, and one for all the words.
Step5: TODO
Step6: Run the following two cells to list the words used in positive reviews and negative reviews, respectively, ordered from most to least commonly used.
Step7: As you can see, common words like "the" appear very often in both positive and negative reviews. Instead of finding the most common words in positive or negative reviews, what you really want are the words found in positive reviews more often than in negative reviews, and vice versa. To accomplish this, you'll need to calculate the ratios of word usage between positive and negative reviews.
Step8: Examine the ratios you've calculated for a few words
Step9: Looking closely at the values you just calculated, we see the following
Step10: Examine the new ratios you've calculated for the same words from before
Step11: If everything worked, now you should see neutral words with values close to zero. In this case, "the" is near zero but slightly positive, so it was probably used in more positive reviews than negative reviews. But look at "amazing"'s ratio - it's above 1, showing it is clearly a word with positive sentiment. And "terrible" has a similar score, but in the opposite direction, so it's below -1. It's now clear that both of these words are associated with specific, opposing sentiments.
Step12: End of Project 1.
Step13: Project 2
Step14: Run the following cell to check your vocabulary size. If everything worked correctly, it should print 74074
Step15: Take a look at the following image. It represents the layers of the neural network you'll be building throughout this notebook. layer_0 is the input layer, layer_1 is a hidden layer, and layer_2 is the output layer.
Step16: TODO
Step17: Run the following cell. It should display (1, 74074)
Step18: layer_0 contains one entry for every word in the vocabulary, as shown in the above image. We need to make sure we know the index of each word, so run the following cell to create a lookup table that stores the index of every word.
Step20: TODO
Step21: Run the following cell to test updating the input layer with the first review. The indices assigned may not be the same as in the solution, but hopefully you'll see some non-zero values in layer_0.
Step23: TODO
Step24: Run the following two cells. They should print out'POSITIVE' and 1, respectively.
Step25: Run the following two cells. They should print out 'NEGATIVE' and 0, respectively.
Step29: End of Project 2.
Step30: Run the following cell to create a SentimentNetwork that will train on all but the last 1000 reviews (we're saving those for testing). Here we use a learning rate of 0.1.
Step31: Run the following cell to test the network's performance against the last 1000 reviews (the ones we held out from our training set).
Step32: Run the following cell to actually train the network. During training, it will display the model's accuracy repeatedly as it trains so you can see how well it's doing.
Step33: That most likely didn't train very well. Part of the reason may be because the learning rate is too high. Run the following cell to recreate the network with a smaller learning rate, 0.01, and then train the new network.
Step34: That probably wasn't much different. Run the following cell to recreate the network one more time with an even smaller learning rate, 0.001, and then train the new network.
Step35: With a learning rate of 0.001, the network should finally have started to improve during training. It's still not very good, but it shows that this solution has potential. We will improve it in the next lesson.
Step36: Project 4
Step37: Run the following cell to recreate the network and train it. Notice we've gone back to the higher learning rate of 0.1.
Step38: That should have trained much better than the earlier attempts. It's still not wonderful, but it should have improved dramatically. Run the following cell to test your model with 1000 predictions.
Step39: End of Project 4.
Step40: Project 5
Step41: Run the following cell to recreate the network and train it once again.
Step42: That should have trained much better than the earlier attempts. Run the following cell to test your model with 1000 predictions.
Step43: End of Project 5.
Step44: Project 6
Step45: Run the following cell to train your network with a small polarity cutoff.
Step46: And run the following cell to test its performance. It should be
Step47: Run the following cell to train your network with a much larger polarity cutoff.
Step48: And run the following cell to test its performance.
Step49: End of Project 6.
|
<ASSISTANT_TASK:>
Python Code:
def pretty_print_review_and_label(i):
print(labels[i] + "\t:\t" + reviews[i][:80] + "...")
g = open('reviews.txt','r') # What we know!
reviews = list(map(lambda x:x[:-1],g.readlines()))
g.close()
g = open('labels.txt','r') # What we WANT to know!
labels = list(map(lambda x:x[:-1].upper(),g.readlines()))
g.close()
len(reviews)
reviews[0]
labels[0]
print("labels.txt \t : \t reviews.txt\n")
pretty_print_review_and_label(2137)
pretty_print_review_and_label(12816)
pretty_print_review_and_label(6267)
pretty_print_review_and_label(21934)
pretty_print_review_and_label(5297)
pretty_print_review_and_label(4998)
from collections import Counter
import numpy as np
# Create three Counter objects to store positive, negative and total counts
positive_counts = Counter()
negative_counts = Counter()
total_counts = Counter()
# TODO: Loop over all the words in all the reviews and increment the counts in the appropriate counter objects
for i,r in enumerate(reviews):
words = r.split(' ')
positive = labels[i] == 'POSITIVE'
for w in words:
if positive:
positive_counts[w] += 1
else:
negative_counts[w] += 1
total_counts[w] += 1
# Examine the counts of the most common words in positive reviews
positive_counts.most_common()
# Examine the counts of the most common words in negative reviews
negative_counts.most_common()
# Create Counter object to store positive/negative ratios
pos_neg_ratios = Counter()
# TODO: Calculate the ratios of positive and negative uses of the most common words
# Consider words to be "common" if they've been used at least 100 times
for w in list(total_counts):
pos_neg_ratios[w] = positive_counts[w] / float(negative_counts[w]+1)
print("Pos-to-neg ratio for 'the' = {}".format(pos_neg_ratios["the"]))
print("Pos-to-neg ratio for 'amazing' = {}".format(pos_neg_ratios["amazing"]))
print("Pos-to-neg ratio for 'terrible' = {}".format(pos_neg_ratios["terrible"]))
# TODO: Convert ratios to logs
for w,ratio in pos_neg_ratios.items():
if ratio <1:
pos_neg_ratios[w] = -np.log(1/(ratio+0.01))
elif ratio >1:
pos_neg_ratios[w] = np.log(ratio)
print("Pos-to-neg ratio for 'the' = {}".format(pos_neg_ratios["the"]))
print("Pos-to-neg ratio for 'amazing' = {}".format(pos_neg_ratios["amazing"]))
print("Pos-to-neg ratio for 'terrible' = {}".format(pos_neg_ratios["terrible"]))
# words most frequently seen in a review with a "POSITIVE" label
pos_neg_ratios.most_common()
# words most frequently seen in a review with a "NEGATIVE" label
list(reversed(pos_neg_ratios.most_common()))[0:30]
# Note: Above is the code Andrew uses in his solution video,
# so we've included it here to avoid confusion.
# If you explore the documentation for the Counter class,
# you will see you could also find the 30 least common
# words like this: pos_neg_ratios.most_common()[:-31:-1]
from IPython.display import Image
review = "This was a horrible, terrible movie."
Image(filename='sentiment_network.png')
review = "The movie was excellent"
Image(filename='sentiment_network_pos.png')
# TODO: Create set named "vocab" containing all of the words from all of the reviews
vocab = set(total_counts)
vocab_size = len(vocab)
print(vocab_size)
from IPython.display import Image
Image(filename='sentiment_network_2.png')
# TODO: Create layer_0 matrix with dimensions 1 by vocab_size, initially filled with zeros
layer_0 = np.zeros((1,vocab_size))
layer_0.shape
from IPython.display import Image
Image(filename='sentiment_network.png')
# Create a dictionary of words in the vocabulary mapped to index positions
# (to be used in layer_0)
word2index = {}
for i,word in enumerate(vocab):
word2index[word] = i
# display the map of words to indices
word2index
def update_input_layer(review):
    """
    Modify the global layer_0 to represent the vector form of review.
    The element at a given index of layer_0 should represent
    how many times the given word occurs in the review.
    Args:
        review(string) - the string of the review
    Returns:
        None
    """
global layer_0
# clear out previous state by resetting the layer to be all 0s
layer_0 *= 0
# TODO: count how many times each word is used in the given review and store the results in layer_0
for w in review.split(' '):
index = word2index[w]
layer_0[0][index] += 1
update_input_layer(reviews[0])
layer_0
def get_target_for_label(label):
    """
    Convert a label to `0` or `1`.
    Args:
        label(string) - Either "POSITIVE" or "NEGATIVE".
    Returns:
        `0` or `1`.
    """
# TODO: Your code here
if label == 'POSITIVE':
return 1
else:
return 0
labels[0]
get_target_for_label(labels[0])
labels[1]
get_target_for_label(labels[1])
import time
import sys
import numpy as np
from collections import Counter
# Encapsulate our neural network in a class
class SentimentNetwork:
def __init__(self, reviews, labels, hidden_nodes = 10, learning_rate = 0.1):
        """
        Create a SentimentNetwork with the given settings
        Args:
            reviews(list) - List of reviews used for training
            labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
            hidden_nodes(int) - Number of nodes to create in the hidden layer
            learning_rate(float) - Learning rate to use while training
        """
# Assign a seed to our random number generator to ensure we get
# reproducable results during development
np.random.seed(1)
# process the reviews and their associated labels so that everything
# is ready for training
self.pre_process_data(reviews, labels)
# Build the network to have the number of hidden nodes and the learning rate that
# were passed into this initializer. Make the same number of input nodes as
# there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels):
positive_counts = Counter()
negative_counts = Counter()
total_counts = Counter()
for i,r in enumerate(reviews):
words = r.split(' ')
positive = labels[i] == 'POSITIVE'
for w in words:
if positive:
positive_counts[w] += 1
else:
negative_counts[w] += 1
total_counts[w] += 1
review_vocab = set(total_counts)
# TODO: populate review_vocab with all of the words in the given reviews
# Remember to split reviews into individual words
# using "split(' ')" instead of "split()".
# Convert the vocabulary set to a list so we can access words via indices
self.review_vocab = list(review_vocab)
label_vocab = set(labels)
# TODO: populate label_vocab with all of the words in the given labels.
# There is no need to split the labels because each one is a single word.
# Convert the label vocabulary set to a list so we can access labels via indices
self.label_vocab = list(label_vocab)
# Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
# Create a dictionary of words in the vocabulary mapped to index positions
self.word2index = {}
for i in range(len(self.review_vocab)):
self.word2index[self.review_vocab[i]] = i
# TODO: populate self.word2index with indices for all the words in self.review_vocab
# like you saw earlier in the notebook
# Create a dictionary of labels mapped to index positions
self.label2index = {}
for i in range(len(self.label_vocab)):
self.label2index[self.label_vocab[i]]= i
# TODO: do the same thing you did for self.word2index and self.review_vocab,
# but for self.label2index and self.label_vocab instead
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Store the number of nodes in input, hidden, and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Store the learning rate
self.learning_rate = learning_rate
# Initialize weights
# TODO: initialize self.weights_0_1 as a matrix of zeros. These are the weights between
# the input layer and the hidden layer.
self.weights_0_1 = np.zeros((input_nodes,hidden_nodes))
# TODO: initialize self.weights_1_2 as a matrix of random values.
# These are the weights between the hidden layer and the output layer.
self.weights_1_2 = np.random.normal(scale=1 / input_nodes ** .5,
size=hidden_nodes)
# TODO: Create the input layer, a two-dimensional matrix with shape
# 1 x input_nodes, with all values initialized to zero
self.layer_0 = np.zeros((1,input_nodes))
def update_input_layer(self,review):
# TODO: You can copy most of the code you wrote for update_input_layer
# earlier in this notebook.
#
# However, MAKE SURE YOU CHANGE ALL VARIABLES TO REFERENCE
# THE VERSIONS STORED IN THIS OBJECT, NOT THE GLOBAL OBJECTS.
# For example, replace "layer_0 *= 0" with "self.layer_0 *= 0"
self.layer_0 *=0
for w in review.split(' '):
index = self.word2index[w]
self.layer_0[0][index] += 1
pass
def get_target_for_label(self,label):
# TODO: Copy the code you wrote for get_target_for_label
# earlier in this notebook.
if label == 'POSITIVE':
return 1
else:
return 0
pass
def sigmoid(self,x):
# TODO: Return the result of calculating the sigmoid activation function
# shown in the lectures
return 1 / (1 + np.exp(-x))
pass
def sigmoid_output_2_derivative(self,output):
# TODO: Return the derivative of the sigmoid activation function,
# where "output" is the original output from the sigmoid fucntion
return output*(1-output)
pass
def train(self, training_reviews, training_labels):
        # make sure we have a matching number of reviews and labels
assert(len(training_reviews) == len(training_labels))
# Keep track of correct predictions to display accuracy during training
correct_so_far = 0
# Remember when we started for printing time statistics
start = time.time()
# loop through all the given reviews and run a forward and backward pass,
# updating weights for every item
for i in range(len(training_reviews)):
# TODO: Get the next review and its correct label
            review, label = (training_reviews[i], training_labels[i])
# TODO: Implement the forward pass through the network.
# That means use the given review to update the input layer,
# then calculate values for the hidden layer,
# and finally calculate the output layer.
#
# Do not use an activation function for the hidden layer,
# but use the sigmoid activation function for the output layer.
self.update_input_layer(review)
input_hidden = np.dot(self.layer_0[0], self.weights_0_1)
hidden_output = np.dot(input_hidden, self.weights_1_2)
output = self.sigmoid(hidden_output)
# TODO: Implement the back propagation pass here.
# That means calculate the error for the forward pass's prediction
# and update the weights in the network according to their
# contributions toward the error, as calculated via the
# gradient descent and back propagation algorithms you
# learned in class.
error = self.get_target_for_label(label) - output
output_error_term = error*self.sigmoid_output_2_derivative(output)
            # no activation on the hidden layer, so its error term is just the
            # back-propagated output error
            hidden_error_term = output_error_term * self.weights_1_2
            # gradient w.r.t. weights_1_2 uses the hidden-layer activations;
            # gradient w.r.t. weights_0_1 is the outer product of the input
            # vector and the hidden error term
            self.weights_1_2 += self.learning_rate * output_error_term * input_hidden
            self.weights_0_1 += self.learning_rate * np.outer(self.layer_0[0], hidden_error_term)
# TODO: Keep track of correct predictions. To determine if the prediction was
# correct, check that the absolute value of the output error
# is less than 0.5. If so, add one to the correct_so_far count.
if np.absolute(error) < 0.5:
correct_so_far += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the training process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
+ " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
        """
        Attempts to predict the labels for the given testing_reviews,
        and uses the test_labels to calculate the accuracy of those predictions.
        """
# keep track of how many correct predictions we make
correct = 0
# we'll time how many predictions per second we make
start = time.time()
# Loop through each of the given reviews and call run to predict
# its label.
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the prediction process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
+ " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
        """Returns a POSITIVE or NEGATIVE prediction for the given review."""
# TODO: Run a forward pass through the network, like you did in the
# "train" function. That means use the given review to
# update the input layer, then calculate values for the hidden layer,
# and finally calculate the output layer.
#
# Note: The review passed into this function for prediction
# might come from anywhere, so you should convert it
# to lower case prior to using it.
self.update_input_layer(review.lower())
input_hidden = np.dot(self.layer_0[0], self.weights_0_1)
hidden_output = np.dot(input_hidden, self.weights_1_2)
output = self.sigmoid(hidden_output)
if output >= 0.5:
return 'POSITIVE'
else:
return 'NEGATIVE'
# TODO: The output layer should now contain a prediction.
# Return `POSITIVE` for predictions greater-than-or-equal-to `0.5`,
# and `NEGATIVE` otherwise.
pass
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
mlp.test(reviews[-1000:],labels[-1000:])
mlp.train(reviews[:-1000],labels[:-1000])
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.001)
mlp.train(reviews[:-1000],labels[:-1000])
from IPython.display import Image
Image(filename='sentiment_network.png')
def update_input_layer(review):
global layer_0
# clear out previous state, reset the layer to be all 0s
layer_0 *= 0
for word in review.split(" "):
layer_0[0][word2index[word]] += 1
update_input_layer(reviews[0])
layer_0
review_counter = Counter()
for word in reviews[0].split(" "):
review_counter[word] += 1
review_counter.most_common()
# TODO: -Copy the SentimentNetwork class from Project 3 lesson
# -Modify it to reduce noise, like in the video
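# A sketch of the likely noise-reduction change (an assumption mirroring the lesson,
# not a verbatim copy of the graded solution): in update_input_layer, record word
# *presence* instead of word counts, so frequent filler words no longer dominate
# the input signal, e.g.
#     self.layer_0[0][self.word2index[word]] = 1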
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
mlp.train(reviews[:-1000],labels[:-1000])
mlp.test(reviews[-1000:],labels[-1000:])
Image(filename='sentiment_network_sparse.png')
layer_0 = np.zeros(10)
layer_0
layer_0[4] = 1
layer_0[9] = 1
layer_0
weights_0_1 = np.random.randn(10,5)
layer_0.dot(weights_0_1)
indices = [4,9]
layer_1 = np.zeros(5)
for index in indices:
layer_1 += (1 * weights_0_1[index])
layer_1
Image(filename='sentiment_network_sparse_2.png')
layer_1 = np.zeros(5)
for index in indices:
layer_1 += (weights_0_1[index])
layer_1
# TODO: -Copy the SentimentNetwork class from Project 4 lesson
# -Modify it according to the above instructions
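# A sketch of the likely efficiency change (an assumption mirroring the lesson): skip
# the sparse layer_0 multiply entirely and sum only the weight rows for the words
# actually present in the review, e.g.
#     self.layer_1 *= 0
#     for index in set(self.word2index[w] for w in review.split(' ')):
#         self.layer_1 += self.weights_0_1[index]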
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
mlp.train(reviews[:-1000],labels[:-1000])
mlp.test(reviews[-1000:],labels[-1000:])
Image(filename='sentiment_network_sparse_2.png')
# words most frequently seen in a review with a "POSITIVE" label
pos_neg_ratios.most_common()
# words most frequently seen in a review with a "NEGATIVE" label
list(reversed(pos_neg_ratios.most_common()))[0:30]
from bokeh.models import ColumnDataSource, LabelSet
from bokeh.plotting import figure, show, output_file
from bokeh.io import output_notebook
output_notebook()
hist, edges = np.histogram(list(map(lambda x:x[1],pos_neg_ratios.most_common())), density=True, bins=100, normed=True)
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="Word Positive/Negative Affinity Distribution")
p.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], line_color="#555555")
show(p)
frequency_frequency = Counter()
for word, cnt in total_counts.most_common():
frequency_frequency[cnt] += 1
hist, edges = np.histogram(list(map(lambda x:x[1],frequency_frequency.most_common())), density=True, bins=100, normed=True)
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="The frequency distribution of the words in our corpus")
p.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], line_color="#555555")
show(p)
# TODO: -Copy the SentimentNetwork class from Project 5 lesson
# -Modify it according to the above instructions
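# A sketch of the likely vocabulary filtering (an assumption mirroring the lesson):
# when building review_vocab, keep a word only if it is both frequent and polarized, e.g.
#     if total_counts[word] > min_count and abs(pos_neg_ratios[word]) >= polarity_cutoff:
#         review_vocab.add(word)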
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=20,polarity_cutoff=0.05,learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
mlp.test(reviews[-1000:],labels[-1000:])
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=20,polarity_cutoff=0.8,learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
mlp.test(reviews[-1000:],labels[-1000:])
mlp_full = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=0,polarity_cutoff=0,learning_rate=0.01)
mlp_full.train(reviews[:-1000],labels[:-1000])
Image(filename='sentiment_network_sparse.png')
def get_most_similar_words(focus = "horrible"):
most_similar = Counter()
for word in mlp_full.word2index.keys():
most_similar[word] = np.dot(mlp_full.weights_0_1[mlp_full.word2index[word]],mlp_full.weights_0_1[mlp_full.word2index[focus]])
return most_similar.most_common()
get_most_similar_words("excellent")
get_most_similar_words("terrible")
import matplotlib.colors as colors
words_to_visualize = list()
for word, ratio in pos_neg_ratios.most_common(500):
if(word in mlp_full.word2index.keys()):
words_to_visualize.append(word)
for word, ratio in list(reversed(pos_neg_ratios.most_common()))[0:500]:
if(word in mlp_full.word2index.keys()):
words_to_visualize.append(word)
pos = 0
neg = 0
colors_list = list()
vectors_list = list()
for word in words_to_visualize:
if word in pos_neg_ratios.keys():
vectors_list.append(mlp_full.weights_0_1[mlp_full.word2index[word]])
if(pos_neg_ratios[word] > 0):
pos+=1
colors_list.append("#00ff00")
else:
neg+=1
colors_list.append("#000000")
from sklearn.manifold import TSNE
tsne = TSNE(n_components=2, random_state=0)
words_top_ted_tsne = tsne.fit_transform(vectors_list)
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="vector T-SNE for most polarized words")
source = ColumnDataSource(data=dict(x1=words_top_ted_tsne[:,0],
x2=words_top_ted_tsne[:,1],
names=words_to_visualize))
p.scatter(x="x1", y="x2", size=8, source=source,color=colors_list)
word_labels = LabelSet(x="x1", y="x2", text="names", y_offset=6,
text_font_size="8pt", text_color="#555555",
source=source, text_align='center')
p.add_layout(word_labels)
show(p)
# green indicates positive words, black indicates negative words
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2.0 - Feature Generation
Step2: 3.0 - Generate plots of each feature
Step3: 4.0 - Train model using RandomForestClassifier
Step4: 5.0 - Predict on test data
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib notebook
import pandas as pd
import numpy as np
from sklearn import preprocessing
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import KFold, train_test_split  # sklearn.cross_validation was removed in modern scikit-learn
from sklearn import metrics
from sklearn.metrics import confusion_matrix
from classification_utilities import display_cm, display_adj_cm
filename = 'training_data.csv'
training_data = pd.read_csv(filename)
## Create a difference vector for each feature e.g. x1-x2, x1-x3... x2-x3...
# order features in depth.
feature_vectors = training_data.drop(['Formation', 'Well Name','Facies'], axis=1)
feature_vectors = feature_vectors[['Depth','GR', 'ILD_log10', 'DeltaPHI', 'PHIND', 'PE']]
def difference_vector(feature_vectors):
length = len(feature_vectors['Depth'])
df_temp = np.zeros((25, length))
for i in range(0,int(len(feature_vectors['Depth']))):
vector_i = feature_vectors.iloc[i,:]
vector_i = vector_i[['GR', 'ILD_log10', 'DeltaPHI', 'PHIND', 'PE']]
for j, value_j in enumerate(vector_i):
for k, value_k in enumerate(vector_i):
differ_j_k = value_j - value_k
df_temp[5*j+k, i] = np.abs(differ_j_k)
return df_temp
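# A vectorized alternative to the loop above (a sketch, not used below): broadcasting
# builds all pairwise absolute differences at once and matches difference_vector's
# (25, n_samples) layout.
vals = feature_vectors[['GR', 'ILD_log10', 'DeltaPHI', 'PHIND', 'PE']].values
pairwise = np.abs(vals[:, :, None] - vals[:, None, :])   # shape (n_samples, 5, 5)
assert np.allclose(pairwise.reshape(len(vals), 25).T, difference_vector(feature_vectors))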
def diff_vec2frame(feature_vectors, df_temp):
heads = feature_vectors.columns[1::]
for i in range(0,5):
string_i = heads[i]
for j in range(0,5):
string_j = heads[j]
col_head = 'diff'+string_i+string_j
df = pd.Series(df_temp[5*i+j, :])
feature_vectors[col_head] = df
return feature_vectors
df_diff = difference_vector(feature_vectors)
feature_vectors = diff_vec2frame(feature_vectors, df_diff)
# Drop duplicated columns and column of zeros
feature_vectors = feature_vectors.T.drop_duplicates().T
feature_vectors.drop('diffGRGR', axis = 1, inplace = True)
# Add Facies column back into features vector
feature_vectors['Facies'] = training_data['Facies']
# # group by facies, take statistics of each facies e.g. mean, std. Take sample difference of each row with
def facies_stats(feature_vectors):
facies_labels = np.sort(feature_vectors['Facies'].unique())
frame_mean = pd.DataFrame()
frame_std = pd.DataFrame()
for i, value in enumerate(facies_labels):
facies_subframe = feature_vectors[feature_vectors['Facies']==value]
subframe_mean = facies_subframe.mean()
subframe_std = facies_subframe.std()
frame_mean[str(value)] = subframe_mean
frame_std[str(value)] = subframe_std
return frame_mean.T, frame_std.T
def feature_stat_diff(feature_vectors, frame_mean, frame_std):
feature_vec_origin = feature_vectors[['GR', 'ILD_log10', 'DeltaPHI', 'PHIND', 'PE']]
for i, column in enumerate(feature_vec_origin):
feature_column = feature_vec_origin[column]
stat_column_mean = frame_mean[column]
stat_column_std = frame_std[column]
for j in range(0,9):
stat_column_mean_facie = stat_column_mean[j]
stat_column_std_facie = stat_column_std[j]
feature_vectors[column + '_mean_diff_facies' + str(j)] = feature_column-stat_column_mean_facie
feature_vectors[column + '_std_diff_facies' + str(j)] = feature_column-stat_column_std_facie
return feature_vectors
frame_mean, frame_std = facies_stats(feature_vectors)
feature_vectors = feature_stat_diff(feature_vectors, frame_mean, frame_std)
# A = feature_vectors.sort_values(by='Facies')
# A.reset_index(drop=True).plot(subplots=True, style='b', figsize = [12, 400])
df = feature_vectors
predictors = feature_vectors.columns
predictors = list(predictors.drop('Facies'))
correct_facies_labels = df['Facies'].values
# Scale features
df = df[predictors]
scaler = preprocessing.StandardScaler().fit(df)
scaled_features = scaler.transform(df)
# Train test split:
X_train, X_test, y_train, y_test = train_test_split(scaled_features, correct_facies_labels, test_size=0.2, random_state=0)
alg = RandomForestClassifier(random_state=1, n_estimators=200, min_samples_split=8, min_samples_leaf=3, max_features= None)
alg.fit(X_train, y_train)
predicted_random_forest = alg.predict(X_test)
facies_labels = ['SS', 'CSiS', 'FSiS', 'SiSh', 'MS',
'WS', 'D','PS', 'BS']
result = predicted_random_forest
conf = confusion_matrix(y_test, result)
display_cm(conf, facies_labels, hide_zeros=True, display_metrics = True)
def accuracy(conf):
total_correct = 0.
nb_classes = conf.shape[0]
for i in np.arange(0,nb_classes):
total_correct += conf[i][i]
acc = total_correct/sum(sum(conf))
return acc
print(accuracy(conf))
adjacent_facies = np.array([[1], [0,2], [1], [4], [3,5], [4,6,7], [5,7], [5,6,8], [6,7]])
def accuracy_adjacent(conf, adjacent_facies):
nb_classes = conf.shape[0]
total_correct = 0.
for i in np.arange(0,nb_classes):
total_correct += conf[i][i]
for j in adjacent_facies[i]:
total_correct += conf[i][j]
return total_correct / sum(sum(conf))
print(accuracy_adjacent(conf, adjacent_facies))
# read in Test data
filename = 'validation_data_nofacies.csv'
test_data = pd.read_csv(filename)
# Reproduce feature generation
feature_vectors_test = test_data.drop(['Formation', 'Well Name'], axis=1)
feature_vectors_test = feature_vectors_test[['Depth','GR', 'ILD_log10', 'DeltaPHI', 'PHIND', 'PE']]
df_diff_test = difference_vector(feature_vectors_test)
feature_vectors_test = diff_vec2frame(feature_vectors_test, df_diff_test)
# Drop duplicated columns and column of zeros
feature_vectors_test = feature_vectors_test.T.drop_duplicates().T
feature_vectors_test.drop('diffGRGR', axis = 1, inplace = True)
# Create statistical feature differences using previously calculated mean and std values from train data.
feature_vectors_test = feature_stat_diff(feature_vectors_test, frame_mean, frame_std)
feature_vectors_test = feature_vectors_test[predictors]
scaler = preprocessing.StandardScaler().fit(feature_vectors_test)
scaled_features = scaler.transform(feature_vectors_test)
predicted_random_forest = alg.predict(scaled_features)
predicted_random_forest
test_data['Facies'] = predicted_random_forest
test_data.to_csv('test_data_prediction_CE.csv')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load CTCF ChIP-seq peaks for HFF from ENCODE
Step2: Get CTCF motifs from JASPAR
Step3: Overlap peaks & motifs
Step4: There are often multiple motifs overlapping one ChIP-seq peak, and a substantial number of peaks without motifs
Step5: assign the strongest motif to each peak
Step6: stronger peaks tend to have stronger motifs
Step7: We can also ask the reverse question
Step8: filter peaks overlapping blacklisted regions
Step9: there appears to be a small spike in the number of peaks close to blacklist regions
Step10: to be safe, let's remove anything +/- 1kb from a blacklisted region
Step11: there it is! we now have a dataframe containing positions of CTCF ChIP peaks,
|
<ASSISTANT_TASK:>
Python Code:
import bioframe
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy.stats import pearsonr, spearmanr
base_dir = '/tmp/bioframe_tutorial_data/'
assembly = 'GRCh38'
ctcf_peaks = bioframe.read_table("https://www.encodeproject.org/files/ENCFF401MQL/@@download/ENCFF401MQL.bed.gz", schema='narrowPeak')
ctcf_peaks[0:5]
### CTCF motif: http://jaspar.genereg.net/matrix/MA0139.1/
jaspar_url = 'http://expdata.cmmt.ubc.ca/JASPAR/downloads/UCSC_tracks/2022/hg38/'
jaspar_motif_file = 'MA0139.1.tsv.gz'
ctcf_motifs = bioframe.read_table(jaspar_url+jaspar_motif_file,schema='jaspar',skiprows=1)
ctcf_motifs[0:4]
df_peaks_motifs = bioframe.overlap(ctcf_peaks,ctcf_motifs, suffixes=('_1','_2'), return_index=True)
# note that counting motifs per peak can also be handled directly with bioframe.count_overlaps
# but since we re-use df_peaks_motifs below we instead use the pandas operations directly
motifs_per_peak = df_peaks_motifs.groupby(["index_1"])["index_2"].count().values
plt.hist(motifs_per_peak,np.arange(0,np.max(motifs_per_peak)))
plt.xlabel('number of overlapping motifs per peak')
plt.ylabel('number of peaks')
plt.semilogy();
print(f'fraction of peaks without motifs {np.round(np.sum(motifs_per_peak==0)/len(motifs_per_peak),2)}')
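# The one-call alternative mentioned in the comment above (a sketch; assumes a
# recent bioframe release that provides count_overlaps):
ctcf_peaks_counted = bioframe.count_overlaps(ctcf_peaks, ctcf_motifs)
ctcf_peaks_counted['count'].head()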
# since idxmax does not currently take NA, fill with -1
df_peaks_motifs['pval_2'] = df_peaks_motifs['pval_2'].fillna(-1)
idxmax_peaks_motifs = df_peaks_motifs.groupby(["chrom_1", "start_1","end_1"])["pval_2"].idxmax().values
df_peaks_maxmotif = df_peaks_motifs.loc[idxmax_peaks_motifs]
df_peaks_maxmotif['pval_2'].replace(-1,np.nan,inplace=True)
plt.rcParams['font.size']=12
df_peaks_maxmotif['fc_1'] = df_peaks_maxmotif['fc_1'].values.astype('float')
plt.scatter(df_peaks_maxmotif['fc_1'].values,
df_peaks_maxmotif['pval_2'].values, 5, alpha=0.5,lw=0)
plt.xlabel('ENCODE CTCF peak strength, fc')
plt.ylabel('JASPAR CTCF motif strength \n (-log10 pval *100)')
plt.title('corr: '+str(np.round(df_peaks_maxmotif['fc_1'].corr(df_peaks_maxmotif['pval_2']),2)));
df_motifs_peaks = bioframe.overlap(ctcf_motifs,ctcf_peaks,how='left', suffixes=('_1','_2'))
m = df_motifs_peaks.sort_values('pval_1')
plt.plot( m['pval_1'].values[::-1] ,
np.cumsum(pd.isnull(m['chrom_2'].values[::-1])==0)/np.arange(1,len(m)+1))
plt.xlabel('pval')
plt.ylabel('probability motif overlaps a peak');
blacklist = bioframe.read_table('https://www.encodeproject.org/files/ENCFF356LFX/@@download/ENCFF356LFX.bed.gz',
schema='bed3')
blacklist[0:3]
closest_to_blacklist = bioframe.closest(ctcf_peaks,blacklist)
plt.hist(closest_to_blacklist['distance'].astype('Float64').astype('float'),np.arange(0,1e4,100));
# first let's select the columns we want for our final dataframe of peaks with motifs
df_peaks_maxmotif = df_peaks_maxmotif[
['chrom_1','start_1','end_1','fc_1',
'chrom_2','start_2','end_2','pval_2','strand_2']]
# then rename columns for convenience when subtracting
for i in df_peaks_maxmotif.keys():
if '_1' in i: df_peaks_maxmotif.rename(columns={i:i.split('_')[0]},inplace=True)
# now subtract, expanding the blacklist by 1kb
df_peaks_maxmotif_clean = bioframe.subtract(df_peaks_maxmotif,bioframe.expand(blacklist,1000))
df_peaks_maxmotif_clean.iloc[7:15]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The JSON data is saved into the variable data, which is a list of dictionaries, so we can now use the data structure functions to access information for each story. Below are the keys of each post and the most recent post's timestamp.
Step2: Parsing data
|
<ASSISTANT_TASK:>
Python Code:
from _keys.facebook import USER_ID, ACCESS_TOKEN, paging_token
import requests
host = 'https://graph.facebook.com/v2.8'
u = '{}/{}/posts?access_token={}'.format(host, USER_ID, ACCESS_TOKEN)
data1 = requests.get(u).json()
pg2 = '{}/{}/posts?limit=25&until=1486832400&__paging_token={}&access_token={}'.format(host, USER_ID, paging_token, ACCESS_TOKEN)
data2 = requests.get(pg2).json()
data = []
data.extend(data1["data"])
data.extend(data2["data"])
print(data[0].keys(), "\n")
# Most recent post time:
print(data[0]['created_time'])
import datetime
from dateutil import parser
# return (day of week, hour of day) for a timestamp string
def scrub(timestamp):
d = parser.parse(timestamp)
return dow(d), hod(d)
# returns day of week
def dow(date): return date.strftime("%A")
# returns hour of day
def hod(time): return time.strftime("%-I:%M%p")
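# Quick check of the helpers on a hypothetical timestamp (not taken from the feed):
print(scrub("2017-02-11T09:30:00+0000"))  # -> ('Saturday', '9:30AM')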
a = list(map((lambda x: scrub(x['created_time'])), data ))
days=["Monday","Tuesday","Wednesday","Thursday","Friday","Saturday","Sunday"]
dow = list(map( (lambda x: x[0]) , a))
print("day\t\tposts")
print("=========== =======")
for day in days:
print (day, " \t", dow.count(day))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: So, in order to have tuning curves like that, we want to have neurons with intercepts=nengo.dists.Uniform(0.82, 0.97), which is close enough to what it looks like was used in the paper.
Step2: Now we compute the tuning curves. Instead of getting tuning curves for every possible value, we just evaluate it along the unit circle.
Step3: Plot the results, showing the first 20 neurons only
|
<ASSISTANT_TASK:>
Python Code:
# How to compute intercept range given an angle range
import numpy as np
angle_range_degrees = np.array([15.0, 35.0])
angle_range = angle_range_degrees * np.pi / 180
print(np.cos(angle_range))
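# Inverse check (an aside): recover the angle band implied by a pair of intercepts.
print(np.degrees(np.arccos([0.81, 0.97])))  # roughly the 15-35 degree range above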
import nengo
model = nengo.Network()
with model:
ens = nengo.Ensemble(n_neurons=400, dimensions=2,
intercepts=nengo.dists.Uniform(0.81, 0.97),
)
sim = nengo.Simulator(model)
import numpy as np
theta_degrees = np.linspace(-100, 100, 201) # in degrees
theta = theta_degrees * np.pi / 180
x = np.vstack([np.sin(theta), np.cos(theta)]).T
response_curves = np.zeros((ens.n_neurons, len(theta)))
inputs, activity = nengo.utils.ensemble.tuning_curves(ens, sim, inputs=x)
%matplotlib inline
import pylab
pylab.plot(theta_degrees, activity[:,:20])
pylab.xlabel('represented angle')
pylab.ylabel('firing rate (Hz)')
pylab.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Linear congruential generator
Step2: Example
Step3: Minimal standard generator
Step4: RANDU generator (used by IBM)
Step5: 3. Box-Muller method
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import seaborn as sns
import scipy.stats as stats
%matplotlib inline
def lcg(n, m=2**31-1, a=16807, c=0, seed=2**30):
x = np.zeros(n+1)
x[0]=seed
for i in range(1,n+1):
x[i] = (a * x[i-1]+c)%m
return x[1:]/m
lcg(10, m=31, a=13, c=0, seed=3)
x=lcg(10000)
sns.distplot(x, color="b", fit=stats.uniform);
x=lcg(10000, m=2**31, a=2**16+3, c=0, seed=3)
sns.distplot(x, color="b", fit=stats.uniform);
def bm(n):
m=2**31-1
a=16807
c=0
seed=2**30
x = np.zeros(n+1)
x[0]=seed
for i in range(1,n+1):
x[i] = (a * x[i-1]+c)%m
u=x[1:]/m
u1=u[:int((n/2))]
u2=u[int(n/2):]
nn=np.concatenate((np.sqrt(-2*np.log(1-u1))*np.cos(2*np.pi*u2), np.sqrt(-2*np.log(1-u1))*np.sin(2*np.pi*u2)),axis=0)
return nn
y=bm(100000)
sns.distplot(y, color="b", fit=stats.norm);
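# Quick sanity check (illustrative): the Box-Muller samples should have a mean
# close to 0 and a standard deviation close to 1.
print(y.mean(), y.std())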
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Violations of graphical excellence and integrity
|
<ASSISTANT_TASK:>
Python Code:
from IPython.display import Image
# Add your filename and uncomment the following line:
Image(filename='graph2.JPG')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Geocoding
|
<ASSISTANT_TASK:>
Python Code:
import geopandas
geopandas.tools.geocode('2900 boulevard Edouard Montpetit, Montreal', provider='nominatim', user_agent="mon-application")
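# The reverse operation is also available (illustrative; requires network access,
# and the coordinates here are only an example point in Montreal):
from shapely.geometry import Point
geopandas.tools.reverse_geocode([Point(-73.61, 45.50)], provider='nominatim', user_agent="mon-application")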
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We can now import the deepchem package to play with.
Step2: What is a Fingerprint?
Step3: The feature array X has shape (6264, 1024). That means there are 6264 samples in the training set. Each one is represented by a fingerprint of length 1024. Also notice that the label array y has shape (6264, 12)
Step4: Notice that some elements are 0. The weights are being used to indicate missing data. Not all assays were actually performed on every molecule. Setting the weight for a sample or sample/task pair to 0 causes it to be ignored during fitting and evaluation. It will have no effect on the loss function or other metrics.
Step5: MultitaskClassifier is a simple stack of fully connected layers. In this example we tell it to use a single hidden layer of width 1000. We also tell it that each input will have 1024 features, and that it should produce predictions for 12 different tasks.
|
<ASSISTANT_TASK:>
Python Code:
!pip install --pre deepchem
import deepchem as dc
dc.__version__
tasks, datasets, transformers = dc.molnet.load_tox21(featurizer='ECFP')
train_dataset, valid_dataset, test_dataset = datasets
print(train_dataset)
train_dataset.w
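# Illustrative check (not from the original tutorial): the fraction of zero weights
# shows how much assay data is missing and therefore ignored during fitting.
import numpy as np
print((train_dataset.w == 0).mean())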
model = dc.models.MultitaskClassifier(n_tasks=12, n_features=1024, layer_sizes=[1000])
import numpy as np
model.fit(train_dataset, nb_epoch=10)
metric = dc.metrics.Metric(dc.metrics.roc_auc_score)
print('training set score:', model.evaluate(train_dataset, [metric], transformers))
print('test set score:', model.evaluate(test_dataset, [metric], transformers))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: function declarations
Step2: load locations from database
Step3: loop through locations and compute distance
Step4: compute the distance mean and standard deviation
|
<ASSISTANT_TASK:>
Python Code:
from geo.models import SampleLocation
from database.models import Site
from shapely.geometry import shape, MultiPoint
import geopandas
import pandas
import numpy
from django.db import connection
def get_geodataframe(queryset, modification=None, crs={'+init':'epsg:31254'}):
query = queryset.query.sql_with_params()
if modification:
query = (modification, query[1])
return geopandas.read_postgis(query[0], connection,
geom_col='geometry',
params=query[1],
index_col='id',
crs=crs)
generated = get_geodataframe(SampleLocation.objects.all())
actual = get_geodataframe(Site.objects.filter(id__lte=30))
distance_array = numpy.zeros(30)
distances = pandas.DataFrame({'id': generated.index, 'name': actual.sort_index().name, 'distance': distance_array}).set_index('id')
for i in range(1, 31):
x1 = generated[generated.index == i].geometry.as_matrix()[0].coords.xy[0][0]
x2 = actual[actual.index == i].geometry.as_matrix()[0].coords.xy[0][0]
y1 = generated[generated.index == i].geometry.as_matrix()[0].coords.xy[1][0]
y2 = actual[actual.index == i].geometry.as_matrix()[0].coords.xy[1][0]
distance_array[i - 1] = numpy.sqrt((x2 - x1)**2 + (y2 - y1)**2)
distances['distance'] = distance_array
distances
distances.distance.mean().round(0)
distances.distance.std().round(0)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Accessing the layers
Step2: Moreover .modules and .children provide generators for accessing layers.
Step3: Getting the weights.
Step4: Getting layer properties
|
<ASSISTANT_TASK:>
Python Code:
class Flatten(nn.Module):
def forward(self, x):
return x.view(x.size(0), -1)
def __str__(self):
return 'Flatten()'
model = nn.Sequential(OrderedDict([
('conv2d_1', nn.Conv2d(in_channels=1, out_channels=32, kernel_size=3)),
('relu_1', nn.ReLU()),
('max_pooling2d_1', nn.MaxPool2d(kernel_size=2)),
('conv2d_2', nn.Conv2d(in_channels=32, out_channels=32, kernel_size=3)),
('relu_2', nn.ReLU()),
('dropout_1', nn.Dropout(p=0.25)),
('flatten_1', Flatten()),
('dense_1', nn.Linear(3872, 64)),
('relu_3', nn.ReLU()),
('dropout_2', nn.Dropout(p=0.5)),
('dense_2', nn.Linear(64, 10)),
('readout', nn.LogSoftmax())
]))
model.load_state_dict(torch.load('example_torch_mnist_model.pth'))
for i, layer in enumerate(model):
print('{}\t{}'.format(i, layer))
for m in model.modules():
print(m)
for c in model.children():
print(c)
conv2d_1_weight = model[0].weight.data.numpy()
conv2d_1_weight.shape
for i in range(32):
plt.imshow(conv2d_1_weight[i, 0])
plt.show()
conv2d_1 = model[0]
conv2d_1.kernel_size
conv2d_1.stride
conv2d_1.dilation
conv2d_1.in_channels, conv2d_1.out_channels
conv2d_1.padding
conv2d_1.output_padding
dropout_1 = model[5]
dropout_1.p
dense_1 = model[7]
dense_1.in_features, dense_1.out_features
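# Illustrative extra: total number of trainable parameters in the model.
sum(p.numel() for p in model.parameters() if p.requires_grad)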
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Mixed precision
Step2: Supported hardware
Step3: All Cloud TPUs support bfloat16.
Step4: This policy specifies two important aspects of a layer: the dtype in which the layer's computations are done and the dtype of the layer's variables. Above, we created a mixed_float16 policy (i.e., a mixed_precision.Policy created by passing the string 'mixed_float16' to its constructor). With this policy, layers use float16 computations and float32 variables. Computations are done in float16 for performance, but variables must be kept in float32 for numeric stability. You can query these policy properties directly.
Step5: As mentioned before, the mixed_float16 policy improves performance most on NVIDIA GPUs with compute capability 7.0 or higher. The policy will run on other GPUs and CPUs but may not improve performance. For TPUs, the mixed_bfloat16 policy should be used instead.
Step6: Each layer has a policy and uses the global policy by default. Because we previously set the global policy to mixed_float16, each Dense layer therefore has the mixed_float16 policy. This makes the dense layers do float16 computations and keep float32 variables. They cast their inputs to float16 in order to do float16 computations, so their outputs are float16 as well. Their variables are float32 and are cast to float16 when the layers are called, to avoid errors from dtype mismatches.
Step7: Next, create the output predictions. Normally, you could create the output predictions as follows, but this is not always numerically stable with float16.
Step8: The softmax activation at the end of the model should be float32. Because the dtype policy is mixed_float16, the softmax activation would normally have a float16 compute dtype and output float16 tensors.
Step9: Passing dtype='float32' to the softmax layer constructor overrides the layer's dtype policy to the float32 policy, which does computations and keeps variables in float32. Equivalently, dtype=mixed_precision.Policy('float32') could have been passed instead; layers always convert the dtype argument to a policy. Because the Activation layer has no variables, the policy's variable dtype is ignored, but the policy's compute dtype of float32 makes the softmax and the model output float32.
Step10: Next, finish and compile the model, and generate the input data.
Step11: This example casts the input data from int8 to float32. It is not cast to float16 because the division by 255 is on the CPU, where float16 operations run slower than float32 operations. In this case the performance difference is negligible, but in general input-processing math should run in float32 if it runs on the CPU. Each layer casts floating-point inputs to its compute dtype, so the first layer of the model will cast the inputs to float16.
Step12: Training the model with Model.fit
Step13: The model prints the time per sample in the logs (e.g.
Step14: In practice, overflow rarely occurs with float16, and underflow also rarely occurs during the forward pass. During the backward pass, however, gradients can underflow to zero. Loss scaling is a technique to prevent this underflow.
Step15: The loss scale prints a lot of internal state, which can be ignored. The most important part is current_loss_scale, which shows the current value of the loss scale.
Step16: The dtype policy constructor always converts the loss scale to a LossScale object. In this case it is converted to tf.mixed_precision.experimental.FixedLossScale, the only other LossScale subclass besides DynamicLossScale.
Step17: Passing 'dynamic' is equivalent to passing tf.mixed_precision.experimental.DynamicLossScale().
Step18: Next, define the training step function. Two new methods of the loss scale optimizer are used to scale the loss and unscale the gradients.
Step19: The LossScaleOptimizer may skip the first few steps at the start of training. The loss scale starts out high so that the optimal loss scale can be determined quickly. After a few steps the loss scale stabilizes and very few steps are skipped. This process happens automatically and does not affect training quality.
Step20: Load the model's initial weights so it can be retrained from scratch.
Step21: Finally, run the custom training loop.
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras import mixed_precision
!nvidia-smi -L
policy = mixed_precision.Policy('mixed_float16')
mixed_precision.set_global_policy(policy)
print('Compute dtype: %s' % policy.compute_dtype)
print('Variable dtype: %s' % policy.variable_dtype)
inputs = keras.Input(shape=(784,), name='digits')
if tf.config.list_physical_devices('GPU'):
print('The model will run with 4096 units on a GPU')
num_units = 4096
else:
# Use fewer units on CPUs so the model finishes in a reasonable amount of time
print('The model will run with 64 units on a CPU')
num_units = 64
dense1 = layers.Dense(num_units, activation='relu', name='dense_1')
x = dense1(inputs)
dense2 = layers.Dense(num_units, activation='relu', name='dense_2')
x = dense2(x)
print('x.dtype: %s' % x.dtype.name)
# 'kernel' is dense1's variable
print('dense1.kernel.dtype: %s' % dense1.kernel.dtype.name)
# INCORRECT: softmax and model output will be float16, when it should be float32
outputs = layers.Dense(10, activation='softmax', name='predictions')(x)
print('Outputs dtype: %s' % outputs.dtype.name)
# CORRECT: softmax and model output are float32
x = layers.Dense(10, name='dense_logits')(x)
outputs = layers.Activation('softmax', dtype='float32', name='predictions')(x)
print('Outputs dtype: %s' % outputs.dtype.name)
# The linear activation is an identity function. So this simply casts 'outputs'
# to float32. In this particular case, 'outputs' is already float32 so this is a
# no-op.
outputs = layers.Activation('linear', dtype='float32')(outputs)
model = keras.Model(inputs=inputs, outputs=outputs)
model.compile(loss='sparse_categorical_crossentropy',
optimizer=keras.optimizers.RMSprop(),
metrics=['accuracy'])
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = x_train.reshape(60000, 784).astype('float32') / 255
x_test = x_test.reshape(10000, 784).astype('float32') / 255
initial_weights = model.get_weights()
history = model.fit(x_train, y_train,
batch_size=8192,
epochs=5,
validation_split=0.2)
test_scores = model.evaluate(x_test, y_test, verbose=2)
print('Test loss:', test_scores[0])
print('Test accuracy:', test_scores[1])
x = tf.constant(256, dtype='float16')
(x ** 2).numpy() # Overflow
x = tf.constant(1e-5, dtype='float16')
(x ** 2).numpy() # Underflow
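# Illustrative sketch of the loss-scaling idea (not from the original): multiplying
# by a scale factor (1024 here, an arbitrary example) before squaring keeps the
# value representable in float16.
((x * 1024) ** 2).numpy() # No underflow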
loss_scale = policy.loss_scale
print('Loss scale: %s' % loss_scale)
new_policy = mixed_precision.Policy('mixed_float16', loss_scale=1024)
print(new_policy.loss_scale)
optimizer = keras.optimizers.RMSprop()
optimizer = mixed_precision.LossScaleOptimizer(optimizer, loss_scale='dynamic')
loss_object = tf.keras.losses.SparseCategoricalCrossentropy()
train_dataset = (tf.data.Dataset.from_tensor_slices((x_train, y_train))
.shuffle(10000).batch(8192))
test_dataset = tf.data.Dataset.from_tensor_slices((x_test, y_test)).batch(8192)
@tf.function
def train_step(x, y):
with tf.GradientTape() as tape:
predictions = model(x)
loss = loss_object(y, predictions)
scaled_loss = optimizer.get_scaled_loss(loss)
scaled_gradients = tape.gradient(scaled_loss, model.trainable_variables)
gradients = optimizer.get_unscaled_gradients(scaled_gradients)
optimizer.apply_gradients(zip(gradients, model.trainable_variables))
return loss
@tf.function
def test_step(x):
return model(x, training=False)
model.set_weights(initial_weights)
for epoch in range(5):
epoch_loss_avg = tf.keras.metrics.Mean()
test_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(
name='test_accuracy')
for x, y in train_dataset:
loss = train_step(x, y)
epoch_loss_avg(loss)
for x, y in test_dataset:
predictions = test_step(x)
test_accuracy.update_state(y, predictions)
print('Epoch {}: loss={}, test accuracy={}'.format(epoch, epoch_loss_avg.result(), test_accuracy.result()))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Functions
Step2: A function always starts with def. Between the parentheses are the parameters (or inputs) of the function. What follows the return keyword is the result of the function (or its output). Among functions, there are those that already exist and those that you write. The cos function already exists
Step3: Its use or call
Step4: A function can be called from another function. A function can take as many parameters as you want, as long as they have different names. You can also give them a default value
Step5: Functions must have different names. Otherwise, only the last one exists.
Step6: Exercise 1
Step7: The % symbol gives the remainder of an integer division. The exercise is to write a function that returns the next letter in alphabetical order. The letter that follows z is a.
Step8: Functions in detail
Step9: The return keyword
Step10: The function ends after the first return encountered during execution.
Step11: Recursive functions
Step12: Dictionaries
Step13: A few useful functions
Step14: A dictionary is often used to count the letters of a text
Step15: Values can be anything, including lists or dictionaries. Keys must be immutable types (numbers, strings, tuples containing immutable types). Using another type raises an error
Step16: When a value is assigned to a key, the dictionary creates the key or replaces the previous value with the new one
Step17: A dictionary can also be created in a compact way
Step18: Notions of cost
Step19: What is this useful for? Dictionaries are used a lot to count the frequencies of characters, words, or pairs of words in a text. They are then sorted by decreasing frequency to get the most frequent words or characters.
Step20: Write a function that returns all the words in the list that have a 'y' in second position.
Step21: Exercise 3
Step22: Build the dictionary dictionnaire_bien_choisi so that this works.
|
<ASSISTANT_TASK:>
Python Code:
from jyquickhelper import add_notebook_menu
add_notebook_menu()
def polynome ( x ) :
x2 = x*x
return x2 + x - 5
def polynome ( x, coefficient ) :
return sum ( [ x**i * c for i,c in enumerate(coefficient) ] )
y = polynome ( 1.2, [ 1, 2, -1] ) # calcul de -x^2 + 2x + 1 pour x = 1.2
y
from math import log # on importe une fonction existante
def log_base ( x, base = 10 ) :
return log (x) / log(base)
y = log_base (1000) # identique à y = log_base (1000, 10)
z = log_base (1000, 2) # logarithme en base deux
y,z
def polynome ( x ) : # remplacée par la seconde fonction
return x*x + x - 5
def polynome ( x, coefficient ) :
return sum ( [ x**i * c for i,c in enumerate(coefficient) ] )
y = polynome(4) # déclenche une exception
print ( chr( 65 ), chr (97) )
print ( ord("B"), ord ("b") )
def lettre_suivante(lettre) :
# ......
return ....
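#Solution (one possible answer, assuming lowercase letters; the % operator wraps 'z' back to 'a')
def lettre_suivante(lettre) :
    return chr((ord(lettre) - ord('a') + 1) % 26 + ord('a'))
print(lettre_suivante('a'), lettre_suivante('z'))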
def calcul(x) :
y = x**2
z = x + y
return z
print(z) # déclenche une exception
def calcul(x) :
y = x**2
z = x + y
a = calcul(3)
print(a)
def valeur_absolue(x) :
print("je passe par ici")
if x < 0 :
y = -x
return y
else :
y = x
return y
print("je ne passe jamais par ici")
valeur_absolue(-5)
def recursive(x) :
if x / 2 < 1 :
print("je ne m'appelle pas pour x=",x)
return 1
else :
print("je m'appelle pour x=",x)
return recursive (x/2) + 1
recursive( 10 )
d = { } # un dictionnaire vide
d = { 'a' : 'acronym', 'b': 'bizarre' } # un dictionnaire qui associe 'acroym' à 'a' et 'bizarre' à 'b'.
z = d ['a'] # z reçoit la valeur associée à 'a' et stockée dans le dictionnaire d
d = { 'a' : 'acronym', 'b': 'bizarre' }
l = len(d) # retourne le nombre d'élément de d
b = 'a' in d # b vaut True si une valeur est associée à 'a', on dit aussi que la clé 'a' est présente
x = d.get ('a', '') # x vaut d['a'] si la clé 'a' existe, il vaut '' sinon
"d=",d,"l=",l,"b=",b,"x=",x
texte = "exemple de texte"
d = { }
for c in texte :
d[c] = d.get(c,0) + 1
d
f = [3,4]
d[f] = 0 # déclenche une exception
d = { }
d['a'] = 0 # création d'une clé
d['a'] = 1 # remplacement d'une valeur
d
d = { s:len(s) for s in ['un', 'deux', 'trois'] }
d
d = { s:len(s) for s in ['un', 'deux', 'trois', 'quatre', 'cinq'] }
d
ordonne = [ (v,k) for k,v in d.items()]
ordonne
ordonne.sort()
ordonne
mots = ['eddard', 'catelyn', 'robb', 'sansa', 'arya', 'brandon',
'rickon', 'theon', 'rorbert', 'cersei', 'tywin', 'jaime',
'tyrion', 'shae', 'bronn', 'lancel', 'joffrey', 'sandor',
'varys', 'renly', 'a' ]
def mots_lettre_position (liste, lettre, position) :
# ......
return [ .... ]
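#Solution (one possible answer using a list comprehension; the dictionary-based version below is faster for repeated queries)
def mots_lettre_position (liste, lettre, position) :
    return [mot for mot in liste if len(mot) > position and mot[position] == lettre]
mots_lettre_position(mots, 'y', 1)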
def mots_lettre_position (dictionnaire_bien_choisi, lettre, position) :
return dictionnaire_bien_choisi. get ( (position, lettre) , [] )
def code_vigenere ( message, cle) :
# ...... à remplir
return message_code
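# One possible implementation sketch (hypothetical; a classical Vigenère cipher works
# on letters only, here we simply shift character codes modulo 256):
def code_vigenere ( message, cle) :
    message_code = ""
    for i, c in enumerate(message):
        message_code += chr((ord(c) + ord(cle[i % len(cle)])) % 256)
    return message_code
code_vigenere("abc", "cle")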
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's go over the columns
Step2: Now suppose we want a DataFrame of the Blaze Data Object above, filtered further down to announcements only, keeping just the sid, issue_amount, and asof_date columns.
|
<ASSISTANT_TASK:>
Python Code:
# import the dataset
from quantopian.interactive.data.eventvestor import issue_debt
# or if you want to import the free dataset, use:
# from quantopian.interactive.data.eventvestor import issue_debt_free
# import data operations
from odo import odo
# import other libraries we will use
import pandas as pd
# Let's use blaze to understand the data a bit using Blaze dshape()
issue_debt.dshape
# And how many rows are there?
# N.B. we're using a Blaze function to do this, not len()
issue_debt.count()
# Let's see what the data looks like. We'll grab the first three rows.
issue_debt[:3]
issues = issue_debt[('2014-12-31' < issue_debt['asof_date']) &
(issue_debt['asof_date'] <'2016-01-01') &
(issue_debt.issue_amount < 20)&
(issue_debt.issue_units == "$M")]
# When displaying a Blaze Data Object, the printout is automatically truncated to ten rows.
issues.sort('asof_date')
df = odo(issues, pd.DataFrame)
df = df[df.issue_stage == "Announcement"]
df = df[['sid', 'issue_amount', 'asof_date']].dropna()
df
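# Illustrative follow-up (not part of the original): total announced issuance (in $M) per security.
df.groupby('sid').issue_amount.sum().sort_values(ascending=False).head()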
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: In Keras, the input can be an ndarray or a generator. We could just use model.predict(test_generator), but to simplify, here we feed only the first record to the model.
Step2: Great! Now the Keras application is complete.
Step3: Deploy Cluster Serving
Step4: We configure the model path in config.yaml as follows (details of the configuration are in the Cluster Serving Configuration guide)
Step5: Start Cluster Serving
Step6: After configuration, start Cluster Serving with cluster-serving-start (details are in the Cluster Serving Programming Guide)
Step7: Prediction using Cluster Serving
Step8: In Cluster Serving, only NdArray is supported as input. Thus, we first transform the generator to ndarray (If you do not know how to transform your input to NdArray, you may get help at data transform guide)
Step9: If everything works well, the resulting prediction should be exactly the same NdArray as the output of the original Keras model.
|
<ASSISTANT_TASK:>
Python Code:
import tensorflow as tf
import os
import PIL
tf.__version__
# Obtain data from url:"https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip"
zip_file = tf.keras.utils.get_file(origin="https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip",
fname="cats_and_dogs_filtered.zip", extract=True)
# Find the directory of validation set
base_dir, _ = os.path.splitext(zip_file)
test_dir = os.path.join(base_dir, 'validation')
# Set images size to 160x160x3
image_size = 160
# Rescale all images by 1./255 and apply image augmentation
test_datagen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255)
# Flow images using generator to the test_generator
test_generator = test_datagen.flow_from_directory(
test_dir,
target_size=(image_size, image_size),
batch_size=1,
class_mode='binary')
# Create the base model from the pre-trained model MobileNet V2
IMG_SHAPE=(160,160,3)
model = tf.keras.applications.MobileNetV2(input_shape=IMG_SHAPE,
include_top=False,
weights='imagenet')
prediction=model.predict(test_generator.next()[0])
print(prediction)
# Save trained model to ./transfer_learning_mobilenetv2
model.save('/tmp/transfer_learning_mobilenetv2')
! ls /tmp/transfer_learning_mobilenetv2
! pip install bigdl-serving
# we go to a new directory and initialize the environment
! mkdir cluster-serving
os.chdir('cluster-serving')
! cluster-serving-init
! tail wget-log.2
# if you encounter slow download issue like above, you can just use following command to download
# ! wget https://repo1.maven.org/maven2/com/intel/analytics/bigdl/bigdl-spark_2.4.3/0.9.0/bigdl-spark_2.4.3-0.9.0-serving.jar
# if you are using wget to download, or get "bigdl-xxx-serving.jar" after "ls", please call mv *serving.jar bigdl.jar after downloaded.
# After initialization finished, check the directory
! ls
## BigDL Cluster Serving
model:
# model path must be provided
path: /tmp/transfer_learning_mobilenetv2
! head config.yaml
! $FLINK_HOME/bin/start-cluster.sh
! cluster-serving-start
from bigdl.serving.client import InputQueue, OutputQueue
input_queue = InputQueue()
arr = test_generator.next()[0]
arr
# Use async api to put and get, you have pass a name arg and use the name to get
input_queue.enqueue('my-input', t=arr)
output_queue = OutputQueue()
prediction = output_queue.query('my-input')
# Use sync api to predict, this will block until the result is get or timeout
prediction = input_queue.predict(arr)
prediction
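# Optional sanity check (illustrative): if the returned shapes line up, the Cluster
# Serving result should match the original Keras model's output on `arr`.
import numpy as np
print(np.allclose(prediction, model.predict(arr), atol=1e-3))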
# don't forget to delete the model you save for this tutorial
! rm -rf /tmp/transfer_learning_mobilenetv2
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Modulation
Step2: The spectrogram shows that we have synthesized a positive frequency for a True bit and a negative one for a False bit.
|
<ASSISTANT_TASK:>
Python Code:
samples_per_symbol = 64 # this is so high to make stuff plottable
symbols = [1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0]
data = []
for x in symbols:
data.extend([1 if x else -1] * samples_per_symbol)
plt.plot(data)
plt.title('Data to send')
plt.show()
fs = 300e3
deviation = 70e3 # deviation from center frequency
sensitivity = 2 * np.pi * deviation / fs
print(sensitivity)
d_phase = 0
phl = []
for symbol in data:
d_phase += symbol * sensitivity # this is FSK
d_phase = ((d_phase + np.pi) % (2.0 * np.pi)) - np.pi # keep in pi range
phl.append(d_phase * 1j)
sig = np.exp(phl)
# awgn channel
# sig = sig + np.random.normal(scale=np.sqrt(0.1))
Pxx, freqs, bins, im = plt.specgram(sig, Fs=fs, NFFT=64, noverlap=0)
plt.show()
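# Illustrative demodulation sketch (not part of the original): the phase difference
# between consecutive samples recovers the sign of each transmitted symbol.
phase_diff = np.angle(sig[1:] * np.conj(sig[:-1]))
plt.plot(np.sign(phase_diff))
plt.title('Recovered symbols (sketch)')
plt.show()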
import inspect
def get_objects_rednode(obj):
source_path = inspect.getsourcefile(type(obj))
source = open(source_path).read()
print(source)
from pyhacores.moving_average.model import MovingAverage
obj = MovingAverage(2)
get_objects_rednode(obj)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: While this would work for defining a single molecule or very small system, this would not be efficient for large systems. Instead, the clone and translate operator can be used to facilitate automation. Below, we simply define a single prototype particle (lj_proto), which we then copy and translate about the system.
Step2: To simplify this process, mBuild provides several build-in patterning tools, where for example, Grid3DPattern can be used to perform this same operation. Grid3DPattern generates a set of points, from 0 to 1, which get stored in the variable "pattern". We need only loop over the points in pattern, cloning, translating, and adding to the system. Note, because Grid3DPattern defines points between 0 and 1, they must be scaled based on the desired system size, i.e., pattern.scale(2).
Step3: Larger systems can therefore be easily generated by toggling the values given to Grid3DPattern. Other patterns can also be generated using the same basic code, such as a 2D grid pattern
Step4: Points on a sphere can be generated using SpherePattern. Points on a disk using DisKPattern, etc.
Step5: We can also take advantage of the hierarchical nature of mBuild to accomplish the same task more cleanly. Below we create a component that corresponds to the sphere (class SphereLJ), and one that corresponds to the disk (class DiskLJ), and then instantiate and shift each of these individually in the MonoLJ component.
Step6: Again, since mBuild is hierarchical, the pattern functions can be used to generate large systems of any arbitrary component. For example, we can replicate the SphereLJ component on a regular array.
Step7: Several functions exist for rotating compounds. For example, the spin command allows a compound to be rotated, in place, about a specific axis (i.e., it considers the origin for the rotation to lie at the compound's center of mass).
Step8: Configurations can be dumped to file using the save command; this takes advantage of MDTraj and supports a range of file formats (see http
|
<ASSISTANT_TASK:>
Python Code:
import mbuild as mb
class MonoLJ(mb.Compound):
def __init__(self):
super(MonoLJ, self).__init__()
lj_particle1 = mb.Particle(name='LJ', pos=[0, 0, 0])
self.add(lj_particle1)
lj_particle2 = mb.Particle(name='LJ', pos=[1, 0, 0])
self.add(lj_particle2)
lj_particle3 = mb.Particle(name='LJ', pos=[0, 1, 0])
self.add(lj_particle3)
lj_particle4 = mb.Particle(name='LJ', pos=[0, 0, 1])
self.add(lj_particle4)
lj_particle5 = mb.Particle(name='LJ', pos=[1, 0, 1])
self.add(lj_particle5)
lj_particle6 = mb.Particle(name='LJ', pos=[1, 1, 0])
self.add(lj_particle6)
lj_particle7 = mb.Particle(name='LJ', pos=[0, 1, 1])
self.add(lj_particle7)
lj_particle8 = mb.Particle(name='LJ', pos=[1, 1, 1])
self.add(lj_particle8)
monoLJ = MonoLJ()
monoLJ.visualize()
import mbuild as mb
class MonoLJ(mb.Compound):
def __init__(self):
super(MonoLJ, self).__init__()
lj_proto = mb.Particle(name='LJ', pos=[0, 0, 0])
for i in range(0,2):
for j in range(0,2):
for k in range(0,2):
lj_particle = mb.clone(lj_proto)
pos = [i,j,k]
mb.translate(lj_particle, pos)
self.add(lj_particle)
monoLJ = MonoLJ()
monoLJ.visualize()
import mbuild as mb
class MonoLJ(mb.Compound):
def __init__(self):
super(MonoLJ, self).__init__()
lj_proto = mb.Particle(name='LJ', pos=[0, 0, 0])
pattern = mb.Grid3DPattern(2, 2, 2)
pattern.scale(2)
for pos in pattern:
lj_particle = mb.clone(lj_proto)
mb.translate(lj_particle, pos)
self.add(lj_particle)
monoLJ = MonoLJ()
monoLJ.visualize()
import mbuild as mb
class MonoLJ(mb.Compound):
def __init__(self):
super(MonoLJ, self).__init__()
lj_proto = mb.Particle(name='LJ', pos=[0, 0, 0])
pattern = mb.Grid2DPattern(5, 5)
pattern.scale(5)
for pos in pattern:
lj_particle = mb.clone(lj_proto)
mb.translate(lj_particle, pos)
self.add(lj_particle)
monoLJ = MonoLJ()
monoLJ.visualize()
import mbuild as mb
class MonoLJ(mb.Compound):
def __init__(self):
super(MonoLJ, self).__init__()
lj_proto = mb.Particle(name='LJ', pos=[0, 0, 0])
pattern_sphere = mb.SpherePattern(200)
pattern_sphere.scale(0.5)
for pos in pattern_sphere:
lj_particle = mb.clone(lj_proto)
pos[0]-=1.0
mb.translate(lj_particle, pos)
self.add(lj_particle)
pattern_disk = mb.DiskPattern(200)
pattern_disk.scale(0.5)
for pos in pattern_disk:
lj_particle = mb.clone(lj_proto)
pos[0]+=1.0
mb.translate(lj_particle, pos)
self.add(lj_particle)
monoLJ = MonoLJ()
monoLJ.visualize()
import mbuild as mb
class SphereLJ(mb.Compound):
def __init__(self):
super(SphereLJ, self).__init__()
lj_proto = mb.Particle(name='LJ', pos=[0, 0, 0])
pattern_sphere = mb.SpherePattern(200)
pattern_sphere.scale(0.5)
for pos in pattern_sphere:
lj_particle = mb.clone(lj_proto)
mb.translate(lj_particle, pos)
self.add(lj_particle)
class DiskLJ(mb.Compound):
def __init__(self):
super(DiskLJ, self).__init__()
lj_proto = mb.Particle(name='LJ', pos=[0, 0, 0])
pattern_disk = mb.DiskPattern(200)
pattern_disk.scale(0.5)
for pos in pattern_disk:
lj_particle = mb.clone(lj_proto)
mb.translate(lj_particle, pos)
self.add(lj_particle)
class MonoLJ(mb.Compound):
def __init__(self):
super(MonoLJ, self).__init__()
sphere = SphereLJ();
pos=[-1, 0, 0]
mb.translate(sphere, pos)
self.add(sphere)
disk = DiskLJ();
pos=[1, 0, 0]
mb.translate(disk, pos)
self.add(disk)
monoLJ = MonoLJ()
monoLJ.visualize()
import mbuild as mb
class SphereLJ(mb.Compound):
def __init__(self):
super(SphereLJ, self).__init__()
lj_proto = mb.Particle(name='LJ', pos=[0, 0, 0])
pattern_sphere = mb.SpherePattern(13)
pattern_sphere.scale(0.5)
for pos in pattern_sphere:
lj_particle = mb.clone(lj_proto)
mb.translate(lj_particle, pos)
self.add(lj_particle)
class MonoLJ(mb.Compound):
def __init__(self):
super(MonoLJ, self).__init__()
sphere = SphereLJ();
pattern = mb.Grid3DPattern(3, 3, 3)
pattern.scale(10)
for pos in pattern:
lj_sphere = mb.clone(sphere)
mb.translate_to(lj_sphere, pos)
#shift the particle so the center of mass
#of the system is at the origin
mb.translate(lj_sphere, [-5,-5,-5])
self.add(lj_sphere)
monoLJ = MonoLJ()
monoLJ.visualize()
import mbuild as mb
import random
from numpy import pi
class CubeLJ(mb.Compound):
def __init__(self):
super(CubeLJ, self).__init__()
lj_proto = mb.Particle(name='LJ', pos=[0, 0, 0])
pattern = mb.Grid3DPattern(2, 2, 2)
pattern.scale(1)
for pos in pattern:
lj_particle = mb.clone(lj_proto)
mb.translate(lj_particle, pos)
self.add(lj_particle)
class MonoLJ(mb.Compound):
def __init__(self):
super(MonoLJ, self).__init__()
cube_proto = CubeLJ();
pattern = mb.Grid3DPattern(3, 3, 3)
pattern.scale(10)
rnd = random.Random()
rnd.seed(123)
for pos in pattern:
lj_cube = mb.clone(cube_proto)
mb.translate_to(lj_cube, pos)
#shift the particle so the center of mass
#of the system is at the origin
mb.translate(lj_cube, [-5,-5,-5])
mb.spin_x(lj_cube, rnd.uniform(0, 2 * pi))
mb.spin_y(lj_cube, rnd.uniform(0, 2 * pi))
mb.spin_z(lj_cube, rnd.uniform(0, 2 * pi))
self.add(lj_cube)
monoLJ = MonoLJ()
monoLJ.visualize()
#save as xyz file
monoLJ.save('output.xyz')
#save as mol2
monoLJ.save('output.mol2')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Q. How would you estimate the time it takes for a photon to travel from the core to the surface?
Step2: This is a 3D random walk (if you take ASTR3730 you'll see a derivation of this).
Step3: Let's compute tau by using a Monte Carlo experiment.
Step4: Q. Does "positions" contain the final position of each particle, or the entire trajectory of each particle?
Step5: Q. How can we get the average position of all particles?
Step6: Q. And the average traveled distance for all particles?
Step7: $\sqrt{\frac{4N_s}{\tau}}$ is the theoretical expectation value for the separation for a large number of tests run, in 1D. (different values for higher dimensions).
Step8: Q. What should this histogram look like? Should it be centrally peaked? If so, at what value? How wide?
Step9: VECTORIZATION of Implementation
Step10: Files
Step11: Use the so-called context manager
|
<ASSISTANT_TASK:>
Python Code:
from IPython.display import Image
Image(url='http://upload.wikimedia.org/wikipedia/commons/thumb/b/b4/The_Sun_by_the_Atmospheric_Imaging_Assembly_of_NASA%27s_Solar_Dynamics_Observatory_-_20100819.jpg/251px-The_Sun_by_the_Atmospheric_Imaging_Assembly_of_NASA%27s_Solar_Dynamics_Observatory_-_20100819.jpg')
Image(url='http://upload.wikimedia.org/wikipedia/commons/thumb/d/d4/Sun_poster.svg/500px-Sun_poster.svg.png')
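# Rough back-of-the-envelope estimate (illustrative; the mean free path value is an
# order-of-magnitude assumption): a photon random-walks ~ (R/ell)**2 steps to cover
# radius R, so the escape time is ~ R**2 / (ell * c).
R = 7e8      # solar radius in meters (approximate)
ell = 1e-3   # assumed mean free path in meters
c = 3e8      # speed of light in m/s
print(R**2 / (ell * c) / 3.15e7, "years")  # roughly tens of thousands of years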
%matplotlib inline
import numpy as np
import matplotlib.pylab as pl
pl.plot?
N = 10000
radius = 250
tau = 2*np.pi
x = np.random.randint(-radius, radius, N)
y = np.random.randint(-radius, radius, N)
distances = np.sqrt(x**2 + y**2)
phi = np.arange(0, tau, 0.01)
pl.figure(figsize=(5,5))
pl.scatter(x[np.argwhere(distances <= radius)],
y[np.argwhere(distances <= radius)],
color='k', s=1);
pl.scatter(x[np.argwhere(distances > radius)],
y[np.argwhere(distances > radius)],
color='r', s=1);
pl.plot(radius * np.cos(phi), radius * np.sin(phi), color='g', linewidth=2);
print("tau is", 8.0 * np.sum(distances < radius) / float(N))
tauArray = np.zeros(N)
for num in np.arange(N):
xArray = np.random.randint(-radius, radius, num + 1)
yArray = np.random.randint(-radius, radius, num + 1)
tauArray[num] = 8.0 * np.sum(np.sqrt(xArray**2 + yArray**2) < radius) / float(num + 1)
pl.plot(np.arange(N), tauArray);
pl.axhline(y=tau, lw=2, color='r')
pl.ylim(tau - 0.5, tau + 0.5);
import random
# Number of particles
Np = 100
# Number of steps (per particle)
Ns = 50000
# All particles start at x = 0
positions = np.zeros(Np)
distances = np.zeros(Np)
# A (randomly drawn) 1 will move the particle to the left
# and a 2 will move it to the right
Left = 1; Right = 2
# Step Ns times for each particle "p"
for p in range(Np):
for step in range(Ns):
# Integer random number generator
direction = random.randint(1, 2)
# returns a random integer x such that 1 <= x <= 2
# (effectively a coin-flip here)
if direction == Left:
positions[p] -= 1 # Move left
elif direction == Right:
positions[p] += 1 # Move right
print("Positions")
print(positions)
print("Average Position", positions.mean())
print("Avg Separation", abs(positions).mean())
print("Expectation %g" % np.sqrt(4*Ns/tau))
# Standard deviation
positions.std()
n, bins, patches = pl.hist(positions, 20, facecolor='g')
pl.xlabel('Final Position')
pl.ylabel('Frequency')
pl.title('1-D Random Walk Distance')
pl.grid(True)
Np = 100 # Number of particles
Ns = 50000 # Number of steps (per particle)
# Draw the move random number steps all at once:
moves = np.random.randint(1, 3, size=Np*Ns)
# FOR np.random.randint THIS RUNS FROM 1 TO 2, INTEGERS ONLY
# Q. What's happening here?
moves = 2 * moves - 3
# Create a 2-D array of moves so that moves[i, j]
# is the "i"th step of particle j:
moves.shape = (Ns, Np)
# Create an array of initial starting positions for each particle
positions = np.zeros(Np)
for step in range(Ns):
# Select the moves values for the current step:
positions += moves[step, :]
# Updates positions for all particles in this step
# This is vectorized: I'm not looping over the particles
# Histogram the results
n, bins, patches = pl.hist(positions, bins=np.arange(-500, 500, 50), facecolor='b')
pl.xlabel('Final Position')
pl.ylabel('Frequency')
pl.title('1-D Random Walk Distance')
pl.grid(True)
print("Average Position", np.mean(positions))
print("Avg Separation ", np.mean(abs(positions)))
print("Expectation %g" % np.sqrt(4*Ns/tau))
f = open('afile')
f = open('afile', 'w')
type(f)
f.write('testing\n')
f.close()
cat afile
f = open('afile') # default mode: read
f.read()
f.write('more tests\n')
f.close()
f = open('afile', 'w')
f.read()
f.close()
f = open('afile', 'a') # append!
cat afile
with open('afile', 'w') as f:
f.write('testing\nmore tests\n')
cat afile
with open('afile', 'r') as f:
data = f.read()
data
with open('afile', 'r') as f:
data = f.readlines()
data
s = 'mystring '
s.strip('m ')
data = [item.strip() for item in data]
data
row = ' '.join(['1','2','3'])
# f was closed by the previous `with` block, so reopen the file before writing
with open('afile', 'a') as f:
    f.write(row+'\n')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Look at some sample outputs below
Step2: We can define a function in Python with the def keyword, followed by the name of the function, a set of parentheses with its inputs or parameters, and a
Step3: Whenever we call print_double, we ask the function to print $2x$ so we don't have to.
Step4: A function can also have multiple parameters, or no parameters at all. Remember, the parameters are the things that go in between the parentheses. If we want a function that takes in zero parameters and always returns $1337$, we just leave the parameter section blank.
Step5: What do you think will happen if we run
Step6: Recall that the slope of a line that goes through $(x_1, y_1)$ and $(x_2, y_2)$ is $\frac{y_2-y_1}{x_2-x_1}$. Write a function that takes in four parameters x1,y1,x2,y2 and finds the slope of the line that goes through the two points. Call it find_Slope.
Step7: Recall the example of you and your friend walking home from school. Now we will write a function that will calculate the time difference of arrival in an abstract sense. If your friend lives a distance $a$ miles away from you and you both decide to hang out at a location $x$ during the weekend, what will be the time difference of arrival of you and your friend getting home? Both of you walk with the same speed $s$. Answer the questions below to help find the TDOA.
Step8: Now we have a function that, given a position, finds the time difference of arrival. What if we want to find the position of your hangout spot given the TDOA? Write a function called find_position that takes in the position of your friend's house $a$, a time difference of arrival ($t$), and speed ($s$) and returns the position of your hangout location.
|
<ASSISTANT_TASK:>
Python Code:
def double(x):
return(2*x);
print(double(1));
print(double(2));
print(double(3));
def print_double(x):
print(2*x);
print_double(1);
print_double(2);
print_double(3);
def leet():
return 1337;
#Try what "print leet()" does.
#Enter your code here:
#Write your function here
#Solution
def find_Slope(x1,y1,x2,y2):
return (y2-y1)/(x2-x1);
#Write your function here
#Solution
def TDOA(a, x, s):
return (a-2.0*x)/s;
#Write your function here
#Solution
def find_position(a, t, s):
return (a-s*t)/2.0;
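# Quick sanity check (illustrative): plugging the TDOA back into find_position
# should recover the original hangout position x.
print(find_position(2.0, TDOA(2.0, 0.5, 1.0), 1.0))  # expected: 0.5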
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Run the next cell to load the "SIGNS" dataset you are going to use.
Step2: As a reminder, the SIGNS dataset is a collection of 6 signs representing numbers from 0 to 5.
Step3: In Course 2, you had built a fully-connected network for this dataset. But since this is an image dataset, it is more natural to apply a ConvNet to it.
Step5: 1.1 - Create placeholders
Step7: Expected Output
Step9: Expected Output
Step11: Expected Output
Step13: Expected Output
Step14: Run the following cell to train your model for 100 epochs. Check if your cost after epoch 0 and 5 matches our output. If not, stop the cell and go back to your code!
Step15: Expected output
|
<ASSISTANT_TASK:>
Python Code:
import math
import numpy as np
import h5py
import matplotlib.pyplot as plt
import scipy
from PIL import Image
from scipy import ndimage
import tensorflow as tf
from tensorflow.python.framework import ops
from cnn_utils import *
%matplotlib inline
np.random.seed(1)
# Loading the data (signs)
X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()
# Example of a picture
index = 6
plt.imshow(X_train_orig[index])
print ("y = " + str(np.squeeze(Y_train_orig[:, index])))
X_train = X_train_orig/255.
X_test = X_test_orig/255.
Y_train = convert_to_one_hot(Y_train_orig, 6).T
Y_test = convert_to_one_hot(Y_test_orig, 6).T
print ("number of training examples = " + str(X_train.shape[0]))
print ("number of test examples = " + str(X_test.shape[0]))
print ("X_train shape: " + str(X_train.shape))
print ("Y_train shape: " + str(Y_train.shape))
print ("X_test shape: " + str(X_test.shape))
print ("Y_test shape: " + str(Y_test.shape))
conv_layers = {}
# GRADED FUNCTION: create_placeholders
def create_placeholders(n_H0, n_W0, n_C0, n_y):
Creates the placeholders for the tensorflow session.
Arguments:
n_H0 -- scalar, height of an input image
n_W0 -- scalar, width of an input image
n_C0 -- scalar, number of channels of the input
n_y -- scalar, number of classes
Returns:
X -- placeholder for the data input, of shape [None, n_H0, n_W0, n_C0] and dtype "float"
Y -- placeholder for the input labels, of shape [None, n_y] and dtype "float"
### START CODE HERE ### (≈2 lines)
X = tf.placeholder(tf.float32, shape=[None, n_H0, n_W0, n_C0])
Y = tf.placeholder(tf.float32, shape=[None, n_y])
### END CODE HERE ###
return X, Y
X, Y = create_placeholders(64, 64, 3, 6)
print ("X = " + str(X))
print ("Y = " + str(Y))
# GRADED FUNCTION: initialize_parameters
def initialize_parameters():
Initializes weight parameters to build a neural network with tensorflow. The shapes are:
W1 : [4, 4, 3, 8]
W2 : [2, 2, 8, 16]
Returns:
parameters -- a dictionary of tensors containing W1, W2
tf.set_random_seed(1) # so that your "random" numbers match ours
### START CODE HERE ### (approx. 2 lines of code)
W1 = tf.get_variable('W1', shape=[4, 4, 3, 8], initializer=tf.contrib.layers.xavier_initializer(seed=0))
W2 = tf.get_variable('W2', shape=[2, 2, 8, 16], initializer=tf.contrib.layers.xavier_initializer(seed=0))
### END CODE HERE ###
parameters = {"W1": W1,
"W2": W2}
return parameters
tf.reset_default_graph()
with tf.Session() as sess_test:
parameters = initialize_parameters()
init = tf.global_variables_initializer()
sess_test.run(init)
print("W1 = " + str(parameters["W1"].eval()[1,1,1]))
print("W2 = " + str(parameters["W2"].eval()[1,1,1]))
# GRADED FUNCTION: forward_propagation
def forward_propagation(X, parameters):
Implements the forward propagation for the model:
CONV2D -> RELU -> MAXPOOL -> CONV2D -> RELU -> MAXPOOL -> FLATTEN -> FULLYCONNECTED
Arguments:
X -- input dataset placeholder, of shape (input size, number of examples)
parameters -- python dictionary containing your parameters "W1", "W2"
the shapes are given in initialize_parameters
Returns:
Z3 -- the output of the last LINEAR unit
# Retrieve the parameters from the dictionary "parameters"
W1 = parameters['W1']
W2 = parameters['W2']
### START CODE HERE ###
# CONV2D: stride of 1, padding 'SAME'
Z1 = tf.nn.conv2d(X, W1, strides=[1, 1, 1, 1], padding='SAME')
# RELU
A1 = tf.nn.relu(Z1)
# MAXPOOL: window 8x8, sride 8, padding 'SAME'
P1 = tf.nn.max_pool(A1, ksize=[1, 8, 8, 1], strides=[1, 8, 8, 1], padding='SAME')
# CONV2D: filters W2, stride 1, padding 'SAME'
Z2 = tf.nn.conv2d(P1, W2, strides=[1, 1, 1, 1], padding='SAME')
# RELU
A2 = tf.nn.relu(Z2)
# MAXPOOL: window 4x4, stride 4, padding 'SAME'
P2 = tf.nn.max_pool(A2, ksize=[1, 4, 4, 1], strides=[1, 4, 4, 1], padding='SAME')
# FLATTEN
P2 = tf.contrib.layers.flatten(P2)
# FULLY-CONNECTED without non-linear activation function (not not call softmax).
# 6 neurons in output layer. Hint: one of the arguments should be "activation_fn=None"
Z3 = tf.contrib.layers.fully_connected(P2, 6, activation_fn=None)
### END CODE HERE ###
return Z3
tf.reset_default_graph()
with tf.Session() as sess:
np.random.seed(1)
X, Y = create_placeholders(64, 64, 3, 6)
parameters = initialize_parameters()
Z3 = forward_propagation(X, parameters)
init = tf.global_variables_initializer()
sess.run(init)
a = sess.run(Z3, {X: np.random.randn(2,64,64,3), Y: np.random.randn(2,6)})
print("Z3 = " + str(a))
# GRADED FUNCTION: compute_cost
def compute_cost(Z3, Y):
Computes the cost
Arguments:
Z3 -- output of forward propagation (output of the last LINEAR unit), of shape (6, number of examples)
Y -- "true" labels vector placeholder, same shape as Z3
Returns:
cost - Tensor of the cost function
### START CODE HERE ### (1 line of code)
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=Z3, labels=Y))
### END CODE HERE ###
return cost
tf.reset_default_graph()
with tf.Session() as sess:
np.random.seed(1)
X, Y = create_placeholders(64, 64, 3, 6)
parameters = initialize_parameters()
Z3 = forward_propagation(X, parameters)
cost = compute_cost(Z3, Y)
init = tf.global_variables_initializer()
sess.run(init)
a = sess.run(cost, {X: np.random.randn(4,64,64,3), Y: np.random.randn(4,6)})
print("cost = " + str(a))
# GRADED FUNCTION: model
def model(X_train, Y_train, X_test, Y_test, learning_rate = 0.009,
num_epochs = 100, minibatch_size = 64, print_cost = True):
Implements a three-layer ConvNet in Tensorflow:
CONV2D -> RELU -> MAXPOOL -> CONV2D -> RELU -> MAXPOOL -> FLATTEN -> FULLYCONNECTED
Arguments:
X_train -- training set, of shape (None, 64, 64, 3)
Y_train -- test set, of shape (None, n_y = 6)
X_test -- training set, of shape (None, 64, 64, 3)
Y_test -- test set, of shape (None, n_y = 6)
learning_rate -- learning rate of the optimization
num_epochs -- number of epochs of the optimization loop
minibatch_size -- size of a minibatch
print_cost -- True to print the cost every 100 epochs
Returns:
train_accuracy -- real number, accuracy on the train set (X_train)
test_accuracy -- real number, testing accuracy on the test set (X_test)
parameters -- parameters learnt by the model. They can then be used to predict.
ops.reset_default_graph() # to be able to rerun the model without overwriting tf variables
tf.set_random_seed(1) # to keep results consistent (tensorflow seed)
seed = 3 # to keep results consistent (numpy seed)
(m, n_H0, n_W0, n_C0) = X_train.shape
n_y = Y_train.shape[1]
costs = [] # To keep track of the cost
# Create Placeholders of the correct shape
### START CODE HERE ### (1 line)
X, Y = create_placeholders(n_H0, n_W0, n_C0, n_y)
### END CODE HERE ###
# Initialize parameters
### START CODE HERE ### (1 line)
parameters = initialize_parameters()
### END CODE HERE ###
# Forward propagation: Build the forward propagation in the tensorflow graph
### START CODE HERE ### (1 line)
Z3 = forward_propagation(X, parameters)
### END CODE HERE ###
# Cost function: Add cost function to tensorflow graph
### START CODE HERE ### (1 line)
cost = compute_cost(Z3, Y)
### END CODE HERE ###
# Backpropagation: Define the tensorflow optimizer. Use an AdamOptimizer that minimizes the cost.
### START CODE HERE ### (1 line)
train_op = tf.train.AdamOptimizer(learning_rate).minimize(cost)
### END CODE HERE ###
# Initialize all the variables globally
init = tf.global_variables_initializer()
# Start the session to compute the tensorflow graph
with tf.Session() as sess:
# Run the initialization
sess.run(init)
# Do the training loop
for epoch in range(num_epochs):
minibatch_cost = 0.
num_minibatches = int(m / minibatch_size) # number of minibatches of size minibatch_size in the train set
seed = seed + 1
minibatches = random_mini_batches(X_train, Y_train, minibatch_size, seed)
for minibatch in minibatches:
# Select a minibatch
(minibatch_X, minibatch_Y) = minibatch
# IMPORTANT: The line that runs the graph on a minibatch.
# Run the session to execute the optimizer and the cost, the feedict should contain a minibatch for (X,Y).
### START CODE HERE ### (1 line)
_ , temp_cost = sess.run([train_op, cost], feed_dict={X: minibatch_X, Y: minibatch_Y})
### END CODE HERE ###
minibatch_cost += temp_cost / num_minibatches
# Print the cost every epoch
if print_cost == True and epoch % 5 == 0:
print ("Cost after epoch %i: %f" % (epoch, minibatch_cost))
if print_cost == True and epoch % 1 == 0:
costs.append(minibatch_cost)
# plot the cost
plt.plot(np.squeeze(costs))
plt.ylabel('cost')
plt.xlabel('iterations (per tens)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
# Calculate the correct predictions
predict_op = tf.argmax(Z3, 1)
correct_prediction = tf.equal(predict_op, tf.argmax(Y, 1))
# Calculate accuracy on the test set
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
print(accuracy)
train_accuracy = accuracy.eval({X: X_train, Y: Y_train})
test_accuracy = accuracy.eval({X: X_test, Y: Y_test})
print("Train Accuracy:", train_accuracy)
print("Test Accuracy:", test_accuracy)
return train_accuracy, test_accuracy, parameters
_, _, parameters = model(X_train, Y_train, X_test, Y_test)
fname = "images/thumbs_up.jpg"
image = np.array(ndimage.imread(fname, flatten=False))
my_image = scipy.misc.imresize(image, size=(64,64))
plt.imshow(my_image)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: That looks like any other Python code! But this example is a bit silly.
Step3: We start with a list of strings that desperately need translation. And add a little
Step4: Beginning Python programmers like to append things; this is not how you are
Step5: Rather use a comprehension like so
Step6: Or use map
Step8: Noodlify!
Step9: Let's take stock of the mutations to the original. We've added a @schedule decorator to word, and changed a function call in sentence. Also we added the __str__ method; this is only needed to plot the workflow graph. Let's run the new script.
Step10: The last peculiar thing that you may notice is the gather function. It collects the promises that map generates and creates a single new promise. The definition of gather is very simple
Step11: Dealing with repetition
Step12: Let's run the example workflows now, but focus on the actions taken, looking at the logs. The function run_and_print_log in the tutorial module runs our workflow with four parallel threads and caches results in a Sqlite3 database.
Step13: Running the workflow, we can now see that at the second occurrence of the word 'oote', the function call is attached to the first job that asked for the same result. The job word_size('oote') is run only once.
Step14: Now, running a similar workflow again, notice that previous results are retrieved from the database.
Step15: Although the result of every single job is retrieved, we still had to go through the trouble of looking up the results of word_size('Oote'), word_size('oote'), and word_size('Boe') to find out that we wanted the result from the format_string. If you want to cache the result of an entire workflow, pack the workflow in another scheduled function!
Step16: See how the first job is evaluated to return a new workflow. Note that if the version is omitted, it is automatically generated from the source of the function. For example, let's say we decided the function word_size_phrase should return a dictionary of all word sizes instead of a string. Here we use the function called lift to transform a dictionary containing promises into a promise of a dictionary. lift can handle lists, dictionaries, sets, tuples and objects that are constructable from their __dict__ member.
Step17: Be careful with versions! Noodles will believe you upon your word! If we lie about the version, it will go ahead and retrieve the result belonging to the old function
|
<ASSISTANT_TASK:>
Python Code:
from noodles import run_single
from noodles.tutorial import (add, sub, mul)
u = add(5, 4)
v = sub(u, 3)
w = sub(u, 2)
x = mul(v, w)
answer = run_single(x)
print("The answer is {0}.".format(answer))
import urllib.request
import json
import re
class Translate:
Translate words and sentences in the worst possible way. The Glosbe dictionary
has a nice REST interface that we query for a phrase. We then take the first result.
To translate a sentence, we cut it in pieces, translate it and paste it back into
a Frankenstein monster.
def __init__(self, src_lang='en', tgt_lang='fy'):
self.src = src_lang
self.tgt = tgt_lang
self.url = 'https://glosbe.com/gapi/translate?' \
'from={src}&dest={tgt}&' \
'phrase={{phrase}}&format=json'.format(
src=src_lang, tgt=tgt_lang)
def query_phrase(self, phrase):
with urllib.request.urlopen(self.url.format(phrase=phrase.lower())) as response:
translation = json.loads(response.read().decode())
return translation
def word(self, phrase):
translation = self.query_phrase(phrase)
#translation = {'tuc': [{'phrase': {'text': phrase.lower()[::-1]}}]}
if len(translation['tuc']) > 0 and 'phrase' in translation['tuc'][0]:
result = translation['tuc'][0]['phrase']['text']
if phrase[0].isupper():
return result.title()
else:
return result
else:
return "<" + phrase + ">"
def sentence(self, phrase):
words = re.sub("[^\w]", " ", phrase).split()
space = re.sub("[\w]+", "{}", phrase)
return space.format(*map(self.word, words))
shakespeare = [
"If music be the food of love, play on,",
"Give me excess of it; that surfeiting,",
"The appetite may sicken, and so die."]
def print_poem(intro, poem):
print(intro)
for line in poem:
print(" ", line)
print()
print_poem("Original:", shakespeare)
shakespeare_auf_deutsch = []
for line in shakespeare:
shakespeare_auf_deutsch.append(
Translate('en', 'de').sentence(line))
print_poem("Auf Deutsch:", shakespeare_auf_deutsch)
shakespeare_ynt_frysk = \
(Translate('en', 'fy').sentence(line) for line in shakespeare)
print_poem("Yn it Frysk:", shakespeare_ynt_frysk)
shakespeare_pa_dansk = \
map(Translate('en', 'da').sentence, shakespeare)
print_poem("På Dansk:", shakespeare_pa_dansk)
from noodles import schedule
@schedule
def format_string(s, *args, **kwargs):
return s.format(*args, **kwargs)
import urllib.request
import json
import re
class Translate:
Translate words and sentences in the worst possible way. The Glosbe dictionary
has a nice REST interface that we query for a phrase. We then take the first result.
To translate a sentence, we cut it in pieces, translate it and paste it back into
a Frankenstein monster.
def __init__(self, src_lang='en', tgt_lang='fy'):
self.src = src_lang
self.tgt = tgt_lang
self.url = 'https://glosbe.com/gapi/translate?' \
'from={src}&dest={tgt}&' \
'phrase={{phrase}}&format=json'.format(
src=src_lang, tgt=tgt_lang)
def query_phrase(self, phrase):
with urllib.request.urlopen(self.url.format(phrase=phrase.lower())) as response:
translation = json.loads(response.read().decode())
return translation
@schedule
def word(self, phrase):
#translation = {'tuc': [{'phrase': {'text': phrase.lower()[::-1]}}]}
translation = self.query_phrase(phrase)
if len(translation['tuc']) > 0 and 'phrase' in translation['tuc'][0]:
result = translation['tuc'][0]['phrase']['text']
if phrase[0].isupper():
return result.title()
else:
return result
else:
return "<" + phrase + ">"
def sentence(self, phrase):
words = re.sub("[^\w]", " ", phrase).split()
space = re.sub("[\w]+", "{}", phrase)
return format_string(space, *map(self.word, words))
def __str__(self):
return "[{} -> {}]".format(self.src, self.tgt)
def __serialize__(self, pack):
return pack({'src_lang': self.src,
'tgt_lang': self.tgt})
@classmethod
def __construct__(cls, msg):
return cls(**msg)
from noodles import gather, run_parallel
from noodles.tutorial import get_workflow_graph
shakespeare_en_esperanto = \
map(Translate('en', 'eo').sentence, shakespeare)
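# gather(...) bundles the per-sentence workflows into a single workflow whose
# result is the list of translated lines.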
wf = gather(*shakespeare_en_esperanto)
workflow_graph = get_workflow_graph(wf._workflow)
result = run_parallel(wf, n_threads=8)
print_poem("Shakespeare en Esperanto:", result)
workflow_graph.attr(size='10')
workflow_graph
from noodles import (schedule, gather_all)
import re
@schedule
def word_size(word):
return len(word)
@schedule
def format_string(s, *args, **kwargs):
return s.format(*args, **kwargs)
def word_size_phrase(phrase):
words = re.sub("[^\w]", " ", phrase).split()
space = re.sub("[\w]+", "{}", phrase)
word_lengths = map(word_size, words)
return format_string(space, *word_lengths)
from noodles.tutorial import display_workflows, run_and_print_log
display_workflows(
prefix='poetry',
sizes=word_size_phrase("Oote oote oote, Boe"))
# remove the database if it already exists
!rm -f tutorial.db
run_and_print_log(word_size_phrase("Oote oote oote, Boe"), highlight=range(4, 8))
run_and_print_log(word_size_phrase("Oe oe oote oote oote"), highlight=range(5, 10))
@schedule(version='1.0')
def word_size_phrase(phrase):
words = re.sub("[^\w]", " ", phrase).split()
space = re.sub("[\w]+", "{}", phrase)
word_lengths = map(word_size, words)
return format_string(space, *word_lengths)
run_and_print_log(
word_size_phrase("Kneu kneu kneu kneu ote kneu eur"),
highlight=[1, 17])
from noodles import lift
def word_size_phrase(phrase):
words = re.sub("[^\w]", " ", phrase).split()
return lift({word: word_size(word) for word in words})
display_workflows(prefix='poetry', lift=word_size_phrase("Kneu kneu kneu kneu ote kneu eur"))
run_and_print_log(word_size_phrase("Kneu kneu kneu kneu ote kneu eur"))
@schedule(version='1.0')
def word_size_phrase(phrase):
words = re.sub("[^\w]", " ", phrase).split()
return lift({word: word_size(word) for word in words})
run_and_print_log(
word_size_phrase("Kneu kneu kneu kneu ote kneu eur"),
highlight=[1])
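# A minimal extra check (not part of the original notebook): evaluate the lifted
# workflow directly with the parallel runner used earlier and inspect the
# resulting dict, which maps each distinct word to its length.
from noodles import run_parallel
sizes = run_parallel(word_size_phrase("Oote oote oote, Boe"), n_threads=4)
print(sizes)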
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: BallPen & InkPen both initialize the parent class through super().__init__(size, name). Now let's create a few objects of both.
Step2: Let's create a parent class that inherits from grand_parent; note that it calls super().__init__(middle_name) so the grandparent's __init__ sets the middle name.
Step3: Now let's create the student class, which inherits from parent. Check its __init__ as well.
Step4: Check the order in which the __init__ methods are called.
Step5: Now let's create the same classes without the super().__init__ calls, and see what happens
Step6: We got the error because none of the parents' __init__ methods were called; only student's __init__ ran.
|
<ASSISTANT_TASK:>
Python Code:
class Pen():
def __init__(self, size, name):
self.name = name
self.size = size
def set_name(self, name):
self.name = name
class BallPen(Pen):
def __init__(self, size, name, color):
self.color = color
super().__init__(size, name)
def set_color(self, color):
self.color = color
class InkPen(Pen):
def __init__(self, size, name, cart_type):
self.cart = cart_type
super().__init__(size, name)
pb = BallPen(10, "Renolds", "Green")
print(pb.name)
pb.set_name("cello")
print(pb.name)
print(pb.__dict__)
class grand_parent:
def __init__(self, middle_name):
print("grand_parent init")
self.__middle_name = middle_name
def middle_name(self, middle_name):
self.__middle_name = middle_name
return self.__middle_name
class parent(grand_parent):
def __init__(self, middle_name, surname):
print("parent init")
self.__surname = surname
super().__init__(middle_name)
def middle_name(self):
return self.middle_name
class student(parent):
def __init__(self, name, middle_name, surname):
print("student init")
self.name = name
super().__init__(middle_name, surname)
mohan = student("Venkat", "kumar", "Mohan")
print(mohan.middle_name)
mohan.middle_name = "KUMAR"
print(mohan.middle_name)
class grand_parent:
def __init__(self, middle_name):
print("grand_parent init")
self.__middle_name = middle_name
def middle_name(self, middle_name):
self.__middle_name = middle_name
return self.__middle_name
class parent(grand_parent):
def __init__(self, middle_name, surname):
print("parent init")
self.__surname = surname
def middle_name(self):
return self.__middle_name
class student(parent):
def __init__(self, name, middle_name, surname):
print("student init")
self.name = name
mohan = student("Venkat", "kumar", "Mohan")
try:
print(mohan.middle_name())
except Exception as e:
print(e)
# NOTE: in Python 2 the zero-argument form super() is not available; you must
# write super(student, self).__init__(...) explicitly.
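# Illustrative addition (not part of the original exercise): the method
# resolution order shows the chain that super() walks when every class
# cooperates by calling super().__init__(...).
print(student.__mro__)
# (student -> parent -> grand_parent -> object)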
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The pmf method can be used to evaluate the probability mass function (pmf).
Step2: To run a simulation, use the rvs method.
Step3: To show the theoretical distribution and the sample distribution together, use the following code.
|
<ASSISTANT_TASK:>
Python Code:
N = 10
theta = 0.6
rv = sp.stats.binom(N, theta)
rv
xx = np.arange(N + 1)
plt.bar(xx, rv.pmf(xx), align="center")
plt.ylabel("P(x)")
plt.title("pmf of binomial distribution")
plt.show()
np.random.seed(0)
x = rv.rvs(100)
x
sns.countplot(x)
plt.show()
y = np.bincount(x, minlength=N) / len(x)
df = pd.DataFrame({"theoretic": rv.pmf(xx), "simulation": y}).stack()
df = df.reset_index()
df.columns = ["value", "type", "ratio"]
df
sns.barplot(x="value", y="ratio", hue="type", data=df)
plt.show()
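# Optional check (added for illustration): the sample mean of the simulated
# draws should be close to the theoretical mean N * theta = 10 * 0.6 = 6.
print(x.mean(), rv.mean())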
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Modify the results_path variable in the next cell to match the output directory of your Enrich2-Example dataset.
Step2: Open the Experiment HDF5 file.
Step3: The pd.HDFStore.keys() method returns a list of all the tables in this HDF5 file.
Step4: First we will work with the barcode-variant map for this analysis, stored in the "/main/barcodemap" table. The index is the barcode and it has a single column for the variant HGVS string.
Step5: To find out how many unique barcodes are linked to each variant, we'll count the number of times each variant appears in the barcode-variant map using a Counter data structure. We'll then output the top ten variants by number of unique barcodes.
Step6: Next we'll turn the Counter into a data frame.
Step7: The data frame has the information we want, but it will be easier to use later if it's indexed by variant rather than row number.
Step8: We'll use a cutoff to choose variants with a minimum number of unique barcodes, and store this subset in a new index. We'll also exclude the wild type by dropping the first entry of the index.
Step9: We can use this index to get condition-level scores for these variants by querying the "/main/variants/scores" table. Since we are working with an Experiment HDF5 file, the data frame column names are a MultiIndex with two levels, one for experimental conditions and one for data values (see the pandas documentation for more information).
Step10: There are fewer rows in multi_bc_scores than in multi_bc_variants because some of the variants were not scored in all replicate selections, and therefore do not have a condition-level score.
Step11: We'll add a column to the bc_counts data frame that contains scores from the multi_bc_scores data frame. To reference a column in a data frame with a MultiIndex, we need to specify all column levels.
Step12: Many rows in bc_counts are missing scores (displayed as NaN) because those variants were not in multi_bc_scores. We'll drop them before continuing.
Step13: Now that we have a data frame containing the subset of variants we're interested in, we can make a plot of score vs. number of unique barcodes. This example uses functions and colors from the Enrich2 plotting library.
|
<ASSISTANT_TASK:>
Python Code:
% matplotlib inline
from __future__ import print_function
import os.path
from collections import Counter
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from enrich2.variant import WILD_TYPE_VARIANT
import enrich2.plots as enrich_plot
pd.set_option("display.max_rows", 10) # rows shown when pretty-printing
results_path = "/path/to/Enrich2-Example/Results/"
my_store = pd.HDFStore(os.path.join(results_path, "BRCA1_Example_exp.h5"))
my_store.keys()
bcm = my_store['/main/barcodemap']
bcm
variant_bcs = Counter(bcm['value'])
variant_bcs.most_common(10)
bc_counts = pd.DataFrame(variant_bcs.most_common(), columns=['variant', 'barcodes'])
bc_counts
bc_counts.index = bc_counts['variant']
bc_counts.index.name = None
del bc_counts['variant']
bc_counts
bc_cutoff = 10
multi_bc_variants = bc_counts.loc[bc_counts['barcodes'] >= bc_cutoff].index[1:]
multi_bc_variants
multi_bc_scores = my_store.select('/main/variants/scores', where='index in multi_bc_variants')
multi_bc_scores
my_store.close()
bc_counts['score'] = multi_bc_scores['E3', 'score']
bc_counts
bc_counts.dropna(inplace=True)
bc_counts
fig, ax = plt.subplots()
enrich_plot.configure_axes(ax, xgrid=True)
ax.plot(bc_counts['barcodes'],
bc_counts['score'],
linestyle='none', marker='.', alpha=0.6,
color=enrich_plot.plot_colors['bright5'])
ax.set_xlabel("Unique Barcodes")
ax.set_ylabel("Variant Score")
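# Optional follow-up (illustrative, not part of the original analysis): quantify
# the relationship between barcode count and score with a Pearson correlation.
print(bc_counts['barcodes'].corr(bc_counts['score']))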
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Below I'm plotting an example image from the MNIST dataset. These are 28x28 grayscale images of handwritten digits.
Step2: We'll train an autoencoder with these images by flattening them into 784 length vectors. The images from this dataset are already normalized such that the values are between 0 and 1. Let's start by building basically the simplest autoencoder with a single ReLU hidden layer. This layer will be used as the compressed representation. Then, the encoder is the input layer and the hidden layer. The decoder is the hidden layer and the output layer. Since the images are normalized between 0 and 1, we need to use a sigmoid activation on the output layer to get values matching the input.
Step3: Training
Step4: Here I'll write a bit of code to train the network. I'm not too interested in validation here, so I'll just monitor the training loss.
Step5: Checking out the results
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
img = mnist.train.images[5]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
print(mnist.train.images.shape)
# Size of the encoding layer (the hidden layer)
encoding_dim = 32 # feel free to change this value
image_size = mnist.train.images.shape[1]
# Input and target placeholders
inputs_ = tf.placeholder(tf.float32, (None, image_size), name = 'inputs')
targets_ = tf.placeholder(tf.float32, (None, image_size), name = 'targets')
# Output of hidden layer, single fully connected layer here with ReLU activation
encoded = tf.layers.dense(inputs_, units=encoding_dim, activation=tf.nn.relu)
# Output layer logits, fully connected layer with no activation
logits = tf.layers.dense(encoded, image_size, activation=None)
# Sigmoid output from logits
decoded = tf.nn.sigmoid(logits, name ='output')
# Sigmoid cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Mean of the loss
cost = tf.reduce_mean(loss)
# Adam optimizer
opt = tf.train.AdamOptimizer(0.001).minimize(cost)
# Create the session
sess = tf.Session()
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
feed = {inputs_: batch[0], targets_: batch[0]}
batch_cost, _ = sess.run([cost, opt], feed_dict=feed)
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed, compressed = sess.run([decoded, encoded], feed_dict={inputs_: in_imgs})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
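# Illustrative extra (not in the original notebook): `compressed` holds the
# 32-dimensional codes for the ten test images, so its shape should be (10, 32).
print(compressed.shape)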
sess.close()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Part Two
|
<ASSISTANT_TASK:>
Python Code:
import re
numbers = re.compile('(\d+)')
UPPER_LIMIT = 4294967295
with open('../inputs/day20.txt', 'r') as f:
data = [
tuple(map(
int, numbers.findall(line)))
for line in f.readlines()]
data.sort()
rule_index = 0
n = 0
while n <= UPPER_LIMIT:
for i in range(rule_index, len(data)):
low, high = data[i]
if low <= n <= high:
n += 1
break
else:
print('answer', n)
break
rule_index = 0
n = 0
ips = 0
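# Sweep upward through the address space: whenever n falls inside a blocked
# range, jump just past that range; otherwise n is an allowed IP, so count it.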
while n <= UPPER_LIMIT:
for i in range(rule_index, len(data)):
low, high = data[i]
if low <= n <= high:
n = high + 1
break
else:
ips += 1
n += 1
print(ips)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Inner product
Step2: Covariance
Step3: Cosine Similarity
Step4: Pearson Correlation
Step5: OLS (univariate w/o intercept)
Step6: OLS (univariate w/ intercept)
|
<ASSISTANT_TASK:>
Python Code:
# Import some stuff
import numpy as np
import pandas as pd
import scipy.spatial.distance as spd
from pymer4.simulate import easy_multivariate_normal
from pymer4.models import Lm
import matplotlib.pyplot as plt
% matplotlib inline
# Prep some data
X = easy_multivariate_normal(50,2,corrs=.2)
a, b = X[:,0], X[:,1]
np.dot(a,b)
a_centered = a - a.mean()
b_centered = b - b.mean()
np.dot(a_centered,b_centered) / len(a) # could have used len(b) instead
# Check our work
np.cov(a,b,ddof=0)[0][1]
# Euclidean/L2 norm = square root of sum of squared values
# algebra form
a_norm = np.sqrt(np.sum(np.power(a,2)))
# matrix form
b_norm = np.sqrt(np.dot(b,b.T))
# numpy short-cut
# np.linalg.norm(a)
np.dot(a,b) / (a_norm * b_norm)
# Check our work (subract 1 because scipy returns distances)
1 - spd.cosine(a,b)
# Can think of this as normalized covariance OR centered cosine similarity
a_centered_norm = np.linalg.norm(a_centered)
b_centered_norm = np.linalg.norm(b_centered)
np.dot(a_centered,b_centered) / (a_centered_norm * b_centered_norm)
# Check our work
1 - spd.correlation(a,b)
# Can think of this as cosine similarity using only one vector
np.dot(a,b) / (a_norm * a_norm)
# Check our work
model = Lm('B ~ 0 + A',data=pd.DataFrame({'A':a,'B':b}))
model.fit(summarize=False)
model.coefs.iloc[-1,0]
# In the numerator we could actually center a or b, or both.
np.dot(a_centered,b) / (a_centered_norm * a_centered_norm)
# Check our work
model = Lm('B ~ A',data=pd.DataFrame({'A':a,'B':b}))
model.fit(summarize=False)
model.coefs.iloc[-1,0]
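# Cross-check (illustrative): numpy's corrcoef should agree with the centered
# cosine similarity (Pearson correlation) computed above.
print(np.corrcoef(a, b)[0, 1])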
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: DO NOT MODIFY the following constants as they include filepaths used in this notebook and data that is shared during training and inference.
Step2: Setup Environment
Step3: DELETE any old data from previous runs
Step4: Clone the TensorFlow Github Repository, which contains the relevant code required to run this tutorial.
Step5: Load TensorBoard to visualize the accuracy and loss as training proceeds.
Step6: Training
Step7: Generate a TensorFlow Model for Inference
Step8: Generate a TensorFlow Lite Model
Step9: Generate a TensorFlow Lite for MicroControllers Model
Step10: Deploy to a Microcontroller
|
<ASSISTANT_TASK:>
Python Code:
# A comma-delimited list of the words you want to train for.
# The options are: yes,no,up,down,left,right,on,off,stop,go
# All the other words will be used to train an "unknown" label and silent
# audio data with no spoken words will be used to train a "silence" label.
WANTED_WORDS = "yes,no"
# The number of steps and learning rates can be specified as comma-separated
# lists to define the rate at each stage. For example,
# TRAINING_STEPS=12000,3000 and LEARNING_RATE=0.001,0.0001
# will run 12,000 training loops in total, with a rate of 0.001 for the first
# 8,000, and 0.0001 for the final 3,000.
TRAINING_STEPS = "12000,3000"
LEARNING_RATE = "0.001,0.0001"
# Calculate the total number of steps, which is used to identify the checkpoint
# file name.
TOTAL_STEPS = str(sum(map(lambda string: int(string), TRAINING_STEPS.split(","))))
# Print the configuration to confirm it
!echo "Training these words:" $WANTED_WORDS
!echo "Training steps in each stage:" $TRAINING_STEPS
!echo "Learning rate in each stage:" $LEARNING_RATE
!echo "Total number of training steps:" $TOTAL_STEPS
# Calculate the percentage of 'silence' and 'unknown' training samples required
# to ensure that we have equal number of samples for each label.
number_of_labels = WANTED_WORDS.count(',') + 1
number_of_total_labels = number_of_labels + 2 # for 'silence' and 'unknown' label
equal_percentage_of_training_samples = int(100.0/(number_of_total_labels))
SILENT_PERCENTAGE = equal_percentage_of_training_samples
UNKNOWN_PERCENTAGE = equal_percentage_of_training_samples
# Constants which are shared during training and inference
PREPROCESS = 'micro'
WINDOW_STRIDE ='20'
MODEL_ARCHITECTURE = 'tiny_conv' # Other options include: single_fc, conv,
# low_latency_conv, low_latency_svdf, tiny_embedding_conv
QUANTIZE = '1' # For booleans, we provide 1 or 0 (instead of True or False)
# Constants used during training only
VERBOSITY = 'WARN'
EVAL_STEP_INTERVAL = '1000'
SAVE_STEP_INTERVAL = '5000'
# Constants for training directories and filepaths
DATASET_DIR = 'dataset/'
LOGS_DIR = 'logs/'
TRAIN_DIR = 'train/' # for training checkpoints and other files.
# Constants for inference directories and filepaths
import os
MODELS_DIR = 'models/'
os.mkdir(MODELS_DIR)
MODEL_TF = MODELS_DIR + 'model.pb'
MODEL_TFLITE = MODELS_DIR + 'model.tflite'
MODEL_TFLITE_MICRO = MODELS_DIR + 'model.cc'
%tensorflow_version 1.x
import tensorflow as tf
!rm -rf {DATASET_DIR} {LOGS_DIR} {TRAIN_DIR} {MODELS_DIR}
!git clone -q https://github.com/tensorflow/tensorflow
%load_ext tensorboard
%tensorboard --logdir {LOGS_DIR}
!python tensorflow/tensorflow/examples/speech_commands/train.py \
--data_dir={DATASET_DIR} \
--wanted_words={WANTED_WORDS} \
--silence_percentage={SILENT_PERCENTAGE} \
--unknown_percentage={UNKNOWN_PERCENTAGE} \
--preprocess={PREPROCESS} \
--window_stride={WINDOW_STRIDE} \
--model_architecture={MODEL_ARCHITECTURE} \
--quantize={QUANTIZE} \
--how_many_training_steps={TRAINING_STEPS} \
--learning_rate={LEARNING_RATE} \
--train_dir={TRAIN_DIR} \
--summaries_dir={LOGS_DIR} \
--verbosity={VERBOSITY} \
--eval_step_interval={EVAL_STEP_INTERVAL} \
--save_step_interval={SAVE_STEP_INTERVAL}
!python tensorflow/tensorflow/examples/speech_commands/freeze.py \
--wanted_words=$WANTED_WORDS \
--window_stride_ms=$WINDOW_STRIDE \
--preprocess=$PREPROCESS \
--model_architecture=$MODEL_ARCHITECTURE \
--quantize=$QUANTIZE \
--start_checkpoint=$TRAIN_DIR$MODEL_ARCHITECTURE'.ckpt-'$TOTAL_STEPS \
--output_file=$MODEL_TF
input_tensor = 'Reshape_2'
output_tensor = 'labels_softmax'
converter = tf.lite.TFLiteConverter.from_frozen_graph(
MODEL_TF, [input_tensor], [output_tensor])
converter.inference_type = tf.uint8
converter.quantized_input_stats = {input_tensor: (0.0, 9.8077)} # (mean, standard deviation)
tflite_model = converter.convert()
tflite_model_size = open(MODEL_TFLITE, "wb").write(tflite_model)
print("Model is %d bytes" % tflite_model_size)
# Install xxd if it is not available
!apt-get update && apt-get -qq install xxd
# Convert to a C source file
!xxd -i {MODEL_TFLITE} > {MODEL_TFLITE_MICRO}
# Update variable names
REPLACE_TEXT = MODEL_TFLITE.replace('/', '_').replace('.', '_')
!sed -i 's/'{REPLACE_TEXT}'/g_model/g' {MODEL_TFLITE_MICRO}
# Print the C source file
!cat {MODEL_TFLITE_MICRO}
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: However, no trigonometric functions are loaded by default. For that we have to import them from the math library
Step2: Variables
Step3: Exercise
Step4: Run the test below to find out whether you wrote the correct code
Step5: Lists
Step6: But if we try to multiply these data by a number, it will not behave as expected.
Step7: Functions
Step8: This line of code is equivalent to defining a mathematical function as follows
Step9: The notation we just introduced is very useful for mathematical functions, but it forces us to think of definitions in a functional way, which is not always the right solution (especially in a language with an object-oriented programming paradigm).
Step10: With the same results
Step11: Exercise
Step12: And to test it, try converting some data
Step13: Control loops
Step14: or append it to a new list
Step15: and many more things, but for now it is time to start with the practice.
Step16: Run the tests below
Step17: Bisection method
Step18: Once we have two points that we know bracket the interval containing a root, we can start iterating to find the midpoint.
Step19: And from this we can see that the result we got is positive, which means the root has to lie between $x_1$ and $x_M$. So for the next iteration we will use the new interval $x_1 = 1$ and $x_2 = 1.5$, that is, we now assign the value of $x_M$ to $x_2$.
Step20: And we could keep doing this until we reach the accuracy we want, but that would not be a very smart way to do it (we have a machine that loves repetitive tasks and we are not taking advantage of it?).
Step21: If we run the code we had before again, substituting this function, we get exactly the same result
Step22: And now what we have to do is add a condition so that $x_M$ replaces $x_1$ or $x_2$ depending on the sign.
Step23: Yes, I know it looks odd, but if you go through it carefully you will see that it works.
Step24: That is, $n = 10$.
|
<ASSISTANT_TASK:>
Python Code:
2 + 3
2*3
2**3
sin(pi)
from math import sin, pi
sin(pi)
a = 10
a
c =
from pruebas_1 import prueba_1_1
prueba_1_1(_, c)
A = [2, 4, 8, 10]
A
A*2
f = lambda x: x**2 + 1
f(2)
def g(x):
y = x**2 + 1
return y
g(2)
def cel_a_faren(grados_cel):
    grados_faren = # Write the code that does the conversion here
return grados_faren
cel_a_faren(10)
cel_a_faren(50)
for dato in A:
print dato*2
B = []
for dato in A:
B.append(dato*2)
B
C = [] # Write the code that declares the first array inside the brackets
C
D = []
# Write the code for your for loop here
D
from pruebas_1 import prueba_1_3
prueba_1_3(C, D)
f = lambda x: x**3 + 2*x**2 + 10*x - 20
f(1.0)
f(2.0)
x_1, x_2 = 1.0, 2.0
xm1 = (x_1 + x_2)/2.0
f(xm1)
x_1, x_2 = x_1, xm1
xm2 = (x_1 + x_2)/2.0
f(xm2)
def biseccion(x1, x2):
return (x1 + x2)/2.0
x_1, x_2 = x_1, xm1
xm2 = biseccion(x_1, x_2)
f(xm2)
x_1, x_2 = 1.0, 2.0
xm1 = biseccion(x_1, x_2)
f(xm1)
if x_2*xm1 > 0:
x_2 = xm1
else:
x_1 = xm1
xm2 = biseccion(x_1, x_2)
f(xm2)
if x_2*xm2 > 0:
x_2 = xm2
else:
x_1 = xm2
xm3 = biseccion(x_1, x_2)
f(xm3)
from math import log
n = (log(1) - log(0.001))/(log(2))
n
def metodo_biseccion(funcion, x1, x2, n):
xs = []
for i in range(n):
xs.append(biseccion(x1, x2))
if funcion(x2)*funcion(xs[-1]) > 0:
x2 = xs[-1]
else:
x1 = xs[-1]
return xs[-1]
metodo_biseccion(f, 1.0, 2.0, 10)
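# Quick check (added for illustration): after 10 bisection steps the value of f
# at the approximate root should be close to zero.
raiz = metodo_biseccion(f, 1.0, 2.0, 10)
print(f(raiz))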
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load in house sales data
Step2: Create new features
Step3: Squaring bedrooms will increase the separation between not many bedrooms (e.g. 1) and lots of bedrooms (e.g. 4) since 1^2 = 1 but 4^2 = 16. Consequently this variable will mostly affect houses with many bedrooms.
Step4: Applying L1 penalty requires adding an extra parameter (l1_penalty) to the linear regression call in GraphLab Create. (Other tools may have separate implementations of LASSO.) Note that it's important to set l2_penalty=0 to ensure we don't introduce an additional L2 penalty.
Step5: Find what features had non-zero weight.
Step6: Note that a majority of the weights have been set to zero. So by setting an L1 penalty that's large enough, we are performing a subset selection.
Step7: Next, we write a loop that does the following
Step8: QUIZ QUESTIONS
Step9: QUIZ QUESTION
Step10: Exploring the larger range of values to find a narrow range with the desired sparsity
Step11: Now, implement a loop that search through this space of possible l1_penalty values
Step12: Out of this large range, we want to find the two ends of our desired narrow range of l1_penalty. At one end, we will have l1_penalty values that have too few non-zeros, and at the other end, we will have an l1_penalty that has too many non-zeros.
Step13: QUIZ QUESTIONS
Step14: For l1_penalty in np.linspace(l1_penalty_min,l1_penalty_max,20)
Step15: QUIZ QUESTIONS
|
<ASSISTANT_TASK:>
Python Code:
import graphlab
sales = graphlab.SFrame('kc_house_data.gl/')
from math import log, sqrt
sales['sqft_living_sqrt'] = sales['sqft_living'].apply(sqrt)
sales['sqft_lot_sqrt'] = sales['sqft_lot'].apply(sqrt)
sales['bedrooms_square'] = sales['bedrooms']*sales['bedrooms']
# In the dataset, 'floors' was defined with type string,
# so we'll convert them to int, before creating a new feature.
sales['floors'] = sales['floors'].astype(int)
sales['floors_square'] = sales['floors']*sales['floors']
all_features = ['bedrooms', 'bedrooms_square',
'bathrooms',
'sqft_living', 'sqft_living_sqrt',
'sqft_lot', 'sqft_lot_sqrt',
'floors', 'floors_square',
'waterfront', 'view', 'condition', 'grade',
'sqft_above',
'sqft_basement',
'yr_built', 'yr_renovated']
len(all_features)
model_all = graphlab.linear_regression.create(sales, target='price', features=all_features,
validation_set=None,
l2_penalty=0., l1_penalty=1e10, verbose=False)
model_all.get('coefficients').print_rows(num_rows=len(all_features) + 1)
(training_and_validation, testing) = sales.random_split(.9,seed=1) # initial train/test split
(training, validation) = training_and_validation.random_split(0.5, seed=1) # split training into train and validate
import numpy as np
l1_penalty_list = np.logspace(1, 7, num=13)
sort_table = []
for l1_penalty in l1_penalty_list:
model = graphlab.linear_regression.create(training, target='price', features=all_features,
validation_set=None,
l2_penalty=0, l1_penalty=l1_penalty, verbose=False)
predictions = model.predict(validation)
RSS = sum([(predictions[i] - validation[i]['price']) ** 2 for i in range(len(predictions))])
print l1_penalty, RSS
sort_table.append((RSS, l1_penalty))
print sorted(sort_table)[0]
best_model = graphlab.linear_regression.create(training, target='price', features=all_features,
validation_set=None,
l2_penalty=0, l1_penalty=10, verbose=False)
best_model.get('coefficients').print_rows(num_rows=len(all_features) + 1)
predictions = best_model.predict(testing)
RSS = sum([(predictions[i] - testing[i]['price']) ** 2 for i in range(len(predictions))])
print RSS
best_model['coefficients']['value'].nnz()
max_nonzeros = 7
l1_penalty_values = np.logspace(8, 10, num=20)
info = []
for l1_penalty in l1_penalty_values:
model = graphlab.linear_regression.create(training, target='price', features=all_features,
validation_set=None,
l2_penalty=0, l1_penalty=l1_penalty, verbose=False)
nnz = model['coefficients']['value'].nnz()
info.append((l1_penalty, nnz))
for x in enumerate(info):
print x
l1_penalty_min = l1_penalty_values[14]
l1_penalty_max = l1_penalty_values[15]
print l1_penalty_min, l1_penalty_max
l1_penalty_values = np.linspace(l1_penalty_min,l1_penalty_max,20)
print l1_penalty_values
sort_table = []
for l1_penalty in l1_penalty_values:
model = graphlab.linear_regression.create(training, target='price', features=all_features,
validation_set=None,
l2_penalty=0, l1_penalty=l1_penalty, verbose=False)
nnz = model['coefficients']['value'].nnz()
if not nnz == max_nonzeros:
continue
predictions = model.predict(validation)
RSS = sum([(predictions[i] - validation[i]['price']) ** 2 for i in range(len(predictions))])
print l1_penalty, RSS
sort_table.append((RSS, l1_penalty))
print sorted(sort_table)[0]
best_model = graphlab.linear_regression.create(training, target='price', features=all_features,
validation_set=None,
l2_penalty=0, l1_penalty=3320073020.20013, verbose=False)
best_model.get('coefficients').print_rows(num_rows=len(all_features) + 1)
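# Optional follow-up (illustrative; assumes SFrame boolean filtering): list only
# the features whose weights are non-zero in the final 7-feature model.
coefs = best_model.get('coefficients')
print(coefs[coefs['value'] != 0])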
<END_TASK>
|