| repo_name (string, 6-77 chars) | path (string, 8-215 chars) | license (15 license classes) | cells (list) | types (list) |
|---|---|---|---|---|
rldotai/rlbench
|
rlbench/simple_demo_state_values.ipynb
|
gpl-3.0
|
[
"The simplest demo of the various algorithms\nBefore attempting to handle the Off-Policy case with General Value Functions, it would be instructive to first examine how the algorithms perform in the On-Policy case with (more or less) fixed parameters.",
"%load_ext autoreload\n%autoreload 2\n\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nimport algos\nimport features\nimport parametric\nimport policy\nimport chicken\nfrom agents import OnPolicyAgent\nfrom rlbench import *",
"We have a number of algorithms that we can try",
"algos.algo_registry",
"We define an environment and an associated feature function, and set out parameters (or parameter functions) for the experiment.\nIn order to test these algorithms in an efficient and consistent manner, we have an OnPolicyAgent wrapper class that handles the work of performing function approximation and providing the algorithms with the right parameters in the right order.\nThis makes things a little bit simpler to work with (since we've registered all of the algorithms via metaclass programming) and can just re-use the same parameters, trusting that the unneeded ones won't actually be used by the Agent. \nThen, we just run the experiment for each algorithm and plot the results.",
"# define the experiment\nnum_states = 8\nnum_features = 8\nmax_steps = 100000\n\n# set up environment\nenv = chicken.Chicken(num_states)\n\n# set up policy\npol_pi = policy.FixedPolicy({s: {0: 1} if s < 4 else {0: 0.5, 1: 0.5} for s in env.states})\n\n# set feature mapping\n# phi = features.RandomBinary(num_features, num_features // 2, random_seed=101011)\nphi = features.Int2Unary(num_states)\n\n# set up algorithms\nupdate_params = {\n 'alpha': 0.01,\n 'beta': 0.0001,\n 'gm': 0.9,\n 'gm_p': 0.9,\n 'lm': 0.0,\n 'lm_p': 0.0,\n 'interest': 1.0,\n}\n\n# Run all available algorithms \nfor name, alg in algos.algo_registry.items():\n # Set up the agent, run the experiment, get state-values\n agent = OnPolicyAgent(alg(phi.length), pol_pi, phi, update_params)\n steps = run_episode(agent, env, max_steps)\n values = agent.get_values(env.states)\n\n # plot the results\n xvals = list(sorted(env.states))\n yvals = [values[x] for x in xvals]\n plt.title(name)\n plt.xlabel('State Index')\n plt.ylabel('State Value')\n plt.bar(xvals, yvals)\n plt.show()",
"Running a Single Experiment\nTo run just a single on-policy experiment, we can do the following:",
"# define the experiment\nnum_states = 8\nnum_features = 8\nmax_steps = 100000\n\n# set up environment\nenv = chicken.Chicken(num_states)\n\n# set up policy\npol_pi = policy.FixedPolicy({s: {0: 1} if s < 4 else {0: 0.5, 1: 0.5} for s in env.states})\n\n# set feature mapping\n# phi = features.RandomBinary(num_features, num_features // 2, random_seed=101011)\nphi = features.Int2Unary(num_states)\n\n# set up algorithms\nupdate_params = {\n 'alpha': 0.1,\n 'beta': 0.001,\n 'gm': 0.9,\n 'gm_p': 0.9,\n 'lm': 0.0,\n 'lm_p': 0.0,\n 'interest': 1.0,\n}\n\n\n# Choose an algorithm to run\nname = 'GTD'\nalg = algos.algo_registry[name]\n\n# Set up the agent, run the experiment, get state-values\nagent = OnPolicyAgent(alg(phi.length), pol_pi, phi, update_params)\nsteps = run_episode(agent, env, max_steps)\nvalues = agent.get_values(env.states)\n\n# plot the results\nxvals = list(sorted(env.states))\nyvals = [values[x] for x in xvals]\nplt.title(name)\nplt.xlabel('State Index')\nplt.ylabel('State Value')\nplt.bar(xvals, yvals)\nplt.show()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mykespb/jupyters
|
myke-calcmarks.ipynb
|
mit
|
[
"Оценки тестирования по результатам\nОписание\nПусть есть тест с известными правильными ответами и диапазонами ответов на каждый вопрос. \nДля определённости возьмём возможные ответы как 0 или 1.\nСложность каждого задания заранее не известна. \nНам нужно\n\nопределить относительную сложность каждого задания исходя из того, сколько участников правильно ответили на него\nрассчитать баллы каждого участника и его место в рейтинге исходя из полученных выше оценок сложности заданий.",
"%matplotlib inline\nimport math\nimport matplotlib.pyplot as plt\nimport numpy as np",
"Данные",
"# правильные ответы заданий\ngood = 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1\ngood\n\n# полученные ответы от участников, построчно\ngots = ((0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1), \n (0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1),\n (0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0),\n (1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0),\n (0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0))\ngots",
"Расчёты",
"# Расчёт правильности ответов\ncorr = [[1 if rep[i] == good[i] else 0 for i in xrange(len(good))] for rep in gots]\ncorr\n\n# число правильных ответов для каждого вопроса\ncorrbyn = [0 for i in xrange(len(good))]\nfor v in corr:\n for a in xrange(len(good)):\n corrbyn[a] += v[a]\ncorrbyn\n\n# пусть ценности ответов равны, ценности вопросов обратно пропорциональны числу правильно на них ответивших\n# рассчитаем ценность каждого вопроса\nngots = len(gots)\nqual = [ngots - x for x in corrbyn]\nqual\n\n# уточнённые ответы с ценностями\ncorq = [[v[i] * qual[i] for i in xrange(len(v))] for v in corr]\ncorq\n\n# суммы баллов каждого участников\nballs = [sum(v) for v in corq]\nballs\n\n# победители, отсортированно\nwins = sorted(list(enumerate(balls)), key=lambda x: x[1], reverse=True)\nwins",
"Результаты",
"print \"\\nТаким образом, победил участник №\", wins[0][0]+1, \"с\", wins[0][1], \"баллами, затем участник №\", wins[1][0]+1, \"с\", wins[1][1], \"баллами, и т.д.\\n\"\n\n# расстановка по местам\nfor i, u in enumerate(wins):\n print \"место %2d -- участник %2d, баллы: %3d\" % (i+1, u[0]+1, u[1])\n\nplt.bar(range(len(balls)), balls)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
adityaka/misc_scripts
|
python-scripts/data_analytics_learn/link_pandas/Ex_Files_Pandas_Data/Exercise Files/02_02/Begin/Selection.ipynb
|
bsd-3-clause
|
[
"Differences between interactive and production work\nNote: while standard Python / Numpy expressions for selecting and setting are intuitive and come in handy for interactive work, for production code, we (the pandas development team) recommend the optimized pandas data access methods, .at, .iat, .loc, .iloc and .ix.\nfrom:http://pandas.pydata.org/pandas-docs/stable/10min.html",
"import pandas as pd\nimport numpy as np\n\nsample_numpy_data = np.array(np.arange(24)).reshape((6,4))\ndates_index = pd.date_range('20160101', periods=6)\nsample_df = pd.DataFrame(sample_numpy_data, index=dates_index, columns=list('ABCD'))\nsample_df",
"selection using column name\nselection using slice\n\nremember: up to, but not including second index\n\nselection using date time index\n\nnote: last index is included\n\nSelection by label\ndocumentation: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.loc.html\nlabel-location based indexer for selection by label\nSelecting using multi-axis by label\nLabel slicing, both endpoints are included",
"sample_df.loc['2016-01-01':'2016-01-03',['A','B']]",
"Reduce number of dimensions for returned object\n\nnotice order of 'D' and 'B'",
"sample_df.loc['2016-01-03',['D','B']]",
"using result\nselect a scalar",
"sample_df.loc[dates_index[2], 'C']",
"Selection by Position\ndocumentation: http://pandas.pydata.org/pandas-docs/version/0.17.0/generated/pandas.DataFrame.iloc.html\ninteger-location based indexing for selection by position\ninteger slices\nlists of integers\nslicing rows explicitly\nimplicitly selecting all columns\nslicing columns explicitly\nimplicitly selecting all rows\nBoolean Indexing\ntest based upon one column's data\ntest based upon entire data set\nisin() method\ndocumentation: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.isin.html\nReturns a boolean Series showing whether each element in the Series is exactly contained in the passed sequence of values.",
"sample_df_2 = sample_df.copy()\nsample_df_2['Fruits'] = ['apple', 'orange','banana','strawberry','blueberry','pineapple']\nsample_df_2",
"select rows where 'Fruits' column contains eith 'banana' or 'pineapple'; notice 'smoothy', which is not in the column",
"sample_df_2[sample_df_2['Fruits'].isin(['banana','pineapple', 'smoothy'])]"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
thalesians/tsa
|
src/jupyter/python/kalman.ipynb
|
apache-2.0
|
[
"Kalman filtering\nIntroduction\nMuch thought has been given to the interfaces of the Kalman filter and related classes in thalesians.tsa. These interfaces and the underlying implementations incorporate many suggestions by Martin Zinkin.\nBefore we proceed, we import some Python modules:",
"import os, sys\nsys.path.append(os.path.abspath('../../main/python'))\n\nimport datetime as dt\n\nimport numpy as np\nimport numpy.testing as npt\nimport matplotlib.pyplot as plt\n\nfrom thalesians.tsa.distrs import NormalDistr as N\nimport thalesians.tsa.filtering as filtering\nimport thalesians.tsa.filtering.kalman as kalman\nimport thalesians.tsa.numpyutils as npu\nimport thalesians.tsa.processes as proc",
"A single-process, univariate example\nFirst we need a process model. In this case it will be a single stochastic process,",
"process = proc.WienerProcess.create_from_cov(mean=3., cov=25.)",
"This we pass to a newly created Kalman filter, along with the initial time and initial state. The latter takes the form of a normal distribution. We have chosen to use Python datetimes as our data type for time, but we could have chosen ints or something else.",
"t0 = dt.datetime(2017, 5, 12, 16, 18, 25, 204000)\nkf = filtering.kalman.KalmanFilter(t0, state_distr=N(mean=100., cov=250.), process=process)",
"Next we create an observable, which incorporates a particular observation model. In this case, the observation model is particularly simple, since we are observing the entire state of the Kalman filter. Our observation model is a 1x1 identity:",
"observable = kf.create_observable(kalman.LinearGaussianObsModel.create(1.), process)",
"Let's roll forward the time by one hour:",
"t1 = t0 + dt.timedelta(hours=1)",
"What is our predicted observation at this time? Since we haven't observed any actual information, this is our prior observation estimate:",
"prior_predicted_obs1 = observable.predict(t1)\nprior_predicted_obs1",
"We confirm that this is consistent with how our (linear-Gaussian) process model scales over time:",
"prior_predicted_obs1 = observable.predict(t1)\nnpt.assert_almost_equal(prior_predicted_obs1.distr.mean, 100. + 3./24.)\nnpt.assert_almost_equal(prior_predicted_obs1.distr.cov, 250. + 25./24.)\nnpt.assert_almost_equal(prior_predicted_obs1.cross_cov, prior_predicted_obs1.distr.cov)",
"Let us now actually observe our observation. Say, the observation is 100.35 and the observation noise covariance is 100.0:",
"observable.observe(time=t1, obs=N(mean=100.35, cov=100.0))",
"Having seen an actual observation, let us obtain the posterior observation estimate:",
"posterior_predicted_obs1 = observable.predict(t1); posterior_predicted_obs1",
"We can now fast-forward the time, by two hours, say, and repeat the process:",
"t2 = t1 + dt.timedelta(hours=2)\n \nprior_predicted_obs2 = observable.predict(t2)\nnpt.assert_almost_equal(prior_predicted_obs2.distr.mean, 100.28590504 + 2.*3./24.)\nnpt.assert_almost_equal(prior_predicted_obs2.distr.cov, 71.513353115 + 2.*25./24.)\nnpt.assert_almost_equal(prior_predicted_obs2.cross_cov, prior_predicted_obs2.distr.cov)\n \nobservable.observe(time=t2, obs=N(mean=100.35, cov=100.0))\n\nposterior_predicted_obs2 = observable.predict(t2)\nnpt.assert_almost_equal(posterior_predicted_obs2.distr.mean, 100.45709020)\nnpt.assert_almost_equal(posterior_predicted_obs2.distr.cov, 42.395213845)\nnpt.assert_almost_equal(posterior_predicted_obs2.cross_cov, posterior_predicted_obs2.distr.cov)\n",
"A multi-process, multivariate example\nThe real power of our Kalman filter interface is demonstrated for process models consisting of several (independent) stochastic processes:",
"process1 = proc.WienerProcess.create_from_cov(mean=3., cov=25.)\nprocess2 = proc.WienerProcess.create_from_cov(mean=[1., 4.], cov=[[36.0, -9.0], [-9.0, 25.0]])",
"Such models are common in finance, where, for example, the dynamics of a yield curve may be represented by a (multivariate) stochastic process, whereas the idiosyncratic spread for each bond may be an independent stochastic process.\nLet us pass process1 and process2 as a (compound) process model to our Kalman filter, along with the initial time and state:",
"t0 = dt.datetime(2017, 5, 12, 16, 18, 25, 204000)\nkf = kalman.KalmanFilter(\n t0,\n state_distr=N(\n mean=[100.0, 120.0, 130.0],\n cov=[[250.0, 0.0, 0.0],\n [0.0, 360.0, 0.0],\n [0.0, 0.0, 250.0]]),\n process=(process1, process2))",
"We shall now create several observables, each corresponding to a distinct observation model. The first one will observe the entire state:",
"state_observable = kf.create_observable(\n kalman.LinearGaussianObsModel.create(1.0, np.eye(2)),\n process1, process2)",
"The second observable will observe the first coordinate of the first process:",
"coord0_observable = kf.create_observable(\n kalman.LinearGaussianObsModel.create(1.),\n process1)",
"The third, the first coordinate of the second process:",
"coord1_observable = kf.create_observable(\n kalman.LinearGaussianObsModel.create(npu.row(1., 0.)),\n process2)",
"The fourth, the second coordinate of the second process:",
"coord2_observable = kf.create_observable(\n kalman.LinearGaussianObsModel.create(npu.row(0., 1.)),\n process2)",
"The fifth will observe the sum of the entire state (across the two processes):",
"sum_observable = kf.create_observable(\n kalman.LinearGaussianObsModel.create(npu.row(1., 1., 1.)),\n process1, process2)",
"And the sixth a certain linear combination thereof:",
"lin_comb_observable = kf.create_observable(\n kalman.LinearGaussianObsModel.create(npu.row(2., 0., -3.)),\n process1, process2)",
"Fast-forward the time by one hour:",
"t1 = t0 + dt.timedelta(hours=1)",
"Let's predict the state at this time...",
"predicted_obs1_prior = state_observable.predict(t1)\npredicted_obs1_prior",
"And check that it is consistent with the scaling of the (multivariate) Wiener process with time:",
"npt.assert_almost_equal(predicted_obs1_prior.distr.mean,\n npu.col(100.0 + 3.0/24.0, 120.0 + 1.0/24.0, 130.0 + 4.0/24.0))\nnpt.assert_almost_equal(predicted_obs1_prior.distr.cov,\n [[250.0 + 25.0/24.0, 0.0, 0.0],\n [0.0, 360.0 + 36.0/24.0, -9.0/24.0],\n [0.0, -9.0/24.0, 250 + 25.0/24.0]])\nnpt.assert_almost_equal(predicted_obs1_prior.cross_cov, predicted_obs1_prior.distr.cov)",
"Suppose that a new observation arrives, and we observe each of the three coordinates individually:",
"state_observable.observe(time=t1, obs=N(mean=[100.35, 121.0, 135.0],\n cov=[[100.0, 0.0, 0.0],\n [0.0, 400.0, 0.0],\n [0.0, 0.0, 100.0]]));",
"Let's look at our (posterior) predicted state:",
"state_observable.predict(t1)",
"Let's also look at the predictions for the individual coordinates:",
"coord0_observable.predict(t1)\n\ncoord1_observable.predict(t1)\n\ncoord2_observable.predict(t1)",
"The predicted sum:",
"sum_observable.predict(t1)",
"And the predicted linear combination:",
"lin_comb_observable.predict(t1)",
"Let's now go 30 minutes into the future:",
"t2 = t1 + dt.timedelta(minutes=30)",
"And observe only the first coordinate of the second process, with a pretty high confidence:",
"coord1_observable.observe(time=t2, obs=N(mean=125.25, cov=4.))",
"How does our predicted state change?",
"state_observable.predict(t2)",
"Thirty minutes later...",
"t3 = t2 + dt.timedelta(minutes=30)",
"We observe the sum of the three coordinates, rather than the individual coordinates:",
"sum_observable.observe(time=t3, obs=N(mean=365.00, cov=9.))",
"How has our prediction of the state changed?",
"state_observable.predict(t3)",
"And what is its predicted sum?",
"sum_observable.predict(t3)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
kubeflow/kfp-tekton-backend
|
samples/core/dsl_static_type_checking/dsl_static_type_checking.ipynb
|
apache-2.0
|
[
"KubeFlow Pipeline DSL Static Type Checking\nIn this notebook, we will demo: \n\nDefining a KubeFlow pipeline with Python DSL\nCompile the pipeline with type checking\n\nStatic type checking helps users to identify component I/O inconsistencies without running the pipeline. It also shortens the development cycles by catching the errors early. This feature is especially useful in two cases: 1) when the pipeline is huge and manually checking the types is infeasible; 2) when some components are shared ones and the type information is not immediately avaiable to the pipeline authors.\nSince this sample focuses on the DSL type checking, we will use components that are not runnable in the system but with various type checking scenarios. \nComponent definition\nComponents can be defined in either YAML or functions decorated by dsl.component.\nType definition\nTypes can be defined as string or a dictionary with the openapi_schema_validator property formatted as:\nyaml\n{\n type_name: {\n openapi_schema_validator: {\n }\n }\n}\nFor example, the following yaml declares a GCSPath type with the openapi_schema_validator for output field_m.\nThe type could also be a plain string, such as the GcsUri. The type name could be either one of the core types or customized ones.\nyaml\nname: component a\ndescription: component a desc\ninputs:\n - {name: field_l, type: Integer}\noutputs:\n - {name: field_m, type: {GCSPath: {openapi_schema_validator: {type: string, pattern: \"^gs://.*$\" } }}}\n - {name: field_n, type: customized_type}\n - {name: field_o, type: GcsUri} \nimplementation:\n container:\n image: gcr.io/ml-pipeline/component-a\n command: [python3, /pipelines/component/src/train.py]\n args: [\n --field-l, {inputValue: field_l},\n ]\n fileOutputs: \n field_m: /schema.txt\n field_n: /feature.txt\n field_o: /output.txt\nIf you define the component using the function decorator, there are a list of core types.\nFor example, the following component declares a core type Integer for input field_l while\ndeclares customized_type for its output field_n.\npython\n@component\ndef task_factory_a(field_l: Integer()) -> {'field_m': {'GCSPath': {'openapi_schema_validator': '{\"type\": \"string\", \"pattern\": \"^gs://.*$\"}'}}, \n 'field_n': 'customized_type',\n 'field_o': 'Integer'\n }:\n return ContainerOp(\n name = 'operator a',\n image = 'gcr.io/ml-pipeline/component-a',\n arguments = [\n '--field-l', field_l,\n ],\n file_outputs = {\n 'field_m': '/schema.txt',\n 'field_n': '/feature.txt',\n 'field_o': '/output.txt'\n }\n )\nType check switch\nType checking is enabled by default. It can be disabled as --disable-type-check argument if dsl-compile is run in the command line, or dsl.compiler.Compiler().compile(type_check=False).\nIf one wants to ignore the type for one parameter, call ignore_type() function in PipelineParam.\nHow does type checking work?\nDSL compiler checks the type consistencies among components by checking the type_name as well as the openapi_schema_validator. Some special cases are listed here:\n1. Type checking succeed: If the upstream/downstream components lack the type information.\n2. Type checking succeed: If the type check is disabled.\n3. Type checking succeed: If the parameter type is ignored.\nSetup\nInstall Pipeline SDK",
"!python3 -m pip install 'kfp>=0.1.31' --quiet\n",
"Type Check with YAML components: successful scenario\nAuthor components in YAML",
"# In yaml, one can optionally add the type information to both inputs and outputs.\n# There are two ways to define the types: string or a dictionary with the openapi_schema_validator property.\n# The openapi_schema_validator is a json schema object that describes schema of the parameter value.\ncomponent_a = '''\\\nname: component a\ndescription: component a desc\ninputs:\n - {name: field_l, type: Integer}\noutputs:\n - {name: field_m, type: {GCSPath: {openapi_schema_validator: {type: string, pattern: \"^gs://.*$\" } }}}\n - {name: field_n, type: customized_type}\n - {name: field_o, type: GcsUri} \nimplementation:\n container:\n image: gcr.io/ml-pipeline/component-a\n command: [python3, /pipelines/component/src/train.py]\n args: [\n --field-l, {inputValue: field_l},\n ]\n fileOutputs: \n field_m: /schema.txt\n field_n: /feature.txt\n field_o: /output.txt\n'''\ncomponent_b = '''\\\nname: component b\ndescription: component b desc\ninputs:\n - {name: field_x, type: customized_type}\n - {name: field_y, type: GcsUri}\n - {name: field_z, type: {GCSPath: {openapi_schema_validator: {type: string, pattern: \"^gs://.*$\" } }}}\noutputs:\n - {name: output_model_uri, type: GcsUri}\nimplementation:\n container:\n image: gcr.io/ml-pipeline/component-a\n command: [python3]\n args: [\n --field-x, {inputValue: field_x},\n --field-y, {inputValue: field_y},\n --field-z, {inputValue: field_z},\n ]\n fileOutputs: \n output_model_uri: /schema.txt\n'''",
"Author a pipeline with the above components",
"import kfp.components as comp\nimport kfp.dsl as dsl\nimport kfp.compiler as compiler\n# The components are loaded as task factories that generate container_ops.\ntask_factory_a = comp.load_component_from_text(text=component_a)\ntask_factory_b = comp.load_component_from_text(text=component_b)\n\n#Use the component as part of the pipeline\n@dsl.pipeline(name='type_check_a',\n description='')\ndef pipeline_a():\n a = task_factory_a(field_l=12)\n b = task_factory_b(field_x=a.outputs['field_n'], field_y=a.outputs['field_o'], field_z=a.outputs['field_m'])\n\ncompiler.Compiler().compile(pipeline_a, 'pipeline_a.zip', type_check=True)",
"Type Check with YAML components: failed scenario\nAuthor components in YAML",
"# In this case, the component_a contains an output field_o as GcrUri \n# but the component_b requires an input field_y as GcsUri\ncomponent_a = '''\\\nname: component a\ndescription: component a desc\ninputs:\n - {name: field_l, type: Integer}\noutputs:\n - {name: field_m, type: {GCSPath: {openapi_schema_validator: {type: string, pattern: \"^gs://.*$\" } }}}\n - {name: field_n, type: customized_type}\n - {name: field_o, type: GcrUri} \nimplementation:\n container:\n image: gcr.io/ml-pipeline/component-a\n command: [python3, /pipelines/component/src/train.py]\n args: [\n --field-l, {inputValue: field_l},\n ]\n fileOutputs: \n field_m: /schema.txt\n field_n: /feature.txt\n field_o: /output.txt\n'''\ncomponent_b = '''\\\nname: component b\ndescription: component b desc\ninputs:\n - {name: field_x, type: customized_type}\n - {name: field_y, type: GcsUri}\n - {name: field_z, type: {GCSPath: {openapi_schema_validator: {type: string, pattern: \"^gs://.*$\" } }}}\noutputs:\n - {name: output_model_uri, type: GcsUri}\nimplementation:\n container:\n image: gcr.io/ml-pipeline/component-a\n command: [python3]\n args: [\n --field-x, {inputValue: field_x},\n --field-y, {inputValue: field_y},\n --field-z, {inputValue: field_z},\n ]\n fileOutputs: \n output_model_uri: /schema.txt\n'''",
"Author a pipeline with the above components",
"import kfp.components as comp\nimport kfp.dsl as dsl\nimport kfp.compiler as compiler\nfrom kfp.dsl.types import InconsistentTypeException\ntask_factory_a = comp.load_component_from_text(text=component_a)\ntask_factory_b = comp.load_component_from_text(text=component_b)\n\n#Use the component as part of the pipeline\n@dsl.pipeline(name='type_check_b',\n description='')\ndef pipeline_b():\n a = task_factory_a(field_l=12)\n b = task_factory_b(field_x=a.outputs['field_n'], field_y=a.outputs['field_o'], field_z=a.outputs['field_m'])\n\ntry:\n compiler.Compiler().compile(pipeline_b, 'pipeline_b.zip', type_check=True)\nexcept InconsistentTypeException as e:\n print(e)",
"Author a pipeline with the above components but type checking disabled.",
"# Disable the type_check\ncompiler.Compiler().compile(pipeline_b, 'pipeline_b.zip', type_check=False)",
"Type Check with decorated components: successful scenario\nAuthor components with decorator",
"from kfp.dsl import component\nfrom kfp.dsl.types import Integer, GCSPath\nfrom kfp.dsl import ContainerOp\n# when components are defined based on the component decorator,\n# the type information is annotated to the input or function returns.\n# There are two ways to define the type: string or a dictionary with the openapi_schema_validator property\n@component\ndef task_factory_a(field_l: Integer()) -> {'field_m': {'GCSPath': {'openapi_schema_validator': '{\"type\": \"string\", \"pattern\": \"^gs://.*$\"}'}}, \n 'field_n': 'customized_type',\n 'field_o': 'Integer'\n }:\n return ContainerOp(\n name = 'operator a',\n image = 'gcr.io/ml-pipeline/component-a',\n arguments = [\n '--field-l', field_l,\n ],\n file_outputs = {\n 'field_m': '/schema.txt',\n 'field_n': '/feature.txt',\n 'field_o': '/output.txt'\n }\n )\n\n# Users can also use the core types that are pre-defined in the SDK.\n# For a full list of core types, check out: https://github.com/kubeflow/pipelines/blob/master/sdk/python/kfp/dsl/types.py\n@component\ndef task_factory_b(field_x: 'customized_type',\n field_y: Integer(),\n field_z: {'GCSPath': {'openapi_schema_validator': '{\"type\": \"string\", \"pattern\": \"^gs://.*$\"}'}}) -> {'output_model_uri': 'GcsUri'}:\n return ContainerOp(\n name = 'operator b',\n image = 'gcr.io/ml-pipeline/component-a',\n command = [\n 'python3',\n field_x,\n ],\n arguments = [\n '--field-y', field_y,\n '--field-z', field_z,\n ],\n file_outputs = {\n 'output_model_uri': '/schema.txt',\n }\n )",
"Author a pipeline with the above components",
"#Use the component as part of the pipeline\n@dsl.pipeline(name='type_check_c',\n description='')\ndef pipeline_c():\n a = task_factory_a(field_l=12)\n b = task_factory_b(field_x=a.outputs['field_n'], field_y=a.outputs['field_o'], field_z=a.outputs['field_m'])\n\ncompiler.Compiler().compile(pipeline_c, 'pipeline_c.zip', type_check=True)",
"Type Check with decorated components: failure scenario\nAuthor components with decorator",
"from kfp.dsl import component\nfrom kfp.dsl.types import Integer, GCSPath\nfrom kfp.dsl import ContainerOp\n# task_factory_a outputs an input field_m with the openapi_schema_validator different\n# from the task_factory_b's input field_z.\n# One is gs:// and the other is gcs://\n@component\ndef task_factory_a(field_l: Integer()) -> {'field_m': {'GCSPath': {'openapi_schema_validator': '{\"type\": \"string\", \"pattern\": \"^gs://.*$\"}'}}, \n 'field_n': 'customized_type',\n 'field_o': 'Integer'\n }:\n return ContainerOp(\n name = 'operator a',\n image = 'gcr.io/ml-pipeline/component-a',\n arguments = [\n '--field-l', field_l,\n ],\n file_outputs = {\n 'field_m': '/schema.txt',\n 'field_n': '/feature.txt',\n 'field_o': '/output.txt'\n }\n )\n\n@component\ndef task_factory_b(field_x: 'customized_type',\n field_y: Integer(),\n field_z: {'GCSPath': {'openapi_schema_validator': '{\"type\": \"string\", \"pattern\": \"^gcs://.*$\"}'}}) -> {'output_model_uri': 'GcsUri'}:\n return ContainerOp(\n name = 'operator b',\n image = 'gcr.io/ml-pipeline/component-a',\n command = [\n 'python3',\n field_x,\n ],\n arguments = [\n '--field-y', field_y,\n '--field-z', field_z,\n ],\n file_outputs = {\n 'output_model_uri': '/schema.txt',\n }\n )",
"Author a pipeline with the above components",
"#Use the component as part of the pipeline\n@dsl.pipeline(name='type_check_d',\n description='')\ndef pipeline_d():\n a = task_factory_a(field_l=12)\n b = task_factory_b(field_x=a.outputs['field_n'], field_y=a.outputs['field_o'], field_z=a.outputs['field_m'])\n\ntry:\n compiler.Compiler().compile(pipeline_d, 'pipeline_d.zip', type_check=True)\nexcept InconsistentTypeException as e:\n print(e)",
"Author a pipeline with the above components but ignoring types.",
"#Use the component as part of the pipeline\n@dsl.pipeline(name='type_check_d',\n description='')\ndef pipeline_d():\n a = task_factory_a(field_l=12)\n # For each of the arguments, authors can also ignore the types by calling ignore_type function.\n b = task_factory_b(field_x=a.outputs['field_n'], field_y=a.outputs['field_o'], field_z=a.outputs['field_m'].ignore_type())\ncompiler.Compiler().compile(pipeline_d, 'pipeline_d.zip', type_check=True)",
"Type Check with missing type information\nAuthor components(with missing types)",
"from kfp.dsl import component\nfrom kfp.dsl.types import Integer, GCSPath\nfrom kfp.dsl import ContainerOp\n# task_factory_a lacks the type information for output filed_n\n# task_factory_b lacks the type information for input field_y\n# When no type information is provided, it matches all types.\n@component\ndef task_factory_a(field_l: Integer()) -> {'field_m': {'GCSPath': {'openapi_schema_validator': '{\"type\": \"string\", \"pattern\": \"^gs://.*$\"}'}}, \n 'field_o': 'Integer'\n }:\n return ContainerOp(\n name = 'operator a',\n image = 'gcr.io/ml-pipeline/component-a',\n arguments = [\n '--field-l', field_l,\n ],\n file_outputs = {\n 'field_m': '/schema.txt',\n 'field_n': '/feature.txt',\n 'field_o': '/output.txt'\n }\n )\n\n@component\ndef task_factory_b(field_x: 'customized_type',\n field_y,\n field_z: {'GCSPath': {'openapi_schema_validator': '{\"type\": \"string\", \"pattern\": \"^gs://.*$\"}'}}) -> {'output_model_uri': 'GcsUri'}:\n return ContainerOp(\n name = 'operator b',\n image = 'gcr.io/ml-pipeline/component-a',\n command = [\n 'python3',\n field_x,\n ],\n arguments = [\n '--field-y', field_y,\n '--field-z', field_z,\n ],\n file_outputs = {\n 'output_model_uri': '/schema.txt',\n }\n )",
"Author a pipeline with the above components",
"#Use the component as part of the pipeline\n@dsl.pipeline(name='type_check_e',\n description='')\ndef pipeline_e():\n a = task_factory_a(field_l=12)\n b = task_factory_b(field_x=a.outputs['field_n'], field_y=a.outputs['field_o'], field_z=a.outputs['field_m'])\n\ncompiler.Compiler().compile(pipeline_e, 'pipeline_e.zip', type_check=True)",
"Type Check with both named arguments and positional arguments",
"#Use the component as part of the pipeline\n@dsl.pipeline(name='type_check_f',\n description='')\ndef pipeline_f():\n a = task_factory_a(field_l=12)\n b = task_factory_b(a.outputs['field_n'], a.outputs['field_o'], field_z=a.outputs['field_m'])\n\ncompiler.Compiler().compile(pipeline_f, 'pipeline_f.zip', type_check=True)",
"Type Check between pipeline parameters and component parameters",
"@component\ndef task_factory_a(field_m: {'GCSPath': {'openapi_schema_validator': '{\"type\": \"string\", \"pattern\": \"^gs://.*$\"}'}}, field_o: 'Integer'):\n return ContainerOp(\n name = 'operator a',\n image = 'gcr.io/ml-pipeline/component-b',\n arguments = [\n '--field-l', field_m,\n '--field-o', field_o,\n ],\n )\n\n# Pipeline input types are also checked against the component I/O types.\n@dsl.pipeline(name='type_check_g',\n description='')\ndef pipeline_g(a: {'GCSPath': {'openapi_schema_validator': '{\"type\": \"string\", \"pattern\": \"^gs://.*$\"}'}}='gs://kfp-path', b: Integer()=12):\n task_factory_a(field_m=a, field_o=b)\n\ntry:\n compiler.Compiler().compile(pipeline_g, 'pipeline_g.zip', type_check=True)\nexcept InconsistentTypeException as e:\n print(e)",
"Clean up",
"from pathlib import Path\nfor p in Path(\".\").glob(\"pipeline_[a-g].zip\"):\n p.unlink()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
qutip/qutip-notebooks
|
examples/spin-chain.ipynb
|
lgpl-3.0
|
[
"QuTiP example: Dynamics of a Spin Chain\nJ.R. Johansson and P.D. Nation\nFor more information about QuTiP see http://qutip.org",
"%matplotlib inline\n\nimport matplotlib.pyplot as plt\n\nimport numpy as np\n\nfrom qutip import *",
"Hamiltonian:\n$\\displaystyle H = - \\frac{1}{2}\\sum_n^N h_n \\sigma_z(n) - \\frac{1}{2} \\sum_n^{N-1} [ J_x^{(n)} \\sigma_x(n) \\sigma_x(n+1) + J_y^{(n)} \\sigma_y(n) \\sigma_y(n+1) +J_z^{(n)} \\sigma_z(n) \\sigma_z(n+1)]$",
"def integrate(N, h, Jx, Jy, Jz, psi0, tlist, gamma, solver):\n\n si = qeye(2)\n sx = sigmax()\n sy = sigmay()\n sz = sigmaz()\n\n sx_list = []\n sy_list = []\n sz_list = []\n\n for n in range(N):\n op_list = []\n for m in range(N):\n op_list.append(si)\n\n op_list[n] = sx\n sx_list.append(tensor(op_list))\n\n op_list[n] = sy\n sy_list.append(tensor(op_list))\n\n op_list[n] = sz\n sz_list.append(tensor(op_list))\n\n # construct the hamiltonian\n H = 0\n\n # energy splitting terms\n for n in range(N):\n H += - 0.5 * h[n] * sz_list[n]\n\n # interaction terms\n for n in range(N-1):\n H += - 0.5 * Jx[n] * sx_list[n] * sx_list[n+1]\n H += - 0.5 * Jy[n] * sy_list[n] * sy_list[n+1]\n H += - 0.5 * Jz[n] * sz_list[n] * sz_list[n+1]\n\n # collapse operators\n c_op_list = []\n\n # spin dephasing\n for n in range(N):\n if gamma[n] > 0.0:\n c_op_list.append(np.sqrt(gamma[n]) * sz_list[n])\n\n # evolve and calculate expectation values\n if solver == \"me\":\n result = mesolve(H, psi0, tlist, c_op_list, sz_list)\n elif solver == \"mc\":\n ntraj = 250 \n result = mcsolve(H, psi0, tlist, c_op_list, sz_list, ntraj)\n\n return result.expect\n\n#\n# set up the calculation\n#\nsolver = \"me\" # use the ode solver\n#solver = \"mc\" # use the monte-carlo solver\n\nN = 10 # number of spins\n\n# array of spin energy splittings and coupling strengths. here we use\n# uniform parameters, but in general we don't have too\nh = 1.0 * 2 * np.pi * np.ones(N) \nJz = 0.1 * 2 * np.pi * np.ones(N)\nJx = 0.1 * 2 * np.pi * np.ones(N)\nJy = 0.1 * 2 * np.pi * np.ones(N)\n# dephasing rate\ngamma = 0.01 * np.ones(N)\n\n# intial state, first spin in state |1>, the rest in state |0>\npsi_list = []\npsi_list.append(basis(2,1))\nfor n in range(N-1):\n psi_list.append(basis(2,0))\npsi0 = tensor(psi_list)\n\ntlist = np.linspace(0, 50, 200)\n\nsz_expt = integrate(N, h, Jx, Jy, Jz, psi0, tlist, gamma, solver)\n\nfig, ax = plt.subplots(figsize=(10,6))\n\nfor n in range(N):\n ax.plot(tlist, np.real(sz_expt[n]), label=r'$\\langle\\sigma_z^{(%d)}\\rangle$'%n)\n\nax.legend(loc=0)\nax.set_xlabel(r'Time [ns]')\nax.set_ylabel(r'\\langle\\sigma_z\\rangle')\nax.set_title(r'Dynamics of a Heisenberg spin chain');",
"Software version:",
"from qutip.ipynbtools import version_table\n\nversion_table()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
csdms/pymt
|
notebooks/cem.ipynb
|
mit
|
[
"Coastline Evolution Model\nThe Coastline Evolution Model (CEM) addresses predominately sandy, wave-dominated coastlines on time-scales ranging from years to millenia and on spatial scales ranging from kilometers to hundreds of kilometers. Shoreline evolution results from gradients in wave-driven alongshore sediment transport. \nAt its most basic level, the model follows the standard 'one-line' modeling approach, where the cross-shore dimension is collapsed into a single data point. However, the model allows the planview shoreline to take on arbitrary local orientations, and even fold back upon itself, as complex shapes such as capes and spits form under some wave climates (distributions of wave influences from different approach angles). So the model works on a 2D grid.\nThe model has been used to represent varying geology underlying a sandy coastline and shoreface in a simplified manner and enables the simulation of coastline evolution when sediment supply from an eroding shoreface may be constrained. CEM also supports the simulation of human manipulations to coastline evolution through beach nourishment or hard structures.\nCEM authors & developers: Andrew Ashton, Brad Murray, Jordan Slot, Jaap Nienhuis and others.\nThis version is adapted from a CSDMS teaching notebook, listed below. \nIt has been created by Irina Overeem, October 2019 for a Sedimentary Modeling course.\n\nLink to this notebook: https://github.com/csdms/pymt/blob/master/notebooks/cem.ipynb\nInstall command: $ conda install notebook pymt_cem\nDownload local copy of notebook:\n\n$ curl -O https://raw.githubusercontent.com/csdms/pymt/master/notebooks/cem.ipynb\nKey References\nAshton, A.D., Murray, B., Arnault, O. 2001. Formation of coastline features by large-scale instabilities induced by high-angle waves, Nature 414.\nAshton, A. D., and A. B. Murray (2006), High-angle wave instability and emergent shoreline shapes: 1. Modeling of sand waves, flying spits, and capes, J. Geophys. Res., 111, F04011, doi:10.1029/2005JF000422.\nLinks\n\nCEM source code: Look at the files that have deltas in their name.\nCEM description on CSDMS: Detailed information on the CEM model.\n\nInteracting with the Coastline Evolution Model BMI using Python",
"import numpy as np\nimport matplotlib.pyplot as plt\n\n#Some magic that allows us to view images within the notebook.\n%matplotlib inline",
"Import the Cem class. In Python, a model with a Basic Model Interface (BMI) will have no arguments for its constructor. Note that although the class has been instantiated, it's not yet ready to be run. We'll get to that later!",
"import pymt.models\ncem = pymt.models.Cem()",
"Even though we can't run our waves model yet, we can still get some information about it. Some things we can do with our model are to get help, to get the names of the input variables or output variables.",
"help(cem)\n\ncem.input_var_names\n\ncem.output_var_names",
"We can also get information about specific variables. Here we'll look at some info about wave direction. This is the main input of the Cem model. Notice that BMI components always use CSDMS standard names. The CSDMS Standard Name for wave angle is,\n\"sea_surface_water_wave__azimuth_angle_of_opposite_of_phase_velocity\"\n\nQuite a mouthful, I know. With that name we can get information about that variable and the grid that it is on (it's actually not a one).",
"angle_name = 'sea_surface_water_wave__azimuth_angle_of_opposite_of_phase_velocity'\n\nprint(\"Data type: %s\" % cem.get_var_type(angle_name))\nprint(\"Units: %s\" % cem.get_var_units(angle_name))\nprint(\"Grid id: %d\" % cem.get_var_grid(angle_name))\nprint(\"Number of elements in grid: %d\" % cem.get_grid_number_of_nodes(0))\nprint(\"Type of grid: %s\" % cem.get_grid_type(0))",
"First we initialize the model with the BMI initialize method. Normally we would pass it a string that represents the name of an input file. For this example we'll pass None, which tells Cem to use some defaults.",
"args = cem.setup(number_of_rows=100, number_of_cols=200, grid_spacing=200.)\n\ncem.initialize(*args)",
"Before running the model, let's set a couple input parameters. These two parameters represent the wave height and wave period of the incoming waves to the coastline.",
"cem.set_value(\"sea_surface_water_wave__height\", 1.5)\ncem.set_value(\"sea_surface_water_wave__period\", 7.)\ncem.set_value(\"sea_surface_water_wave__azimuth_angle_of_opposite_of_phase_velocity\", 0. * np.pi / 180.)",
"Assignment 1\nLet's think about the wave conditions that are the input to this CEM model run. For both assignment 1 and 2 it will help to look theory up in the paper by Ashton & Murray 2001, and/or Ashton et al, 2006.\nHow do wave height and wave period determine sediment transport?\nThe relationship between sediment transport and wave height and period is non-linear. What are the implications of this non-linearity for the impact of lots of small ocean storms versus a few extreme storms with much higher wave height?",
"# list your answers here",
"Assignment 2\nThe other important part of the wave conditions that is input to CEM model is under what angle the waves approach the shore. It will help to read the paper by Ashton & Murray 2001, and the longer version by Ashton et al, 2006.\nExplain why incoming wave angle is an important control?",
"# discuss wave angle here",
"The CEM model operates on a grid, consisting of a number of rows and colums with values. \nThe main output variable for this model is water depth, or bathymetry. In this case, the CSDMS Standard Name is much shorter:\n\"sea_water__depth\"\n\nFirst we find out which of Cem's grids contains water depth.",
"grid_id = cem.get_var_grid('sea_water__depth')",
"With the grid_id, we can now get information about the grid. For instance, the number of dimension and the type of grid. This grid happens to be uniform rectilinear. If you were to look at the \"grid\" types for wave height and period, you would see that they aren't on grids at all but instead are scalars, or single values.",
"grid_type = cem.get_grid_type(grid_id)\ngrid_rank = cem.get_grid_ndim(grid_id)\nprint('Type of grid: %s (%dD)' % (grid_type, grid_rank))",
"Because this grid is uniform rectilinear, it is described by a set of BMI methods that are only available for grids of this type. These methods include:\n* get_grid_shape\n* get_grid_spacing\n* get_grid_origin",
"spacing = np.empty((grid_rank, ), dtype=float)\n\nshape = cem.get_grid_shape(grid_id)\ncem.get_grid_spacing(grid_id, out=spacing)\n\nprint('The grid has %d rows and %d columns' % (shape[0], shape[1]))\nprint('The spacing between rows is {:f} m and between columns is {:f} m'.format(spacing[0], spacing[1]))",
"Allocate memory for the water depth grid and get the current values from cem.",
"z = np.empty(shape, dtype=float)\ncem.get_value('sea_water__depth', out=z)",
"Here I define a convenience function for plotting the water depth and making it look pretty. You don't need to worry too much about it's internals for this tutorial. It just saves us some typing later on.",
"def plot_coast(spacing, z):\n import matplotlib.pyplot as plt\n \n xmin, xmax = 0., z.shape[1] * spacing[0] * 1e-3\n ymin, ymax = 0., z.shape[0] * spacing[1] * 1e-3\n\n plt.imshow(z, extent=[xmin, xmax, ymin, ymax], origin='lower', cmap='ocean')\n plt.colorbar().ax.set_ylabel('Water Depth (m)')\n plt.xlabel('Along shore (km)')\n plt.ylabel('Cross shore (km)')",
"It generates plots that look like this. We begin with a flat delta (green) and a linear coastline (y = 3 km). The bathymetry drops off linearly to the top of the domain to more than 20 m water depth.",
"plot_coast(spacing, z)",
"Right now we have waves coming in but no sediment entering the ocean. To add a sediment source and specify its discharge, we need to figure out where to put it. For now we'll put it on a cell that's next to the ocean.",
"#Allocate memory for the sediment discharge array \n# and set the bedload sediment flux at the coastal cell to some value.\nqs = np.zeros_like(z)\nqs[0, 100] = 750",
"The CSDMS Standard Name for this variable is:\n\"land_surface_water_sediment~bedload__mass_flow_rate\"\n\nYou can get an idea of the units based on the quantity part of the name. \"mass_flow_rate\" indicates mass per time. You can double-check this with the BMI method function get_var_units.",
"cem.get_var_units('land_surface_water_sediment~bedload__mass_flow_rate')",
"Assignment 3\nHere, we are introducing a river mouth of one gridcell of 200 by 200m. And we just have specified a bedload flux of 750 kg/s. Is this a realistic incoming value?\nHow much water discharge and slope would you possibly need to transport a bedload flux of that magnitude?",
"# read in the csv file of bedload measurements in the Rhine River, the Netherlands\n# these data were collected over different days over a season in 2004, at nearby locations.\n# plot how river discharge controls bedload; Q (x-axis) and Qb (y-axis) data. \n# label both axes",
"Assignment 4\nThe bedload measurements were a combination of very different methods, and taken at different locations (although nearby). The data is quite scattered. But if you would fit a linear regression line through this data,\nyou would find that the river discharge of the Rhine can be related to its bedload transport as: \n Qb=0.0163*Q",
"# extrapolate this relationship and calculate how much river discharge, Q, \n# would be needed to transport the model specification Qb of 1250 kg/s\n\ncem.time_step, cem.time_units, cem.time",
"Set the bedload flux and run the model.",
"for time in range(3000):\n cem.set_value('land_surface_water_sediment~bedload__mass_flow_rate', qs)\n cem.update_until(time)\n \ncem.get_value('sea_water__depth', out=z)\n\ncem.time\n\ncem.get_value('sea_water__depth', out=z)\nplot_coast(spacing, z)\n\n# this code gives you a handle on retrieving the position of the river mouth over time \nval = np.empty((5, ), dtype=float)\ncem.get_value(\"basin_outlet~coastal_center__x_coordinate\", val)\nval / 100.\n\nprint(val)",
"Assignment 5\nDescribe what the CEM model has simulated in 3000 timesteps. How far has this wave influenced delta prograded? \nRecall the R-factor for fluvial dominance (Nienhuis 2015). What would the R-factor be for this simulated system? (smaller then 1, larger then 1)? Motivate.",
"# your run description goes here",
"Assignment 6\nLet's add another sediment source with a different flux and update the model. remember that the Basic Model Interface allows you to update values and then continue a simulation",
"# introduce a second river here\nqs[0, 150] = 1500\nfor time in range(4000):\n cem.set_value('land_surface_water_sediment~bedload__mass_flow_rate', qs)\n cem.update_until(time)\n \ncem.get_value('sea_water__depth', out=z)\n\nplot_coast(spacing, z)",
"Here we shut off the sediment supply completely.",
"qs.fill(0.)\nfor time in range(4500):\n cem.set_value('land_surface_water_sediment~bedload__mass_flow_rate', qs)\n cem.update_until(time)\n \ncem.get_value('sea_water__depth', out=z)\n\nplot_coast(spacing, z)",
"Assignment 7\nCreate a new CEM run (remember to create a new cem instance) with a more subdued river influx and higher waves.",
"import pymt.models\ncemLR = pymt.models.Cem()\n\nargs = cemLR.setup(number_of_rows=100, number_of_cols=200, grid_spacing=200.)\n\ncemLR.initialize(*args)\n\n# Here you will have to change the settings to a different wave climate\ncemLR.set_value(\"sea_surface_water_wave__height\", 1.5)\ncemLR.set_value(\"sea_surface_water_wave__period\", 7.)\ncemLR.set_value(\"sea_surface_water_wave__azimuth_angle_of_opposite_of_phase_velocity\", 0. * np.pi / 180.)\n\nzLR = np.empty(shape, dtype=float)\ncemLR.get_value('sea_water__depth', out=zLR)\n\n# set your smaller river input here",
"Assignment 8",
"# run your new simulation for a similar time as the other first simulation\n\nfor time in range(3000):\n cemLR.set_value('land_surface_water_sediment~bedload__mass_flow_rate', qsLR)\n cemLR.update_until(time)\n \ncemLR.get_value('sea_water__depth', out=zLR)\n\n# hypothesize how your run output would be different\n\n# plot the sea water depth\n# save out this figure",
"BONUS Assignment 9 - for graduate students\nCreate a new CEM run (remember to create a new cem instance) that is all similar to your first simulation. \nIn this experiment we will use a different incoming wave angle, and look at its effect without a river input first, 1000 timesteps and then with a river input for another 2000 timsteps.",
"## initialize CEM instance\n\n# set the wave angle\n\n# run for 1000 timesteps\n\n# plot intermediate output\n\n# save out an array of this sea water depth at t=1000\n\n# describe what effect you see. Is it to be expected? \n#What is the unique theory in the CEM model that drives this behavior?",
"BONUS Assignment 9 - for graduate students\nUse the same CEM run that you have just started. \nKeep the incoming wave angle you had specified, and now run the rest of the simulation with a new river input for another 2000 timsteps. 'Place' the rivermouth out of center in the grid (although not too close to the grid boundary, that can give instability problems).",
"# your code to introduce new river input goes here\n\n# run an additional 2000 timesteps\n\n# plot\n\n# describe what effect you see. Is it to be expected? \n# Is this a fluvial-dominated delta or a wave-dominated delta? \n# Is the delta assymetric?\n\n# save out the array of your final sea water depth\n# calculate the deposition and erosion per gridcell between t=1000 and t=3000",
"NICELY DONE!"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
kit-cel/wt
|
wt/vorlesung/ch7_9/annulus.ipynb
|
gpl-2.0
|
[
"Content and Objective\n\nShow points that are distributed uniformly on an annulus being defined by $${ (x_1,x_2)^\\mathrm{T}: x_2\\geq 0, 1\\leq x_1^2+x_2^2\\leq 2}$$\nMarginal pdf w.r.t. x_1 is determined analytically and by simulation\nConditional pdf w.r.t. x_2 is provided and results are being discussed\n\nImport",
"# importing\nimport numpy as np\n\nimport matplotlib.pyplot as plt\nimport matplotlib\n\n# showing figures inline\n%matplotlib inline\n\n# plotting options \nfont = {'size' : 20}\nplt.rc('font', **font)\nplt.rc('text', usetex=1)\n\nmatplotlib.rc('figure', figsize=(18, 6) )",
"Simulation of Points",
"# number of points to be sampled \nN_points = int( 1e5 )\n\n\n# initialize empty list for points\npoints = []\n\n# loop for generating points\nwhile len( points ) < N_points:\n\n # sample point in [ -2, 2 ] x [ 0, 2 ]\n x1 = -2 + 4 * np.random.rand()\n x2 = 2 * np.random.rand() \n \n # check if sampled point is within annulus\n if ( np.sqrt( x1**2 + x2**2 ) >= 1 and np.sqrt( x1**2 + x2**2 ) <= 2):\n points.append( [x1,x2] )\n else:\n continue\n\n# get x and y coordinated of all points \ncollect_X1 = [ p[0] for p in points ]\ncollect_X2 = [ p[1] for p in points ]\n\n# plotting\n# NOTE: Only subset of points is shown; realized by \"slicing\" only part of the lists\nplt.plot( collect_X1[ : 10000], collect_X2[ : 10000], 'x') \n\nplt.grid( True )\nplt.xlabel('$x_1$')\nplt.ylabel('$x_2$')",
"Get PDF $f_{X_1}(x_1)$ and Plot with Histogram",
"# theoretical pdf as on lecture slides\ndelta_x = 0.001\n\n# define slices on x1-axis as [-2,-1], [-1,1], [1,2]\nx1_m2_m1 = np.arange( -2, -1, delta_x )\nx1_m1_p1 = np.arange( -1 + delta_x, 1, delta_x )\nx1_p1_p2 = np.arange( 1, 2 + delta_x , delta_x ) \n\nf1_m2_m1 = 2 / 3. / np.pi * np.sqrt( 4 - x1_m2_m1**2 )\nf1_m1_p1 = 2 / 3. / np.pi * ( np.sqrt( 4 - x1_m1_p1**2 ) - np.sqrt( 1 - x1_m1_p1**2 ) )\nf1_p1_p2 = 2 / 3. / np.pi * np.sqrt( 4 - x1_p1_p2**2 )\n\nx1 = np.concatenate( ( x1_m2_m1, x1_m1_p1, x1_p1_p2 ) )\nf1 = np.concatenate( ( f1_m2_m1, f1_m1_p1, f1_p1_p2 ) )\n\n#plotting theoretical pdf and according histogram\nplt.plot( x1, f1, linewidth=2.0, label='theo.')\nplt.hist( collect_X1, bins=100, density=1, label='sim.' ) \n\nplt.grid( True )\nplt.xlabel('$x_1, n$')\nplt.ylabel('$f_{{X_1}}(x_1), H_{{{}}}(n)$'.format( N_points ) )\nplt.legend(loc='upper right')",
"Get Conditional PDF $f_{X_2}(x_2|X_1=x_1)$",
"# several values for x1\nx1 = [ -1, -.5, 0, .5, 1 ]\n\n# vector x2 for the pdf and pdf\n# array of array f2_cond_x2 for several pdfs depending on x1\nx2 = np.arange( 0, 2, delta_x ) \nf2_cond_x1 = np.zeros( ( len(x1), len(x2) ) )\n\n# loop for different x1\nfor ind_x1, val_x1 in enumerate( x1 ):\n\n # get theoretical pdf\n f2_cond_x1[ ind_x1, : ] = 1 / ( np.sqrt( 4 - val_x1**2 ) - np.sqrt( 1 - val_x1**2 ) )\n\n # set pdf to zero where necessary\n ind_to_be_del = [ ind_x2 for ind_x2, val_x2 in enumerate( x2 ) \n if ( np.abs( val_x2 ) > np.sqrt( 4 - val_x1**2 ) or np.abs( val_x2 ) < np.sqrt( 1 - val_x1**2 ) or val_x2 < 0 or val_x2 > 2) ]\n \n f2_cond_x1[ ind_x1, ind_to_be_del ] = 0.0 \n\n# plotting\n\n# loop for different x1\nfor ind_x, val_x in enumerate( x1 ): \n\n plt.plot( x2, f2_cond_x1[ ind_x, :], linewidth=2.0, label='$x_1 = {{{}}}$'.format( val_x ) ) \n\n\n# get histogram of conditional pdf by choosing only point where x_1 is approx. val_x1\n# and where y coordinate is within the possible values\nval_x1 = 0.5\n\nconditional_points = [ x2 for [x1,x2] in points if np.abs( x1 - val_x1 ) < .05 \n and 1 - val_x1**2 <= x2 ** 2 <= 4 - val_x1**2 ]\n\nplt.hist( conditional_points, bins=100, density=1, color='#ff7f0e', label='sim. for $x_1$={}'.format(val_x1) ) \n\n\nplt.xlabel('$x_2$')\nplt.ylabel('$f_{X_2}(x_2|X_1 = x_1)$')\nplt.grid( True )\nplt.legend( loc = 'upper left' )\nplt.xlim( (-.1, 2.1) ) \nplt.ylim( (-.1, 1.1) ) ",
"<font color=\"#009682\"><b>Question:</b></font> Can you reason why simulated (orange) distribution doesn't exactly match a uniform distribution?\n<font color=\"#009682\"><b>Exercise:</b></font> Show that expectation of X_2 is as found in the lecture.\n<font color=\"#009682\"><b>Hint:</b></font> Maybe Numpy's \"average\" might be a good idea..."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
nitin-cherian/LifeLongLearning
|
Python/Python Morsels/multimax/Trey's Solutions/multimax.ipynb
|
mit
|
[
"def multimax(iterable):\n \"\"\" Return a list of all maximum values \"\"\"\n max_item = max(iterable)\n \n return [\n item\n for item in iterable\n if item == max_item\n ]\n\nmultimax([1, 2, 4, 3])\n\nmultimax([1, 4, 2, 4, 3])\n\nmultimax([1, 1, 1])",
"Bonus1: multimax function returns an empty list if the given iterable is empty",
"multimax([])\n\ndef multimax(iterable):\n \"\"\" Return a list of all maximum values \"\"\"\n try:\n max_item = max(iterable)\n except ValueError:\n return []\n \n return [\n item\n for item in iterable\n if item == max_item\n ]\n\nmultimax([])\n\ndef multimax(iterable):\n \"\"\" Return a list of all maximum values \"\"\"\n max_item = max(iterable, default=None) # Using the default keyword-only argument of max prevents exception.\n \n return [\n item\n for item in iterable\n if item == max_item\n ]\n\nmultimax([])",
"Bonus2: multimax function will work with iterators (lazy iterables) such as files, zip objects, and generators",
"numbers = [1, 3, 8, 5, 4, 10, 6]\nodds = (n for n in numbers if n % 2 == 1)\n\nmultimax(odds)\n\ndef multimax(iterable):\n \"\"\" Return a list of all maximum values \"\"\"\n maximums = []\n \n for item in iterable:\n if not maximums or maximums[0] == item:\n maximums.append(item)\n else:\n if item > maximums[0]:\n maximums = [item]\n \n return maximums \n\nmultimax([])\n\nmultimax([1, 4, 2, 4, 3])\n\nnumbers = [1, 3, 8, 5, 4, 10, 6]\nodds = (n for n in numbers if n % 2 == 1)\n\nmultimax(odds)",
"Bonus3: multimax function accept a keyword argument called \"key\" that is a function which will be used to determine the key by which to compare values as maximums",
"def multimax(iterable, key=None):\n \"\"\" Return a list of all maximum values \"\"\"\n if key is None:\n def key(item): return item\n \n maximums = []\n key_max = None\n \n for item in iterable:\n k = key(item)\n \n if k == key_max:\n maximums.append(item)\n elif not maximums or k > key_max:\n key_max = k\n maximums = [item] \n \n return maximums \n\nmultimax([1, 2, 4, 3])\n\nmultimax([1, 4, 2, 4, 3])\n\nnumbers = [1, 3, 8, 5, 4, 10, 6]\nodds = (n for n in numbers if n % 2 == 1)\n\nmultimax(odds)\n\nmultimax([])\n\nwords = [\"cheese\", \"shop\", \"ministry\", \"of\", \"silly\", \"walks\", \"argument\", \"clinic\"]\n\nmultimax(words, key=len)",
"We may use lambda when no key is provided like so:",
"def multimax(iterable, key=lambda x: x):\n \"\"\" Return a list of all maximum values \"\"\"\n maximums = []\n key_max = None\n \n for item in iterable:\n k = key(item)\n \n if k == key_max:\n maximums.append(item)\n elif not maximums or k > key_max:\n key_max = k\n maximums = [item] \n \n return maximums ",
"Unit Tests",
"import unittest\n\n\nclass MultiMaxTests(unittest.TestCase):\n\n \"\"\"Tests for multimax.\"\"\"\n\n def test_single_max(self):\n self.assertEqual(multimax([1, 2, 4, 3]), [4])\n\n def test_two_max(self):\n self.assertEqual(multimax([1, 4, 2, 4, 3]), [4, 4])\n\n def test_all_max(self):\n self.assertEqual(multimax([1, 1, 1, 1, 1]), [1, 1, 1, 1, 1])\n\n def test_lists(self):\n inputs = [[0], [1], [], [0, 1], [1]]\n expected = [[1], [1]]\n self.assertEqual(multimax(inputs), expected)\n\n def test_order_maintained(self):\n inputs = [\n (3, 2),\n (2, 1),\n (3, 2),\n (2, 0),\n (3, 2),\n ]\n expected = [\n inputs[0],\n inputs[2],\n inputs[4],\n ]\n outputs = multimax(inputs)\n self.assertEqual(outputs, expected)\n self.assertIs(outputs[0], expected[0])\n self.assertIs(outputs[1], expected[1])\n self.assertIs(outputs[2], expected[2])\n\n # To test the Bonus part of this exercise, comment out the following line\n # @unittest.expectedFailure\n def test_empty(self):\n self.assertEqual(multimax([]), [])\n\n # To test the Bonus part of this exercise, comment out the following line\n # @unittest.expectedFailure\n def test_iterator(self):\n numbers = [1, 4, 2, 4, 3]\n squares = (n**2 for n in numbers)\n self.assertEqual(multimax(squares), [16, 16])\n\n # To test the Bonus part of this exercise, comment out the following line\n # @unittest.expectedFailure\n def test_key_function(self):\n words = [\"alligator\", \"animal\", \"apple\", \"artichoke\", \"avalanche\"]\n outputs = [\"alligator\", \"artichoke\", \"avalanche\"]\n self.assertEqual(multimax(words, key=len), outputs)\n\n\nif __name__ == \"__main__\":\n unittest.main(argv=['first-arg-is-ignored'], exit=False)"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
manipopopo/tensorflow
|
tensorflow/contrib/eager/python/examples/generative_examples/image_captioning_with_attention.ipynb
|
apache-2.0
|
[
"Copyright 2018 The TensorFlow Authors.\nLicensed under the Apache License, Version 2.0 (the \"License\").\nImage Captioning with Attention\n<table class=\"tfo-notebook-buttons\" align=\"left\"><td>\n<a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/generative_examples/image_captioning_with_attention.ipynb\">\n <img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a> \n</td><td>\n<a target=\"_blank\" href=\"https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/eager/python/examples/generative_examples/image_captioning_with_attention.ipynb\"><img width=32px src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a></td></table>\n\nImage captioning is the task of generating a caption for an image. Given an image like this:\n \nImage Source, License: Public Domain\nOur goal is generate a caption, such as \"a surfer riding on a wave\". Here, we'll use an attention based model. This enables us to see which parts of the image the model focuses on as it generates a caption.\n\nThis model architecture below is similar to Show, Attend and Tell: Neural Image Caption Generation with Visual Attention. \nThe code uses tf.keras and eager execution, which you can learn more about in the linked guides.\nThis notebook is an end-to-end example. If you run it, it will download the MS-COCO dataset, preprocess and cache a subset of the images using Inception V3, train an encoder-decoder model, and use it to generate captions on new images.\nThe code requires TensorFlow version >=1.9. If you're running this in Colab\nIn this example, we're training on a relatively small amount of data as an example. On a single P100 GPU, this example will take about ~2 hours to train. We train on the first 30,000 captions (corresponding to about ~20,000 images depending on shuffling, as there are multiple captions per image in the dataset)",
"# Import TensorFlow and enable eager execution\n# This code requires TensorFlow version >=1.9\nimport tensorflow as tf\ntf.enable_eager_execution()\n\n# We'll generate plots of attention in order to see which parts of an image\n# our model focuses on during captioning\nimport matplotlib.pyplot as plt\n\n# Scikit-learn includes many helpful utilities\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.utils import shuffle\n\nimport re\nimport numpy as np\nimport os\nimport time\nimport json\nfrom glob import glob\nfrom PIL import Image\nimport pickle",
"Download and prepare the MS-COCO dataset\nWe will use the MS-COCO dataset to train our model. This dataset contains >82,000 images, each of which has been annotated with at least 5 different captions. The code code below will download and extract the dataset automatically. \nCaution: large download ahead. We'll use the training set, it's a 13GB file.",
"annotation_zip = tf.keras.utils.get_file('captions.zip', \n cache_subdir=os.path.abspath('.'),\n origin = 'http://images.cocodataset.org/annotations/annotations_trainval2014.zip',\n extract = True)\nannotation_file = os.path.dirname(annotation_zip)+'/annotations/captions_train2014.json'\n\nname_of_zip = 'train2014.zip'\nif not os.path.exists(os.path.abspath('.') + '/' + name_of_zip):\n image_zip = tf.keras.utils.get_file(name_of_zip, \n cache_subdir=os.path.abspath('.'),\n origin = 'http://images.cocodataset.org/zips/train2014.zip',\n extract = True)\n PATH = os.path.dirname(image_zip)+'/train2014/'\nelse:\n PATH = os.path.abspath('.')+'/train2014/'",
"Optionally, limit the size of the training set for faster training\nFor this example, we'll select a subset of 30,000 captions and use these and the corresponding images to train our model. As always, captioning quality will improve if you choose to use more data.",
"# read the json file\nwith open(annotation_file, 'r') as f:\n annotations = json.load(f)\n\n# storing the captions and the image name in vectors\nall_captions = []\nall_img_name_vector = []\n\nfor annot in annotations['annotations']:\n caption = '<start> ' + annot['caption'] + ' <end>'\n image_id = annot['image_id']\n full_coco_image_path = PATH + 'COCO_train2014_' + '%012d.jpg' % (image_id)\n \n all_img_name_vector.append(full_coco_image_path)\n all_captions.append(caption)\n\n# shuffling the captions and image_names together\n# setting a random state\ntrain_captions, img_name_vector = shuffle(all_captions,\n all_img_name_vector,\n random_state=1)\n\n# selecting the first 30000 captions from the shuffled set\nnum_examples = 30000\ntrain_captions = train_captions[:num_examples]\nimg_name_vector = img_name_vector[:num_examples]\n\nlen(train_captions), len(all_captions)",
"Preprocess the images using InceptionV3\nNext, we will use InceptionV3 (pretrained on Imagenet) to classify each image. We will extract features from the last convolutional layer. \nFirst, we will need to convert the images into the format inceptionV3 expects by:\n* Resizing the image to (299, 299)\n* Using the preprocess_input method to place the pixels in the range of -1 to 1 (to match the format of the images used to train InceptionV3).",
"def load_image(image_path):\n img = tf.read_file(image_path)\n img = tf.image.decode_jpeg(img, channels=3)\n img = tf.image.resize_images(img, (299, 299))\n img = tf.keras.applications.inception_v3.preprocess_input(img)\n return img, image_path",
"Initialize InceptionV3 and load the pretrained Imagenet weights\nTo do so, we'll create a tf.keras model where the output layer is the last convolutional layer in the InceptionV3 architecture. \n* Each image is forwarded through the network and the vector that we get at the end is stored in a dictionary (image_name --> feature_vector). \n* We use the last convolutional layer because we are using attention in this example. The shape of the output of this layer is 8x8x2048. \n* We avoid doing this during training so it does not become a bottleneck. \n* After all the images are passed through the network, we pickle the dictionary and save it to disk.",
"image_model = tf.keras.applications.InceptionV3(include_top=False, \n weights='imagenet')\nnew_input = image_model.input\nhidden_layer = image_model.layers[-1].output\n\nimage_features_extract_model = tf.keras.Model(new_input, hidden_layer)",
"Caching the features extracted from InceptionV3\nWe will pre-process each image with InceptionV3 and cache the output to disk. Caching the output in RAM would be faster but memory intensive, requiring 8 * 8 * 2048 floats per image. At the time of writing, this would exceed the memory limitations of Colab (although these may change, an instance appears to have about 12GB of memory currently). \nPerformance could be improved with a more sophisticated caching strategy (e.g., by sharding the images to reduce random access disk I/O) at the cost of more code.\nThis will take about 10 minutes to run in Colab with a GPU. If you'd like to see a progress bar, you could: install tqdm (!pip install tqdm), then change this line: \nfor img, path in image_dataset: \nto:\nfor img, path in tqdm(image_dataset):.",
"# getting the unique images\nencode_train = sorted(set(img_name_vector))\n\n# feel free to change the batch_size according to your system configuration\nimage_dataset = tf.data.Dataset.from_tensor_slices(\n encode_train).map(load_image).batch(16)\n\nfor img, path in image_dataset:\n batch_features = image_features_extract_model(img)\n batch_features = tf.reshape(batch_features, \n (batch_features.shape[0], -1, batch_features.shape[3]))\n\n for bf, p in zip(batch_features, path):\n path_of_feature = p.numpy().decode(\"utf-8\")\n np.save(path_of_feature, bf.numpy())",
"Preprocess and tokenize the captions\n\nFirst, we'll tokenize the captions (e.g., by splitting on spaces). This will give us a vocabulary of all the unique words in the data (e.g., \"surfing\", \"football\", etc).\nNext, we'll limit the vocabulary size to the top 5,000 words to save memory. We'll replace all other words with the token \"UNK\" (for unknown).\nFinally, we create a word --> index mapping and vice-versa.\nWe will then pad all sequences to the be same length as the longest one.",
"# This will find the maximum length of any caption in our dataset\ndef calc_max_length(tensor):\n return max(len(t) for t in tensor)\n\n# The steps above is a general process of dealing with text processing\n\n# choosing the top 5000 words from the vocabulary\ntop_k = 5000\ntokenizer = tf.keras.preprocessing.text.Tokenizer(num_words=top_k, \n oov_token=\"<unk>\", \n filters='!\"#$%&()*+.,-/:;=?@[\\]^_`{|}~ ')\ntokenizer.fit_on_texts(train_captions)\ntrain_seqs = tokenizer.texts_to_sequences(train_captions)\n\ntokenizer.word_index = {key:value for key, value in tokenizer.word_index.items() if value <= top_k}\n# putting <unk> token in the word2idx dictionary\ntokenizer.word_index[tokenizer.oov_token] = top_k + 1\ntokenizer.word_index['<pad>'] = 0\n\n# creating the tokenized vectors\ntrain_seqs = tokenizer.texts_to_sequences(train_captions)\n\n# creating a reverse mapping (index -> word)\nindex_word = {value:key for key, value in tokenizer.word_index.items()}\n\n# padding each vector to the max_length of the captions\n# if the max_length parameter is not provided, pad_sequences calculates that automatically\ncap_vector = tf.keras.preprocessing.sequence.pad_sequences(train_seqs, padding='post')\n\n# calculating the max_length \n# used to store the attention weights\nmax_length = calc_max_length(train_seqs)",
"Split the data into training and testing",
"# Create training and validation sets using 80-20 split\nimg_name_train, img_name_val, cap_train, cap_val = train_test_split(img_name_vector, \n cap_vector, \n test_size=0.2, \n random_state=0)\n\nlen(img_name_train), len(cap_train), len(img_name_val), len(cap_val)",
"Our images and captions are ready! Next, let's create a tf.data dataset to use for training our model.",
"# feel free to change these parameters according to your system's configuration\n\nBATCH_SIZE = 64\nBUFFER_SIZE = 1000\nembedding_dim = 256\nunits = 512\nvocab_size = len(tokenizer.word_index)\n# shape of the vector extracted from InceptionV3 is (64, 2048)\n# these two variables represent that\nfeatures_shape = 2048\nattention_features_shape = 64\n\n# loading the numpy files \ndef map_func(img_name, cap):\n img_tensor = np.load(img_name.decode('utf-8')+'.npy')\n return img_tensor, cap\n\ndataset = tf.data.Dataset.from_tensor_slices((img_name_train, cap_train))\n\n# using map to load the numpy files in parallel\n# NOTE: Be sure to set num_parallel_calls to the number of CPU cores you have\n# https://www.tensorflow.org/api_docs/python/tf/py_func\ndataset = dataset.map(lambda item1, item2: tf.py_func(\n map_func, [item1, item2], [tf.float32, tf.int32]), num_parallel_calls=8)\n\n# shuffling and batching\ndataset = dataset.shuffle(BUFFER_SIZE)\n# https://www.tensorflow.org/api_docs/python/tf/contrib/data/batch_and_drop_remainder\ndataset = dataset.batch(BATCH_SIZE)\ndataset = dataset.prefetch(1)",
"Model\nFun fact, the decoder below is identical to the one in the example for Neural Machine Translation with Attention.\nThe model architecture is inspired by the Show, Attend and Tell paper.\n\nIn this example, we extract the features from the lower convolutional layer of InceptionV3 giving us a vector of shape (8, 8, 2048). \nWe squash that to a shape of (64, 2048).\nThis vector is then passed through the CNN Encoder(which consists of a single Fully connected layer).\nThe RNN(here GRU) attends over the image to predict the next word.",
"def gru(units):\n # If you have a GPU, we recommend using the CuDNNGRU layer (it provides a \n # significant speedup).\n if tf.test.is_gpu_available():\n return tf.keras.layers.CuDNNGRU(units, \n return_sequences=True, \n return_state=True, \n recurrent_initializer='glorot_uniform')\n else:\n return tf.keras.layers.GRU(units, \n return_sequences=True, \n return_state=True, \n recurrent_activation='sigmoid', \n recurrent_initializer='glorot_uniform')\n\nclass BahdanauAttention(tf.keras.Model):\n def __init__(self, units):\n super(BahdanauAttention, self).__init__()\n self.W1 = tf.keras.layers.Dense(units)\n self.W2 = tf.keras.layers.Dense(units)\n self.V = tf.keras.layers.Dense(1)\n \n def call(self, features, hidden):\n # features(CNN_encoder output) shape == (batch_size, 64, embedding_dim)\n \n # hidden shape == (batch_size, hidden_size)\n # hidden_with_time_axis shape == (batch_size, 1, hidden_size)\n hidden_with_time_axis = tf.expand_dims(hidden, 1)\n \n # score shape == (batch_size, 64, hidden_size)\n score = tf.nn.tanh(self.W1(features) + self.W2(hidden_with_time_axis))\n \n # attention_weights shape == (batch_size, 64, 1)\n # we get 1 at the last axis because we are applying score to self.V\n attention_weights = tf.nn.softmax(self.V(score), axis=1)\n \n # context_vector shape after sum == (batch_size, hidden_size)\n context_vector = attention_weights * features\n context_vector = tf.reduce_sum(context_vector, axis=1)\n \n return context_vector, attention_weights\n\nclass CNN_Encoder(tf.keras.Model):\n # Since we have already extracted the features and dumped it using pickle\n # This encoder passes those features through a Fully connected layer\n def __init__(self, embedding_dim):\n super(CNN_Encoder, self).__init__()\n # shape after fc == (batch_size, 64, embedding_dim)\n self.fc = tf.keras.layers.Dense(embedding_dim)\n \n def call(self, x):\n x = self.fc(x)\n x = tf.nn.relu(x)\n return x\n\nclass RNN_Decoder(tf.keras.Model):\n def __init__(self, embedding_dim, units, vocab_size):\n super(RNN_Decoder, self).__init__()\n self.units = units\n\n self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)\n self.gru = gru(self.units)\n self.fc1 = tf.keras.layers.Dense(self.units)\n self.fc2 = tf.keras.layers.Dense(vocab_size)\n \n self.attention = BahdanauAttention(self.units)\n \n def call(self, x, features, hidden):\n # defining attention as a separate model\n context_vector, attention_weights = self.attention(features, hidden)\n \n # x shape after passing through embedding == (batch_size, 1, embedding_dim)\n x = self.embedding(x)\n \n # x shape after concatenation == (batch_size, 1, embedding_dim + hidden_size)\n x = tf.concat([tf.expand_dims(context_vector, 1), x], axis=-1)\n \n # passing the concatenated vector to the GRU\n output, state = self.gru(x)\n \n # shape == (batch_size, max_length, hidden_size)\n x = self.fc1(output)\n \n # x shape == (batch_size * max_length, hidden_size)\n x = tf.reshape(x, (-1, x.shape[2]))\n \n # output shape == (batch_size * max_length, vocab)\n x = self.fc2(x)\n\n return x, state, attention_weights\n\n def reset_state(self, batch_size):\n return tf.zeros((batch_size, self.units))\n\nencoder = CNN_Encoder(embedding_dim)\ndecoder = RNN_Decoder(embedding_dim, units, vocab_size)\n\noptimizer = tf.train.AdamOptimizer()\n\n# We are masking the loss calculated for padding\ndef loss_function(real, pred):\n mask = 1 - np.equal(real, 0)\n loss_ = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=real, logits=pred) * mask\n return 
tf.reduce_mean(loss_)",
"Training\n\nWe extract the features stored in the respective .npy files and then pass those features through the encoder.\nThe encoder output, hidden state(initialized to 0) and the decoder input (which is the start token) is passed to the decoder.\nThe decoder returns the predictions and the decoder hidden state.\nThe decoder hidden state is then passed back into the model and the predictions are used to calculate the loss.\nUse teacher forcing to decide the next input to the decoder.\nTeacher forcing is the technique where the target word is passed as the next input to the decoder.\nThe final step is to calculate the gradients and apply it to the optimizer and backpropagate.",
"# adding this in a separate cell because if you run the training cell \n# many times, the loss_plot array will be reset\nloss_plot = []\n\nEPOCHS = 20\n\nfor epoch in range(EPOCHS):\n start = time.time()\n total_loss = 0\n \n for (batch, (img_tensor, target)) in enumerate(dataset):\n loss = 0\n \n # initializing the hidden state for each batch\n # because the captions are not related from image to image\n hidden = decoder.reset_state(batch_size=target.shape[0])\n\n dec_input = tf.expand_dims([tokenizer.word_index['<start>']] * BATCH_SIZE, 1)\n \n with tf.GradientTape() as tape:\n features = encoder(img_tensor)\n \n for i in range(1, target.shape[1]):\n # passing the features through the decoder\n predictions, hidden, _ = decoder(dec_input, features, hidden)\n\n loss += loss_function(target[:, i], predictions)\n \n # using teacher forcing\n dec_input = tf.expand_dims(target[:, i], 1)\n \n total_loss += (loss / int(target.shape[1]))\n \n variables = encoder.variables + decoder.variables\n \n gradients = tape.gradient(loss, variables) \n \n optimizer.apply_gradients(zip(gradients, variables), tf.train.get_or_create_global_step())\n \n if batch % 100 == 0:\n print ('Epoch {} Batch {} Loss {:.4f}'.format(epoch + 1, \n batch, \n loss.numpy() / int(target.shape[1])))\n # storing the epoch end loss value to plot later\n loss_plot.append(total_loss / len(cap_vector))\n \n print ('Epoch {} Loss {:.6f}'.format(epoch + 1, \n total_loss/len(cap_vector)))\n print ('Time taken for 1 epoch {} sec\\n'.format(time.time() - start))\n\nplt.plot(loss_plot)\nplt.xlabel('Epochs')\nplt.ylabel('Loss')\nplt.title('Loss Plot')\nplt.show()",
"Caption!\n\nThe evaluate function is similar to the training loop, except we don't use teacher forcing here. The input to the decoder at each time step is its previous predictions along with the hidden state and the encoder output.\nStop predicting when the model predicts the end token.\nAnd store the attention weights for every time step.",
"def evaluate(image):\n attention_plot = np.zeros((max_length, attention_features_shape))\n\n hidden = decoder.reset_state(batch_size=1)\n\n temp_input = tf.expand_dims(load_image(image)[0], 0)\n img_tensor_val = image_features_extract_model(temp_input)\n img_tensor_val = tf.reshape(img_tensor_val, (img_tensor_val.shape[0], -1, img_tensor_val.shape[3]))\n\n features = encoder(img_tensor_val)\n\n dec_input = tf.expand_dims([tokenizer.word_index['<start>']], 0)\n result = []\n\n for i in range(max_length):\n predictions, hidden, attention_weights = decoder(dec_input, features, hidden)\n\n attention_plot[i] = tf.reshape(attention_weights, (-1, )).numpy()\n\n predicted_id = tf.multinomial(tf.exp(predictions), num_samples=1)[0][0].numpy()\n result.append(index_word[predicted_id])\n\n if index_word[predicted_id] == '<end>':\n return result, attention_plot\n\n dec_input = tf.expand_dims([predicted_id], 0)\n\n attention_plot = attention_plot[:len(result), :]\n return result, attention_plot\n\ndef plot_attention(image, result, attention_plot):\n temp_image = np.array(Image.open(image))\n\n fig = plt.figure(figsize=(10, 10))\n \n len_result = len(result)\n for l in range(len_result):\n temp_att = np.resize(attention_plot[l], (8, 8))\n ax = fig.add_subplot(len_result//2, len_result//2, l+1)\n ax.set_title(result[l])\n img = ax.imshow(temp_image)\n ax.imshow(temp_att, cmap='gray', alpha=0.6, extent=img.get_extent())\n\n plt.tight_layout()\n plt.show()\n\n# captions on the validation set\nrid = np.random.randint(0, len(img_name_val))\nimage = img_name_val[rid]\nreal_caption = ' '.join([index_word[i] for i in cap_val[rid] if i not in [0]])\nresult, attention_plot = evaluate(image)\n\nprint ('Real Caption:', real_caption)\nprint ('Prediction Caption:', ' '.join(result))\nplot_attention(image, result, attention_plot)\n# opening the image\nImage.open(img_name_val[rid])",
"Try it on your own images\nFor fun, below we've provided a method you can use to caption your own images with the model we've just trained. Keep in mind, it was trained on a relatively small amount of data, and your images may be different from the training data (so be prepared for weird results!)",
"image_url = 'https://tensorflow.org/images/surf.jpg'\nimage_extension = image_url[-4:]\nimage_path = tf.keras.utils.get_file('image'+image_extension, \n origin=image_url)\n\nresult, attention_plot = evaluate(image_path)\nprint ('Prediction Caption:', ' '.join(result))\nplot_attention(image_path, result, attention_plot)\n# opening the image\nImage.open(image_path)",
"Next steps\nCongrats! You've just trained an image captioning model with attention. Next, we recommend taking a look at this example Neural Machine Translation with Attention. It uses a similar architecture to translate between Spanish and English sentences. You can also experiment with training the code in this notebook on a different dataset."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
metpy/MetPy
|
v0.10/_downloads/d02fda82caa4290e31f980126221b2a4/Wind_SLP_Interpolation.ipynb
|
bsd-3-clause
|
[
"%matplotlib inline",
"Wind and Sea Level Pressure Interpolation\nInterpolate sea level pressure, as well as wind component data,\nto make a consistent looking analysis, featuring contours of pressure and wind barbs.",
"import cartopy.crs as ccrs\nimport cartopy.feature as cfeature\nfrom matplotlib.colors import BoundaryNorm\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\n\nfrom metpy.calc import wind_components\nfrom metpy.cbook import get_test_data\nfrom metpy.interpolate import interpolate_to_grid, remove_nan_observations\nfrom metpy.plots import add_metpy_logo\nfrom metpy.units import units\n\nto_proj = ccrs.AlbersEqualArea(central_longitude=-97., central_latitude=38.)",
"Read in data",
"with get_test_data('station_data.txt') as f:\n data = pd.read_csv(f, header=0, usecols=(2, 3, 4, 5, 18, 19),\n names=['latitude', 'longitude', 'slp', 'temperature', 'wind_dir',\n 'wind_speed'],\n na_values=-99999)",
"Project the lon/lat locations to our final projection",
"lon = data['longitude'].values\nlat = data['latitude'].values\nxp, yp, _ = to_proj.transform_points(ccrs.Geodetic(), lon, lat).T",
"Remove all missing data from pressure",
"x_masked, y_masked, pres = remove_nan_observations(xp, yp, data['slp'].values)",
"Interpolate pressure using Cressman interpolation",
"slpgridx, slpgridy, slp = interpolate_to_grid(x_masked, y_masked, pres, interp_type='cressman',\n minimum_neighbors=1, search_radius=400000,\n hres=100000)",
"Get wind information and mask where either speed or direction is unavailable",
"wind_speed = (data['wind_speed'].values * units('m/s')).to('knots')\nwind_dir = data['wind_dir'].values * units.degree\n\ngood_indices = np.where((~np.isnan(wind_dir)) & (~np.isnan(wind_speed)))\n\nx_masked = xp[good_indices]\ny_masked = yp[good_indices]\nwind_speed = wind_speed[good_indices]\nwind_dir = wind_dir[good_indices]",
"Calculate u and v components of wind and then interpolate both.\nBoth will have the same underlying grid so throw away grid returned from v interpolation.",
"u, v = wind_components(wind_speed, wind_dir)\n\nwindgridx, windgridy, uwind = interpolate_to_grid(x_masked, y_masked, np.array(u),\n interp_type='cressman', search_radius=400000,\n hres=100000)\n\n_, _, vwind = interpolate_to_grid(x_masked, y_masked, np.array(v), interp_type='cressman',\n search_radius=400000, hres=100000)",
"Get temperature information",
"x_masked, y_masked, t = remove_nan_observations(xp, yp, data['temperature'].values)\ntempx, tempy, temp = interpolate_to_grid(x_masked, y_masked, t, interp_type='cressman',\n minimum_neighbors=3, search_radius=400000, hres=35000)\n\ntemp = np.ma.masked_where(np.isnan(temp), temp)",
"Set up the map and plot the interpolated grids appropriately.",
"levels = list(range(-20, 20, 1))\ncmap = plt.get_cmap('viridis')\nnorm = BoundaryNorm(levels, ncolors=cmap.N, clip=True)\n\nfig = plt.figure(figsize=(20, 10))\nadd_metpy_logo(fig, 360, 120, size='large')\nview = fig.add_subplot(1, 1, 1, projection=to_proj)\n\nview.set_extent([-120, -70, 20, 50])\nview.add_feature(cfeature.STATES.with_scale('50m'))\nview.add_feature(cfeature.OCEAN)\nview.add_feature(cfeature.COASTLINE.with_scale('50m'))\nview.add_feature(cfeature.BORDERS, linestyle=':')\n\ncs = view.contour(slpgridx, slpgridy, slp, colors='k', levels=list(range(990, 1034, 4)))\nview.clabel(cs, inline=1, fontsize=12, fmt='%i')\n\nmmb = view.pcolormesh(tempx, tempy, temp, cmap=cmap, norm=norm)\nfig.colorbar(mmb, shrink=.4, pad=0.02, boundaries=levels)\n\nview.barbs(windgridx, windgridy, uwind, vwind, alpha=.4, length=5)\n\nview.set_title('Surface Temperature (shaded), SLP, and Wind.')\n\nplt.show()"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
WomensCodingCircle/CodingCirclePython
|
Lesson11_JSONandAPIs/JSONandAPIs.ipynb
|
mit
|
[
"JSON and APIs\nJSON\nWhat is JSON? From JSON.org:\nJSON (JavaScript Object Notation) is a lightweight data-interchange format. It is easy for humans to read and write. It is easy for machines to parse and generate. It is based on a subset of the JavaScript Programming Language, Standard ECMA-262 3rd Edition - December 1999. JSON is a text format that is completely language independent but uses conventions that are familiar to programmers of the C-family of languages, including C, C++, C#, Java, JavaScript, Perl, Python, and many others. These properties make JSON an ideal data-interchange language.\n\nBut that isn't exactly helpful is it? JSON is a string format that allows you to store dictionaries, lists, strings, and numbers in a way that you can pass it from one source to another. You can take a Python dictionary and pass it to a perl program by printing it in JSON format and loading it or you can pull data from the web and create a python dictionary or list from that. Even if you don't understand now, after you use it, JSON will become more clear.\nYou have in this folder a json file called shapes.json. Take a look at it and then we can talk about JSON format.\nJSON Format\nJSON is a subset of python. You have a top level object that is either a list or a dictionary. You can have values and keys in the top level object be any of the following: strings, floats, ints, lists, boolean, null, or dictonary. To see how to represent these refer to the documentation www.json.org\nJSON in Python\nTo use json in python we use the module json. It is part of the standard library so you don't need to install anything.\nimport json",
"import json",
"Loading data\nYou can load data from json format into python from either a string using the loads method or a file handle using the load method. \nmy_list = json.loads('[1, 2, 3]')\nwith open(my_file.json) as fh:\n my_dict = json.load(fh)",
"# Load from file\nwith open('shapes.json') as fh:\n shapes = json.load(fh)\nprint(shapes)\n\n# Load from string\ncomplex_shapes_string = '[\"pentagon\", \"spiral\", \"double helix\"]'\ncomplex_shapes = json.loads(complex_shapes_string)\nprint(complex_shapes)",
"TRY IT\nCreate a string called three_d_json which cotains the string '[\"cube\", \"sphere\"]' and then load that data into a python list using json.load.\nUsing JSON data\nOnce you load data from python format, you can now use the data like you would any other python dictionary or list.",
"for shape in shapes:\n title_shape = shape.title()\n area_formula = shapes[shape]['area']\n print(\"{}'s area can be calculated using {}\".format(title_shape, area_formula))",
"TRY IT\nfor each shape in complex_shapes print \"shape is hard to find the area of\".\nDumping JSON Data\nIf you want to store data from your python program into JSON format, it is as simple as loading it. To dump to a string use json.dumps and to dump to a file use json.dump. Make sure that you are using only valid json values in your list or dictionary.\njson_string = json.dumps(my_list)\nwith open('json_file.json', 'w') as fh:\n json.dump(my_dict, fh)",
"# Dumping to string\nfavorite_shapes = ['hexagon', 'heart']\nfav_shapes_json = json.dumps(favorite_shapes)\nprint(fav_shapes_json)\n\n# Dumping to a file\nwith open('fav_shapes.json', 'w') as fh:\n json.dump(favorite_shapes, fh)",
"TRY IT\ncreate a list of 4 sided shapes and store in a variable called quads, dump quad to json and store the result in a variable called quads_json.\nWeb APIs\nWeb APIs are a way to retreive and send data to and from a url. The urls have a pattern so that you can retreive data programtically. With REST APIs specifically, you build a url putting data in the correct places to retreive the data you need. Many Web APIs (the best ones) return their data in JSON format. \nThere are many free api's available, most require that you sign up to recieve an API key. You will need to read the API docs for any specific api to figure out how to get the data you want.\nHere are some fun APIs to try out:\n* Dropbox: https://www.dropbox.com/developers\n* Google Maps: https://developers.google.com/maps/web/\n* Twitter: https://dev.twitter.com/docs\n* YouTube: https://developers.google.com/youtube/v3/getting-started\n* Soundcloud: http://developers.soundcloud.com/docs/api/guide#playing\n* Stripe: https://stripe.com/docs/tutorials/checkout\n* Instagram: http://instagram.com/developer/\n* Twilio: https://www.twilio.com/docs\n* Yelp: http://www.yelp.com/developers/getting_started\n* Facebook: https://developers.facebook.com/docs/facebook-login/login-flow-for-web\n* Etsy: https://www.etsy.com/developers/documentation\nWe are going to use the steam api because certain endpoints don't require an app id (and who has time to sign up for one when there is python to learn?)\nThe endpoint we will use is one that will get us metadata info about a specific game:\nhttp://store.steampowered.com/api/appdetails?appids=<id number>\n\nIf the game doesn't exist it returns json that looks like this:\n {\"1\":{\"success\":false}}\n\nIf the game does exist it returns json that looks like this:\n\"100\":{ \n\"success\":true,\n\"data\":{ \n \"type\":\"game\",\n \"name\":\"Counter-Strike: Condition Zero\",\n \"steam_appid\":80,\n \"required_age\":0,\n \"is_free\":false,\n \"detailed_description\":\"With its extensive Tour of Duty campaign, a near-limitless number of skirmish modes, updates and new content for Counter-Strike's award-winning multiplayer game play, plus over 12 bonus single player missions, Counter-Strike: Condition Zero is a tremendous offering of single and multiplayer content.\",\n \"about_the_game\":\"With its extensive Tour of Duty campaign, a near-limitless number of skirmish modes, updates and new content for Counter-Strike's award-winning multiplayer game play, plus over 12 bonus single player missions, Counter-Strike: Condition Zero is a tremendous offering of single and multiplayer content.\",\n \"supported_languages\":\"English, French, German, Italian, Spanish, Simplified Chinese, Traditional Chinese, Korean\",\n \"header_image\":\"http:\\/\\/cdn.akamai.steamstatic.com\\/steam\\/apps\\/80\\/header.jpg?t=1447889920\",\n \"website\":null,\n \"pc_requirements\":{ \n \"minimum\":\"\\r\\n\\t\\t\\t<p><strong>Minimum:<\\/strong> 500 mhz processor, 96mb ram, 16mb video card, Windows XP, Mouse, Keyboard, Internet Connection<br \\/><\\/p>\\r\\n\\t\\t\\t<p><strong>Recommended:<\\/strong> 800 mhz processor, 128mb ram, 32mb+ video card, Windows XP, Mouse, Keyboard, Internet Connection<br \\/><\\/p>\\r\\n\\t\\t\\t\"\n },\n \"mac_requirements\":[\n\n ],\n \"linux_requirements\":[\n\n ],\n \"developers\":[ \n \"Valve\"\n ],\n \"publishers\":[ \n \"Valve\"\n ],\n \"price_overview\":{ \n \"currency\":\"USD\",\n \"initial\":999,\n \"final\":999,\n \"discount_percent\":0\n },\n \"packages\":[ \n 7\n ],\n \"package_groups\":[ 
\n { \n \"name\":\"default\",\n \"title\":\"Buy Counter-Strike: Condition Zero\",\n \"description\":\"\",\n \"selection_text\":\"Select a purchase option\",\n \"save_text\":\"\",\n \"display_type\":0,\n \"is_recurring_subscription\":\"false\",\n \"subs\":[ \n { \n \"packageid\":7,\n \"percent_savings_text\":\"\",\n \"percent_savings\":0,\n \"option_text\":\"Counter-Strike: Condition Zero $9.99\",\n \"option_description\":\"\",\n \"can_get_free_license\":\"0\",\n \"is_free_license\":false,\n \"price_in_cents_with_discount\":999\n }\n ]\n }\n ],\n \"platforms\":{ \n \"windows\":true,\n \"mac\":true,\n \"linux\":true\n },\n \"metacritic\":{ \n \"score\":65,\n \"url\":\"http:\\/\\/www.metacritic.com\\/game\\/pc\\/counter-strike-condition-zero?ftag=MCD-06-10aaa1f\"\n },\n \"categories\":[ \n { \n \"id\":2,\n \"description\":\"Single-player\"\n },\n { \n \"id\":1,\n \"description\":\"Multi-player\"\n },\n { \n \"id\":8,\n \"description\":\"Valve Anti-Cheat enabled\"\n }\n ],\n \"genres\":[ \n { \n \"id\":\"1\",\n \"description\":\"Action\"\n }\n ],\n \"screenshots\":[ \n { \n \"id\":0,\n \"path_thumbnail\":\"http:\\/\\/cdn.akamai.steamstatic.com\\/steam\\/apps\\/80\\/0000002528.600x338.jpg?t=1447889920\",\n \"path_full\":\"http:\\/\\/cdn.akamai.steamstatic.com\\/steam\\/apps\\/80\\/0000002528.1920x1080.jpg?t=1447889920\"\n },\n { \n \"id\":1,\n \"path_thumbnail\":\"http:\\/\\/cdn.akamai.steamstatic.com\\/steam\\/apps\\/80\\/0000002529.600x338.jpg?t=1447889920\",\n \"path_full\":\"http:\\/\\/cdn.akamai.steamstatic.com\\/steam\\/apps\\/80\\/0000002529.1920x1080.jpg?t=1447889920\"\n },\n { \n \"id\":2,\n \"path_thumbnail\":\"http:\\/\\/cdn.akamai.steamstatic.com\\/steam\\/apps\\/80\\/0000002530.600x338.jpg?t=1447889920\",\n \"path_full\":\"http:\\/\\/cdn.akamai.steamstatic.com\\/steam\\/apps\\/80\\/0000002530.1920x1080.jpg?t=1447889920\"\n },\n { \n \"id\":3,\n \"path_thumbnail\":\"http:\\/\\/cdn.akamai.steamstatic.com\\/steam\\/apps\\/80\\/0000002531.600x338.jpg?t=1447889920\",\n \"path_full\":\"http:\\/\\/cdn.akamai.steamstatic.com\\/steam\\/apps\\/80\\/0000002531.1920x1080.jpg?t=1447889920\"\n },\n { \n \"id\":4,\n \"path_thumbnail\":\"http:\\/\\/cdn.akamai.steamstatic.com\\/steam\\/apps\\/80\\/0000002532.600x338.jpg?t=1447889920\",\n \"path_full\":\"http:\\/\\/cdn.akamai.steamstatic.com\\/steam\\/apps\\/80\\/0000002532.1920x1080.jpg?t=1447889920\"\n },\n { \n \"id\":5,\n \"path_thumbnail\":\"http:\\/\\/cdn.akamai.steamstatic.com\\/steam\\/apps\\/80\\/0000002533.600x338.jpg?t=1447889920\",\n \"path_full\":\"http:\\/\\/cdn.akamai.steamstatic.com\\/steam\\/apps\\/80\\/0000002533.1920x1080.jpg?t=1447889920\"\n },\n { \n \"id\":6,\n \"path_thumbnail\":\"http:\\/\\/cdn.akamai.steamstatic.com\\/steam\\/apps\\/80\\/0000002534.600x338.jpg?t=1447889920\",\n \"path_full\":\"http:\\/\\/cdn.akamai.steamstatic.com\\/steam\\/apps\\/80\\/0000002534.1920x1080.jpg?t=1447889920\"\n },\n { \n \"id\":7,\n \"path_thumbnail\":\"http:\\/\\/cdn.akamai.steamstatic.com\\/steam\\/apps\\/80\\/0000002535.600x338.jpg?t=1447889920\",\n \"path_full\":\"http:\\/\\/cdn.akamai.steamstatic.com\\/steam\\/apps\\/80\\/0000002535.1920x1080.jpg?t=1447889920\"\n }\n ],\n \"recommendations\":{ \n \"total\":6647\n },\n \"release_date\":{ \n \"coming_soon\":false,\n \"date\":\"Mar 1, 2004\"\n },\n \"support_info\":{ \n \"url\":\"http:\\/\\/steamcommunity.com\\/app\\/80\",\n \"email\":\"\"\n },\n \"background\":\"http:\\/\\/cdn.akamai.steamstatic.com\\/steam\\/apps\\/80\\/page_bg_generated_v6b.jpg?t=1447889920\"\n}\n\n}\n}\nYou can actually 
use the url in a browser. Try that and see if you hit on any interesting games by entering an id number.\nAccessing API data with python\nThere are many options for getting data from a url with python: http.client, urllib.request, and the third-party requests library. This isn't limited to JSON data from a web api; you can get the raw html from any website. We are going to use urllib.request (the Python 3 successor of urllib2) because it is part of the standard library and it is easy to use.\nFirst, as with any library, we import it\nimport urllib.request\n\nThen you open a url using the method urlopen\nconnection = urllib.request.urlopen('url')\n\nThen you can read the data\ndata = connection.read()",
"import urllib.request, urllib.error, urllib.parse\n\ngame_id = str(251990)\nconnection = urllib.request.urlopen('http://store.steampowered.com/api/appdetails?appids=' + game_id)\ndata = connection.read()\ntype(data)",
"Now the result is a string, but it is valid json and we know how to turn a json string into a python dictionary: json.loads()",
"game_data = json.loads(data)\nprint(type(game_data))",
"Finally you can use this data just like you would any python dictionary.",
"print(game_data[game_id]['data']['name'])\nprint(game_data[game_id]['data']['about_the_game'])\nprint(game_data[game_id]['data']['price_overview']['final'])\n",
"TRY IT\nRetreive the game data for the game with the id of 212680, parse the json and print out the game's name.\nProject\n\n\nYou will need to sign up for a open weather api key here (FREE tier) http://openweathermap.org/price WARNING The free tier only let you make 60 requests an hour, so be conservitave when testing your code.\n\n\nAfter you sign up for an api key and then go look at the documentation for current weather: http://openweathermap.org/current\n\n\nPick a list of three cities [London, Paris, New York] for example and use a for loop to get the current weather for each one. Use urllib2 to fetch the data from the url and the json library to decode the data into a python dictionary. \n\n\nPrint the weather under the key 'weather' and then under 'description'. HINT Look at the result of the json in your web browser before you try to use it in your code.\nhttp://api.openweathermap.org/data/2.5/weather?q={city}&appid={appid}"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
srcole/qwm
|
misc/Sawtooth instantaneous frequency.ipynb
|
mit
|
[
"The instantaneous phase, amplitude, and frequency of a sawtooth wave\ncan changing the smoothness of a signal change the instantaneous freq dynamics? (i.e. more sharp transitions)",
"%config InlineBackend.figure_format = 'retina'\n%matplotlib inline\n\nimport numpy as np\nimport scipy as sp\nimport matplotlib.pyplot as plt\n\nimport seaborn as sns\nsns.set_style('white')",
"1. Simulate a 20Hz sawtooth wave",
"# Define sawtooth shape in some number of samples\nx1 = np.array([0,.05,.2,1,.9,.8,.7,.6,.5,.4,.3,.2,.1,.05,.01])\nt1 = np.arange(len(x1))\n\n# Interpolate sawtooth so it has 50 samples (50ms = 20Hz wave)\nfrom scipy import interpolate\nf = interpolate.interp1d(t1, x1)\nt2 = np.linspace(0,len(t1)-1,50)\nx2 = f(t2)\n\n# Tile the new sawtooth to last 5 seconds\nx = np.tile(x2,100)\nx = x - np.mean(x)\nFs = 1000.\nt = np.arange(0,5,.001)\n\n# Plot sawtooth\nplt.figure(figsize=(5,2))\nplt.plot(t, x)\nplt.ylim((-.7,.7))\nplt.xlim((0,.5))\nplt.xlabel('Time (s)')\nplt.ylabel('Voltage (a.u.)')",
"2. Filter in 13-30Hz band\nFilter is very short (like steph said in email)",
"from misshapen import nonshape\nx_filt, _ = nonshape.bandpass_default(x, (13,30),Fs, w=1.5,rmv_edge=False)",
"3. Calculate instantaneous measures",
"x_amp = np.abs(sp.signal.hilbert(x_filt))\nx_phase = np.angle(sp.signal.hilbert(x_filt))\n\n# Instantaneous freq\nx_freq = np.diff(x_phase)\nx_freq[x_freq<0] = x_freq[x_freq<0]+2*np.pi\nx_freq = x_freq*Fs/(2*np.pi)",
"4. Visualize",
"samp_plt = range(1000,1200)\n\nplt.figure(figsize=(5,5))\nplt.subplot(3,1,1)\nplt.plot(t[samp_plt],x[samp_plt],'k')\nplt.plot(t[samp_plt],x_filt[samp_plt],'r')\nplt.plot(t[samp_plt],x_amp[samp_plt],'b')\nplt.xlim((t[samp_plt][0],t[samp_plt][-1]))\nplt.xticks([])\nplt.ylabel('raw (black)\\nfiltered (red)\\nInst. Amp. (blue)')\nplt.subplot(3,1,2)\nplt.plot(t[samp_plt],x_phase[samp_plt],'k')\nplt.xlim((t[samp_plt][0],t[samp_plt][-1]))\nplt.xticks([])\nplt.ylabel('Inst. Phase (rad)')\nplt.subplot(3,1,3)\nplt.plot(t[samp_plt],x_freq[samp_plt],'k')\nplt.xlim((t[samp_plt][0],t[samp_plt][-1]))\nplt.xlabel('Time (s)')\nplt.ylabel('Inst. Freq. (Hz)')",
"5. Example with real data",
"x2 = np.load('C:/gh/bv/misshapen/exampledata.npy')\n\nfrom misshapen import nonshape\nx2_filt, _ = nonshape.bandpass_default(x2, (13,30),Fs, w=3,rmv_edge=False)\n\nx_amp = np.abs(sp.signal.hilbert(x2_filt))\nx_phase = np.angle(sp.signal.hilbert(x2_filt))\n\n# Instantaneous freq\nx_freq = np.diff(x_phase)\nx_freq[x_freq<0] = x_freq[x_freq<0]+2*np.pi\nx_freq = x_freq*Fs/(2*np.pi)\n\nsamp_plt = range(3000,5000)\n\nplt.figure(figsize=(10,6))\nplt.subplot(4,1,1)\nplt.plot(t[samp_plt],x2[samp_plt],'k')\nplt.plot(t[samp_plt],x2_filt[samp_plt],'r')\nplt.plot(t[samp_plt],x_amp[samp_plt],'b')\nplt.xlim((t[samp_plt][0],t[samp_plt][-1]))\nplt.xticks([])\nplt.ylabel('raw (black)\\nfiltered (red)\\nInst. Amp. (blue)')\nplt.subplot(4,1,2)\nplt.plot(t[samp_plt],x_phase[samp_plt],'k')\nplt.xlim((t[samp_plt][0],t[samp_plt][-1]))\nplt.xticks([])\nplt.ylabel('Inst. Phase (rad)')\nplt.subplot(4,1,3)\nplt.plot(t[samp_plt],x_freq[samp_plt],'k')\nplt.xlim((t[samp_plt][0],t[samp_plt][-1]))\nplt.xticks([])\nplt.ylim((0,30))\nplt.ylabel('Inst. Freq. (Hz)')\nplt.subplot(4,1,4)\nplt.plot(t[samp_plt],x_amp[samp_plt],'k')\nplt.ylabel('Inst. Amp.')\nplt.xlabel('Time (s)')\nax = plt.gca()\nax2 = ax.twinx()\nax2.plot(t[samp_plt],x_freq[samp_plt],'r')\nplt.xlim((t[samp_plt][0],t[samp_plt][-1]))\nplt.ylim((0,30))\nplt.ylabel('Inst. Freq. (Hz)',color='r')",
"6. Plot relationship between inst. amp. and inst. freq.",
"plt.figure(figsize=(6,6))\nplt.plot(x_freq, x_amp[1:],'k.',alpha=.01)\nplt.xlim((0,100))\nplt.xlabel('Inst. Freq. (Hz)')\nplt.ylabel('Inst. Amp. (uV)')"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
tpin3694/tpin3694.github.io
|
machine-learning/titanic_competition_with_random_forest.ipynb
|
mit
|
[
"Title: Titanic Competition With Random Forest\nSlug: titanic_competition_with_random_forest\nSummary: Python code to make a submission to the titanic competition using a random forest. \nDate: 2016-12-29 00:01\nCategory: Machine Learning\nTags: Trees And Forests\nAuthors: Chris Albon\nThis was my first attempt at a Kaggle submission and conducted mostly to understand the Kaggle competition process.\nPreliminaries",
"import pandas as pd\nimport numpy as np\nfrom sklearn import preprocessing\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.model_selection import GridSearchCV, cross_val_score\nimport csv as csv",
"Get The Data\nYou can get the data on Kaggle's site.",
"# Load the data\ntrain = pd.read_csv('data/train.csv')\ntest = pd.read_csv('data/test.csv')",
"Data Cleaning",
"# Create a list of the features we will eventually want for our model\nfeatures = ['Age', 'SibSp','Parch','Fare','male','embarked_Q','embarked_S','Pclass_2', 'Pclass_3']",
"Sex\nHere we convert the gender labels (male, female) into a dummy variable (1, 0).",
"# Create an encoder\nsex_encoder = preprocessing.LabelEncoder()\n\n# Fit the encoder to the train data so it knows that male = 1\nsex_encoder.fit(train['Sex'])\n\n# Apply the encoder to the training data\ntrain['male'] = sex_encoder.transform(train['Sex'])\n\n# Apply the encoder to the training data\ntest['male'] = sex_encoder.transform(test['Sex'])",
"Embarked",
"# Convert the Embarked training feature into dummies using one-hot\n# and leave one first category to prevent perfect collinearity\ntrain_embarked_dummied = pd.get_dummies(train[\"Embarked\"], prefix='embarked', drop_first=True)\n\n# Convert the Embarked test feature into dummies using one-hot\n# and leave one first category to prevent perfect collinearity\ntest_embarked_dummied = pd.get_dummies(test[\"Embarked\"], prefix='embarked', drop_first=True)\n\n# Concatenate the dataframe of dummies with the main dataframes\ntrain = pd.concat([train, train_embarked_dummied], axis=1)\ntest = pd.concat([test, test_embarked_dummied], axis=1)",
"Social Class",
"# Convert the Pclass training feature into dummies using one-hot\n# and leave one first category to prevent perfect collinearity\ntrain_Pclass_dummied = pd.get_dummies(train[\"Pclass\"], prefix='Pclass', drop_first=True)\n\n# Convert the Pclass test feature into dummies using one-hot\n# and leave one first category to prevent perfect collinearity\ntest_Pclass_dummied = pd.get_dummies(test[\"Pclass\"], prefix='Pclass', drop_first=True)\n\n# Concatenate the dataframe of dummies with the main dataframes\ntrain = pd.concat([train, train_Pclass_dummied], axis=1)\ntest = pd.concat([test, test_Pclass_dummied], axis=1)",
"Impute Missing Values\nA number of values of the Age feature are missing and will prevent the random forest to train. We get around this we will fill in missing values with the mean value of age (a useful fiction).\nAge",
"# Create an imputer object\nage_imputer = preprocessing.Imputer(missing_values='NaN', strategy='mean', axis=0)\n\n# Fit the imputer object on the training data\nage_imputer.fit(train['Age'].reshape(-1, 1))\n\n# Apply the imputer object to the training and test data\ntrain['Age'] = age_imputer.transform(train['Age'].reshape(-1, 1))\ntest['Age'] = age_imputer.transform(test['Age'].reshape(-1, 1))",
"Fare",
"# Create an imputer object\nfare_imputer = preprocessing.Imputer(missing_values='NaN', strategy='mean', axis=0)\n\n# Fit the imputer object on the training data\nfare_imputer.fit(train['Fare'].reshape(-1, 1))\n\n# Apply the imputer object to the training and test data\ntrain['Fare'] = fare_imputer.transform(train['Fare'].reshape(-1, 1))\ntest['Fare'] = fare_imputer.transform(test['Fare'].reshape(-1, 1))",
"Search For Optimum Parameters",
"# Create a dictionary containing all the candidate values of the parameters\nparameter_grid = dict(n_estimators=list(range(1, 5001, 1000)),\n criterion=['gini','entropy'],\n max_features=list(range(1, len(features), 2)),\n max_depth= [None] + list(range(5, 25, 1)))\n\n# Creata a random forest object\nrandom_forest = RandomForestClassifier(random_state=0, n_jobs=-1)\n\n# Create a gridsearch object with 5-fold cross validation, and uses all cores (n_jobs=-1)\nclf = GridSearchCV(estimator=random_forest, param_grid=parameter_grid, cv=5, verbose=1, n_jobs=-1)\n\n# Nest the gridsearchCV in a 3-fold CV for model evaluation\ncv_scores = cross_val_score(clf, train[features], train['Survived'])\n\n# Print results\nprint('Accuracy scores:', cv_scores)\nprint('Mean of score:', np.mean(cv_scores))\nprint('Variance of scores:', np.var(cv_scores))",
"Retrain The Random Forest With The Optimum Parameters",
"# Retrain the model on the whole dataset\nclf.fit(train[features], train['Survived'])\n\n# Predict who survived in the test dataset\npredictions = clf.predict(test[features])",
"Create The Kaggle Submission",
"# Grab the passenger IDs\nids = test['PassengerId'].values\n\n# Create a csv\nsubmission_file = open(\"submission.csv\", \"w\")\n\n# Write to that csv\nopen_file_object = csv.writer(submission_file)\n\n# Write the header of the csv\nopen_file_object.writerow([\"PassengerId\",\"Survived\"])\n\n# Write the rows of the csv\nopen_file_object.writerows(zip(ids, predictions))\n\n# Close the file\nsubmission_file.close()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
gojomo/gensim
|
docs/notebooks/WMD_tutorial.ipynb
|
lgpl-2.1
|
[
"Finding similar documents with Word2Vec and WMD\nWord Mover's Distance is a promising new tool in machine learning that allows us to submit a query and return the most relevant documents. For example, in a blog post OpenTable use WMD on restaurant reviews. Using this approach, they are able to mine different aspects of the reviews. In part 2 of this tutorial, we show how you can use Gensim's WmdSimilarity to do something similar to what OpenTable did. In part 1 shows how you can compute the WMD distance between two documents using wmdistance. Part 1 is optional if you want use WmdSimilarity, but is also useful in it's own merit.\nFirst, however, we go through the basics of what WMD is.\nWord Mover's Distance basics\nWMD is a method that allows us to assess the \"distance\" between two documents in a meaningful way, even when they have no words in common. It uses word2vec [4] vector embeddings of words. It been shown to outperform many of the state-of-the-art methods in k-nearest neighbors classification [3].\nWMD is illustrated below for two very similar sentences (illustration taken from Vlad Niculae's blog). The sentences have no words in common, but by matching the relevant words, WMD is able to accurately measure the (dis)similarity between the two sentences. The method also uses the bag-of-words representation of the documents (simply put, the word's frequencies in the documents), noted as $d$ in the figure below. The intuition behind the method is that we find the minimum \"traveling distance\" between documents, in other words the most efficient way to \"move\" the distribution of document 1 to the distribution of document 2.\n<img src='https://vene.ro/images/wmd-obama.png' height='600' width='600'>\nThis method was introduced in the article \"From Word Embeddings To Document Distances\" by Matt Kusner et al. (link to PDF). It is inspired by the \"Earth Mover's Distance\", and employs a solver of the \"transportation problem\".\nIn this tutorial, we will learn how to use Gensim's WMD functionality, which consists of the wmdistance method for distance computation, and the WmdSimilarity class for corpus based similarity queries.\n\nNote:\nIf you use this software, please consider citing [1], [2] and [3].\n\n\nRunning this notebook\nYou can download this iPython Notebook, and run it on your own computer, provided you have installed Gensim, PyEMD, NLTK, and downloaded the necessary data.\nThe notebook was run on an Ubuntu machine with an Intel core i7-4770 CPU 3.40GHz (8 cores) and 32 GB memory. Running the entire notebook on this machine takes about 3 minutes.\nPart 1: Computing the Word Mover's Distance\nTo use WMD, we need some word embeddings first of all. You could train a word2vec (see tutorial here) model on some corpus, but we will start by downloading some pre-trained word2vec embeddings. Download the GoogleNews-vectors-negative300.bin.gz embeddings here (warning: 1.5 GB, file is not needed for part 2). Training your own embeddings can be beneficial, but to simplify this tutorial, we will be using pre-trained embeddings at first.\nLet's take some sentences to compute the distance between.",
"from time import time\nstart_nb = time()\n\n# Initialize logging.\nimport logging\nlogging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s')\n\nsentence_obama = 'Obama speaks to the media in Illinois'\nsentence_president = 'The president greets the press in Chicago'\nsentence_obama = sentence_obama.lower().split()\nsentence_president = sentence_president.lower().split()",
"These sentences have very similar content, and as such the WMD should be low. Before we compute the WMD, we want to remove stopwords (\"the\", \"to\", etc.), as these do not contribute a lot to the information in the sentences.",
"# Import and download stopwords from NLTK.\nfrom nltk.corpus import stopwords\nfrom nltk import download\ndownload('stopwords') # Download stopwords list.\n\n# Remove stopwords.\nstop_words = stopwords.words('english')\nsentence_obama = [w for w in sentence_obama if w not in stop_words]\nsentence_president = [w for w in sentence_president if w not in stop_words]",
"Now, as mentioned earlier, we will be using some downloaded pre-trained embeddings. We load these into a Gensim Word2Vec model class. Note that the embeddings we have chosen here require a lot of memory.",
"import gensim.downloader as api\napi.load('word2vec-google-news-300')\n\nstart = time()\nimport os\n\n# from gensim.models import KeyedVectors\n# if not os.path.exists('/data/w2v_googlenews/GoogleNews-vectors-negative300.bin.gz'):\n# raise ValueError(\"SKIP: You need to download the google news model\")\n# \n# model = KeyedVectors.load_word2vec_format('/data/w2v_googlenews/GoogleNews-vectors-negative300.bin.gz', binary=True)\nmodel = api.load('word2vec-google-news-300')\n\nprint('Cell took %.2f seconds to run.' % (time() - start))",
"So let's compute WMD using the wmdistance method.",
"distance = model.wmdistance(sentence_obama, sentence_president)\nprint('distance = %.4f' % distance)",
"Let's try the same thing with two completely unrelated sentences. Notice that the distance is larger.",
"sentence_orange = 'Oranges are my favorite fruit'\nsentence_orange = sentence_orange.lower().split()\nsentence_orange = [w for w in sentence_orange if w not in stop_words]\n\ndistance = model.wmdistance(sentence_obama, sentence_orange)\nprint('distance = %.4f' % distance)",
"Normalizing word2vec vectors\nWhen using the wmdistance method, it is beneficial to normalize the word2vec vectors first, so they all have equal length. To do this, simply call model.init_sims(replace=True) and Gensim will take care of that for you.\nUsually, one measures the distance between two word2vec vectors using the cosine distance (see cosine similarity), which measures the angle between vectors. WMD, on the other hand, uses the Euclidean distance. The Euclidean distance between two vectors might be large because their lengths differ, but the cosine distance is small because the angle between them is small; we can mitigate some of this by normalizing the vectors.\nNote that normalizing the vectors can take some time, especially if you have a large vocabulary and/or large vectors.\nUsage is illustrated in the example below. It just so happens that the vectors we have downloaded are already normalized, so it won't do any difference in this case.",
"# Normalizing word2vec vectors.\nstart = time()\n\nmodel.init_sims(replace=True) # Normalizes the vectors in the word2vec class.\n\ndistance = model.wmdistance(sentence_obama, sentence_president) # Compute WMD as normal.\nprint('distance: %r', distance)\nprint('Cell took %.2f seconds to run.' %(time() - start))",
"Part 2: Similarity queries using WmdSimilarity\nYou can use WMD to get the most similar documents to a query, using the WmdSimilarity class. Its interface is similar to what is described in the Similarity Queries Gensim tutorial.\n\nImportant note:\nWMD is a measure of distance. The similarities in WmdSimilarity are simply the negative distance. Be careful not to confuse distances and similarities. Two similar documents will have a high similarity score and a small distance; two very different documents will have low similarity score, and a large distance.\n\nYelp data\nLet's try similarity queries using some real world data. For that we'll be using Yelp reviews, available at http://www.yelp.com/dataset_challenge. Specifically, we will be using reviews of a single restaurant, namely the Mon Ami Gabi.\nTo get the Yelp data, you need to register by name and email address. The data is 775 MB.\nThis time around, we are going to train the Word2Vec embeddings on the data ourselves. One restaurant is not enough to train Word2Vec properly, so we use 6 restaurants for that, but only run queries against one of them. In addition to the Mon Ami Gabi, mentioned above, we will be using:\n\nEarl of Sandwich.\nWicked Spoon.\nSerendipity 3.\nBacchanal Buffet.\nThe Buffet.\n\nThe restaurants we chose were those with the highest number of reviews in the Yelp dataset. Incidentally, they all are on the Las Vegas Boulevard. The corpus we trained Word2Vec on has 18957 documents (reviews), and the corpus we used for WmdSimilarity has 4137 documents.\nBelow a JSON file with Yelp reviews is read line by line, the text is extracted, tokenized, and stopwords and punctuation are removed.",
"# Pre-processing a document.\n\nfrom nltk import word_tokenize\ndownload('punkt') # Download data for tokenizer.\n\ndef preprocess(doc):\n doc = doc.lower() # Lower the text.\n doc = word_tokenize(doc) # Split into words.\n doc = [w for w in doc if not w in stop_words] # Remove stopwords.\n doc = [w for w in doc if w.isalpha()] # Remove numbers and punctuation.\n return doc\n\nstart = time()\n\nimport json\nfrom smart_open import smart_open\n\n# Business IDs of the restaurants.\nids = ['4bEjOyTaDG24SY5TxsaUNQ', '2e2e7WgqU1BnpxmQL5jbfw', 'zt1TpTuJ6y9n551sw9TaEg',\n 'Xhg93cMdemu5pAMkDoEdtQ', 'sIyHTizqAiGu12XMLX3N3g', 'YNQgak-ZLtYJQxlDwN-qIg']\n\nw2v_corpus = [] # Documents to train word2vec on (all 6 restaurants).\nwmd_corpus = [] # Documents to run queries against (only one restaurant).\ndocuments = [] # wmd_corpus, with no pre-processing (so we can see the original documents).\nwith smart_open('/data/yelp_academic_dataset_review.json', 'rb') as data_file:\n for line in data_file:\n json_line = json.loads(line)\n \n if json_line['business_id'] not in ids:\n # Not one of the 6 restaurants.\n continue\n \n # Pre-process document.\n text = json_line['text'] # Extract text from JSON object.\n text = preprocess(text)\n \n # Add to corpus for training Word2Vec.\n w2v_corpus.append(text)\n \n if json_line['business_id'] == ids[0]:\n # Add to corpus for similarity queries.\n wmd_corpus.append(text)\n documents.append(json_line['text'])\n\nprint 'Cell took %.2f seconds to run.' %(time() - start)",
"Below is a plot with a histogram of document lengths and includes the average document length as well. Note that these are the pre-processed documents, meaning stopwords are removed, punctuation is removed, etc. Document lengths have a high impact on the running time of WMD, so when comparing running times with this experiment, the number of documents in query corpus (about 4000) and the length of the documents (about 62 words on average) should be taken into account.",
"from matplotlib import pyplot as plt\n%matplotlib inline\n\n# Document lengths.\nlens = [len(doc) for doc in wmd_corpus]\n\n# Plot.\nplt.rc('figure', figsize=(8,6))\nplt.rc('font', size=14)\nplt.rc('lines', linewidth=2)\nplt.rc('axes', color_cycle=('#377eb8','#e41a1c','#4daf4a',\n '#984ea3','#ff7f00','#ffff33'))\n# Histogram.\nplt.hist(lens, bins=20)\nplt.hold(True)\n# Average length.\navg_len = sum(lens) / float(len(lens))\nplt.axvline(avg_len, color='#e41a1c')\nplt.hold(False)\nplt.title('Histogram of document lengths.')\nplt.xlabel('Length')\nplt.text(100, 800, 'mean = %.2f' % avg_len)\nplt.show()",
"Now we want to initialize the similarity class with a corpus and a word2vec model (which provides the embeddings and the wmdistance method itself).",
"# Train Word2Vec on all the restaurants.\nmodel = Word2Vec(w2v_corpus, workers=3, size=100)\n\n# Initialize WmdSimilarity.\nfrom gensim.similarities import WmdSimilarity\nnum_best = 10\ninstance = WmdSimilarity(wmd_corpus, model, num_best=10)",
"The num_best parameter decides how many results the queries return. Now let's try making a query. The output is a list of indeces and similarities of documents in the corpus, sorted by similarity.\nNote that the output format is slightly different when num_best is None (i.e. not assigned). In this case, you get an array of similarities, corresponding to each of the documents in the corpus.\nThe query below is taken directly from one of the reviews in the corpus. Let's see if there are other reviews that are similar to this one.",
"start = time()\n\nsent = 'Very good, you should seat outdoor.'\nquery = preprocess(sent)\n\nsims = instance[query] # A query is simply a \"look-up\" in the similarity class.\n\nprint 'Cell took %.2f seconds to run.' %(time() - start)",
"The query and the most similar documents, together with the similarities, are printed below. We see that the retrieved documents are discussing the same thing as the query, although using different words. The query talks about getting a seat \"outdoor\", while the results talk about sitting \"outside\", and one of them says the restaurant has a \"nice view\".",
"# Print the query and the retrieved documents, together with their similarities.\nprint 'Query:'\nprint sent\nfor i in range(num_best):\n print\n print 'sim = %.4f' % sims[i][1]\n print documents[sims[i][0]]",
"Let's try a different query, also taken directly from one of the reviews in the corpus.",
"start = time()\n\nsent = 'I felt that the prices were extremely reasonable for the Strip'\nquery = preprocess(sent)\n\nsims = instance[query] # A query is simply a \"look-up\" in the similarity class.\n\nprint 'Query:'\nprint sent\nfor i in range(num_best):\n print\n print 'sim = %.4f' % sims[i][1]\n print documents[sims[i][0]]\n\nprint '\\nCell took %.2f seconds to run.' %(time() - start)",
"This time around, the results are more straight forward; the retrieved documents basically contain the same words as the query.\nWmdSimilarity normalizes the word embeddings by default (using init_sims(), as explained before), but you can overwrite this behaviour by calling WmdSimilarity with normalize_w2v_and_replace=False.",
"print 'Notebook took %.2f seconds to run.' %(time() - start_nb)",
"References\n\nOfir Pele and Michael Werman, A linear time histogram metric for improved SIFT matching, 2008.\nOfir Pele and Michael Werman, Fast and robust earth mover's distances, 2009.\nMatt Kusner et al. From Embeddings To Document Distances, 2015.\nThomas Mikolov et al. Efficient Estimation of Word Representations in Vector Space, 2013."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ellisonbg/leafletwidget
|
examples/Numpy.ipynb
|
mit
|
[
"From NumPy to Leaflet\nThis notebook shows how to display some raster geographic data in IPyLeaflet. The data is a NumPy array, which means that you have all the power of the Python scientific stack at your disposal to process it.\nThe following libraries are needed:\n* requests\n* tqdm\n* rasterio\n* numpy\n* scipy\n* pillow\n* matplotlib\n* ipyleaflet\nThe recommended way is to try to conda install them first, and if they are not found then pip install.",
"import requests\nimport os\nfrom tqdm import tqdm\nimport zipfile\nimport rasterio\nfrom affine import Affine\nimport numpy as np\nimport scipy.ndimage\nfrom rasterio.warp import reproject, Resampling\nimport PIL\nimport matplotlib.pyplot as plt\nfrom base64 import b64encode\ntry:\n from StringIO import StringIO\n py3 = False\nexcept ImportError:\n from io import StringIO, BytesIO\n py3 = True\nfrom ipyleaflet import Map, ImageOverlay, basemap_to_tiles, basemaps",
"Download a raster file representing the flow accumulation for South America. This gives an idea of the river network.",
"url = 'https://edcintl.cr.usgs.gov/downloads/sciweb1/shared/hydrosheds/sa_30s_zip_grid/sa_acc_30s_grid.zip'\nfilename = os.path.basename(url)\nname = filename[:filename.find('_grid')]\nadffile = name + '/' + name + '/w001001.adf'\n\nif not os.path.exists(adffile):\n r = requests.get(url, stream=True)\n with open(filename, 'wb') as f:\n total_length = int(r.headers.get('content-length'))\n for chunk in tqdm(r.iter_content(chunk_size=1024), total=(total_length/1024) + 1):\n if chunk:\n f.write(chunk)\n f.flush()\n zip = zipfile.ZipFile(filename)\n zip.extractall('.')",
"We transform the data a bit so that rivers appear thicker.",
"dataset = rasterio.open(adffile)\nacc_orig = dataset.read()[0]\nacc = np.where(acc_orig<0, 0, acc_orig)\n\nshrink = 1 # if you are out of RAM try increasing this number (should be a power of 2)\nradius = 5 # you can play with this number to change the width of the rivers\ncircle = np.zeros((2*radius+1, 2*radius+1)).astype('uint8')\ny, x = np.ogrid[-radius:radius+1,-radius:radius+1]\nindex = x**2 + y**2 <= radius**2\ncircle[index] = 1\nacc = np.sqrt(acc)\nacc = scipy.ndimage.maximum_filter(acc, footprint=circle)\nacc[acc_orig<0] = np.nan\nacc = acc[::shrink, ::shrink]",
"The original data is in the WGS 84 projection, but Leaflet uses Web Mercator, so we need to reproject.",
"# At this point if GDAL complains about not being able to open EPSG support file gcs.csv, try in the terminal:\n# export GDAL_DATA=`gdal-config --datadir`\n\nwith rasterio.Env():\n rows, cols = acc.shape\n src_transform = list(dataset.transform)\n src_transform[0] *= shrink\n src_transform[4] *= shrink\n src_transform = Affine(*src_transform[:6])\n src_crs = {'init': 'EPSG:4326'}\n source = acc\n\n dst_crs = {'init': 'EPSG:3857'}\n dst_transform, width, height = rasterio.warp.calculate_default_transform(src_crs, dst_crs, cols, rows, *dataset.bounds)\n dst_shape = height, width\n \n destination = np.zeros(dst_shape)\n\n reproject(\n source,\n destination,\n src_transform=src_transform,\n src_crs=src_crs,\n dst_transform=dst_transform,\n dst_crs=dst_crs,\n resampling=Resampling.nearest)\n\nacc_web = destination",
"Let's convert our NumPy array to an image. For that we must specify a colormap (here plt.cm.jet).",
"acc_norm = acc_web - np.nanmin(acc_web)\nacc_norm = acc_norm / np.nanmax(acc_norm)\nacc_norm = np.where(np.isfinite(acc_web), acc_norm, 0)\nacc_im = PIL.Image.fromarray(np.uint8(plt.cm.jet(acc_norm)*255))\nacc_mask = np.where(np.isfinite(acc_web), 255, 0)\nmask = PIL.Image.fromarray(np.uint8(acc_mask), mode='L')\nim = PIL.Image.new('RGBA', acc_norm.shape[::-1], color=None)\nim.paste(acc_im, mask=mask)",
"The image is embedded in the URL as a PNG file, so that it can be sent to the browser.",
"if py3:\n f = BytesIO()\nelse:\n f = StringIO()\nim.save(f, 'png')\ndata = b64encode(f.getvalue())\nif py3:\n data = data.decode('ascii')\nimgurl = 'data:image/png;base64,' + data",
"Finally we can overlay our image and if everything went fine it should be exactly over South America.",
"b = dataset.bounds\nbounds = [(b.bottom, b.left), (b.top, b.right)]\nio = ImageOverlay(url=imgurl, bounds=bounds)\n\ncenter = [-10, -60]\nzoom = 2\nm = Map(center=center, zoom=zoom, interpolation='nearest')\nm\n\ntile = basemap_to_tiles(basemaps.Esri.WorldStreetMap)\nm.add_layer(tile)",
"You can play with the opacity slider and check that rivers from our data file match the rivers on OpenStreetMap.",
"m.add_layer(io)\nio.interact(opacity=(0.0,1.0,0.01))"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
gurbanics/pytech-vis
|
notebooks/MainSlides.ipynb
|
mit
|
[
"Python – Packages for data visualization\nContents\nOverview on plot and data types\nPython packages for visualization\nMatplotlib - the classic\nPandas visualization\nSeaborn - for statisticians\nBokeh - the interactive\nWhy visualize?\n\n\"Use a picture. It's worth a thousand words.\" 1\n\n\n\nHuman brain is much better at processing information visually\n\nExample: Anscombe's quartet 2\n\n\n\nDomain-specific visualizations help a lot the domain experts\n\ne.g. 3D engineering models\n\n\n\nIn Data Analysis\n\nExploratory Data Analysis\nCommunicating results\n\n\n\nTakes ~3-6x more time to prepare a diagram than speech/text :(\n\n\nPlot types\n\nA non-exhaustive list of often used plots\n\n<table>\n <thead>\n <tr>\n <td>Plot type</td>\n <td>Number of variables displayed</td>\n <td>Type of data displayed</td>\n </tr>\n </thead>\n <tbody>\n <tr>\n <td>barchart</td>\n <td>1</td>\n <td>categorical</td>\n </tr>\n <tr>\n <td>histogram</td>\n <td>1</td>\n <td>continuous</td>\n </tr>\n <tr>\n <td>boxplot</td>\n <td>1</td>\n <td>continuous</td>\n </tr>\n <tr>\n <td>scatterplot</td>\n <td>2 (3 --> bubblechart)</td>\n <td>continuous</td>\n </tr>\n <tr>\n <td>heatmap</td>\n <td>2 (3)</td>\n <td>mixed/continuous</td>\n </tr>\n </tbody>\n</table>\n\n\nPeriodic table of visualization",
"# importing matplotlib as usual\nimport numpy as np\n%config InlineBackend.print_figure_kwargs = {'bbox_inches': None}\n%matplotlib inline\nimport matplotlib.pyplot as plt\nfrom IPython.display import set_matplotlib_formats\nset_matplotlib_formats('pdf', 'png', 'svg')\n\nsigma = 10\nmu = 5\nnormal_random = np.random.randn(1000) * sigma + mu;",
"Barchart\n\nIdeal for comparing groups within the data set",
"plt.bar([1, 2, 3, 4], [455, 404, 317, 730], tick_label=[\"XI\", \"IX\", \"VIII\",\"V\"], align='center');\nplt.xlabel(\"Districts\"); plt.ylabel(\"Avg price [10^3 HUF/ m^2]\");\nplt.title(\"Real estate prices in Budapest (2016)\");",
"Histogram\n\nDisplays the empirical distribution of a variable\nParameters\nbin width or bin count (bins are normally of equal length)\nnormalized or not",
"# provided matplotlib is imported and normal_random is a Gaussian distrib N(5, 10)\nplt.hist(normal_random, color=\"g\");",
"Boxplot\n\nA very compact representation of the samples\nAlso called box and whisker plot if whiskers are displayed\nUsually depicted\nlower (Q1) and upper (Q2) quartiles and median (25th, 75th and 50th percentile)\nextreme values outside the $(Q_1-1.5 \\cdot IQR, Q_3+1.5 \\cdot IQR)$ interval",
"plt.boxplot(normal_random, labels=[\"Normal random\"]);\n\nplt.subplot(1,2,1) # create a 1-row 2-column figure, activate the 1st subplot\nplt.boxplot(normal_random) # create a boxplot\nplt.subplot(1,2,2) # activate the 2nd subplot\nplt.hist(normal_random, orientation='horizontal', normed=True, color=\"g\"); # create rotated histogram\nplt.xticks(rotation=30);",
"Scatterplot\n\nPoints in 2 dimensions\nCoordinates are given by the (x,y) pairs",
"xvars = np.arange(1,10,0.5);\nplt.scatter(x=xvars, y=xvars**2, marker=\"x\");\nplt.scatter(x=xvars[1:6], y=xvars[1:6]**3, c=\"r\", marker='o', s=xvars[1:6]**3*10); # using scatterplot as bubble-chart",
"Heatmap\n\nthe plot is split up into equal tiles\neach tile (raster) corresponds to an (x,y) combination\ncolor of the tile is given by a 3rd attribute in the data set\n\n\nexample: git punch card as heatmap",
"from urllib.request import urlopen\nimport simplejson\nfrom pandas.core.frame import DataFrame\ncommits = DataFrame(simplejson.loads(urlopen(\"https://api.github.com/repos/pydata/pandas/stats/punch_card\").read()), \n columns=[\"weekday\",\"hour\",\"commits\"])\ncommits.head(3)\n\nimport seaborn as sns\nsns.heatmap(commits.pivot(\"weekday\",\"hour\",\"commits\"));",
"Matplotlib - the classic\n\nstarted in 2007, latest stable version 1.5.1\nrelies on numpy for data representation\nprovides an interface similar to Matlab\nstate-machine like behaviour\neffective for simple plotting needs\n\n\nan OO API is also exposed\nfull control of the graphical elements\n\n\nmultiple backends exist\nrendering the plots interactively\nsaving them to various file formats (jpg, png, svg, pdf)\n\n\n\nPandas - plot from DataFrame\n\nPandas is mainly a data analysis package\nIt adds support for visualization\nbuilds on top of Matplotlib\nhigher level API\nmost plot types are accessible form the DataFrame directly\n\n\nReasonable choice for simple plots when using DF\n\nSeaborn\n\nSeparate package with statistical visualizations in mind\nsupport a wide range of plot types (e.g. parallel, violin plot, heatmap)\nsupports facetting (previously Pandas did that)\n\n\n\nBokeh\n\nIt is based on JavaScript and canvas (client-side visualization)\nIt knows the concept of e.g. linked brushing\nBokeh also provides a server component where data can be dynamically filtered\n\nFurther references\nGeneral visualization\n\nThe visual display of quantitative information by E. Tufte\n\nMatplotlib\n\nGeneral usage (Ch 12)\nGallery with code snippets\nDetailed API doc (or simply help())\n\nPandas\n\nVisualization section of docu\n\n⬇︎\nSeaborn\n\nTutorial\nGallery\n\nBokeh\n\nUser's guide\nReference\n\nOther\n\nplot.ly Python API\nggplot for Python - the Python version of the famous R package"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
jdsanch1/SimRC
|
02. Parte 2/15. Clase 15/.ipynb_checkpoints/12Class NB-checkpoint.ipynb
|
mit
|
[
"Clase 11: Algunas mejoras a los códigos para simular y optimizar portafolios\nJuan Diego Sánchez Torres, \nProfesor, MAF ITESO\n\nDepartamento de Matemáticas y Física\ndsanchez@iteso.mx\nTel. 3669-34-34 Ext. 3069\nOficina: Cubículo 4, Edificio J, 2do piso\n\n1. Motivación\nEn primer lugar, para poder bajar precios y información sobre opciones de Yahoo, es necesario cargar algunos paquetes de Python. En este caso, el paquete principal será Pandas. También, se usarán el Scipy y el Numpy para las matemáticas necesarias y, el Matplotlib y el Seaborn para hacer gráficos de las series de datos. Finalmente, se usará el paquete cvxopt para optimización convexa, para instalar ingrese en terminal la instrucción: conda install -c anaconda cvxopt",
"#importar los paquetes que se van a usar\nimport pandas as pd\nimport numpy as np\nimport datetime\nfrom datetime import datetime\nimport scipy.stats as stats\nimport scipy as sp\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport sklearn.covariance as skcov\nimport cvxopt as opt\nfrom cvxopt import blas, solvers\nsolvers.options['show_progress'] = False\n%matplotlib inline\npd.set_option('display.notebook_repr_html', True)\npd.set_option('display.max_columns', 6)\npd.set_option('display.max_rows', 10)\npd.set_option('display.width', 78)\npd.set_option('precision', 3)\n#Funciones para portafolios\nimport portfolio_func\nfrom pyomo.environ import *\ninfinity = float('inf')\nimport statsmodels.api as sm\n\nassets = ['AAPL','MSFT','AA','AMZN','KO','QAI']\ncloses = portfolio_func.get_historical_closes(assets, '2016-01-01', '2017-09-22')\n\ndaily_returns=portfolio_func.calc_daily_returns(closes)\nhuber = sm.robust.scale.Huber()\n#Mean and standar deviation returns\nreturns_av, scale = huber(daily_returns)\n\nmodel.assets = Set()\nmodel.T = Set(initialize = range(1994, 2014))\nmodel.max_risk = Param(mutable = True, initialize = .00305)\nmodel.R = Param(model.T, model.assets)\n\ndef mean_init(model, j):\n return sum(model.R[i, j] for i in model.T)/len(model.T)\nmodel.mean = Param(model.assets, initialize = mean_init)\n\ndef Q_init(model, i, j):\n return sum((model.R[k, i] - model.mean[i])*(model.R[k, j] - model.mean[j]) for k in model.T)\nmodel.Q = Param(model.assets, model.assets, initialize = Q_init)\nmodel.alloc = Var(model.assets, within=NonNegativeReals)\n\ndef risk_bound_rule(model):\n return (sum(sum(model.Q[i, j] * model.alloc[i] * model.alloc[j] for i in model.assets)for j in model.assets) <= model.max_risk)\nmodel.risk_bound = Constraint(rule=risk_bound_rule)\n\ndef tot_mass_rule(model):\n return (sum(model.alloc[j] for j in model.assets) == 1)\nmodel.tot_mass = Constraint(rule=tot_mass_rule)\n\ndef objective_rule(model):\n return summation(model.mean, model.alloc)\nmodel.objective = Objective(sense=maximize, rule=objective_rule)\n\n!type dietdata.dat\n\n!pyomo solve --solver=glpk diet.py dietdata.dat\n\n!type results.yml",
"2. Uso de Pandas para descargar datos de precios de cierre\nUna vez cargados los paquetes, es necesario definir los tickers de las acciones que se usarán, la fuente de descarga (Yahoo en este caso, pero también se puede desde Google) y las fechas de interés. Con esto, la función DataReader del paquete pandas_datareader bajará los precios solicitados.\nNota: Usualmente, las distribuciones de Python no cuentan, por defecto, con el paquete pandas_datareader. Por lo que será necesario instalarlo aparte. El siguiente comando instala el paquete en Anaconda:\n*conda install -c conda-forge pandas-datareader *\n3. Formulación del riesgo de un portafolio y simulación Montecarlo",
"r=0.0001\nresults_frame = portfolio_func.sim_mont_portfolio(daily_returns,100000,r)\n\n#Sharpe Ratio\nmax_sharpe_port = results_frame.iloc[results_frame['Sharpe'].idxmax()]\n#Menor SD\nmin_vol_port = results_frame.iloc[results_frame['SD'].idxmin()]\n\nplt.scatter(results_frame.SD,results_frame.Returns,c=results_frame.Sharpe,cmap='RdYlBu')\nplt.xlabel('Volatility')\nplt.ylabel('Returns')\nplt.colorbar()\n#Sharpe Ratio\nplt.scatter(max_sharpe_port[1],max_sharpe_port[0],marker=(5,1,0),color='r',s=1000);\n#Menor SD\nplt.scatter(min_vol_port[1],min_vol_port[0],marker=(5,1,0),color='g',s=1000);\n\npd.DataFrame(max_sharpe_port)\n\npd.DataFrame(min_vol_port)",
"4. Optimización de portafolios",
"N=5000\nresults_frame_optim = portfolio_func.optimal_portfolio(daily_returns,N,r)\n\n#Montecarlo\nplt.scatter(results_frame.SD,results_frame.Returns,c=results_frame.Sharpe,cmap='RdYlBu')\nplt.xlabel('Volatility')\nplt.ylabel('Returns')\nplt.colorbar()\n#Markowitz\nplt.plot(results_frame_optim.SD, results_frame_optim.Returns, 'b-o');\n\n#Sharpe Ratio\nmax_sharpe_port_optim = results_frame_optim.iloc[results_frame_optim['Sharpe'].idxmax()]\n#Menor SD\nmin_vol_port_optim = results_frame_optim.iloc[results_frame_optim['SD'].idxmin()]\n\n#Markowitz\nplt.scatter(results_frame_optim.SD,results_frame_optim.Returns,c=results_frame_optim.Sharpe,cmap='RdYlBu');\nplt.xlabel('Volatility')\nplt.ylabel('Returns')\nplt.colorbar()\n#Sharpe Ratio\nplt.scatter(max_sharpe_port_optim[1],max_sharpe_port_optim[0],marker=(5,1,0),color='r',s=1000);\n#SD\nplt.scatter(min_vol_port_optim[1],min_vol_port_optim[0],marker=(5,1,0),color='g',s=1000);\n\npd.DataFrame(max_sharpe_port_optim)\n\npd.DataFrame(min_vol_port_optim)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
samirma/deep-learning
|
tensorboard/Anna_KaRNNa_Name_Scoped.ipynb
|
mit
|
[
"Anna KaRNNa\nIn this notebook, I'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.\nThis network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.\n<img src=\"assets/charseq.jpeg\" width=\"500\">",
"import time\nfrom collections import namedtuple\n\nimport numpy as np\nimport tensorflow as tf",
"First we'll load the text file and convert it into integers for our network to use.",
"with open('anna.txt', 'r') as f:\n text=f.read()\nvocab = set(text)\nvocab_to_int = {c: i for i, c in enumerate(vocab)}\nint_to_vocab = dict(enumerate(vocab))\nchars = np.array([vocab_to_int[c] for c in text], dtype=np.int32)\n\ntext[:100]\n\nchars[:100]",
"Now I need to split up the data into batches, and into training and validation sets. I should be making a test set here, but I'm not going to worry about that. My test will be if the network can generate new text.\nHere I'll make both input and target arrays. The targets are the same as the inputs, except shifted one character over. I'll also drop the last bit of data so that I'll only have completely full batches.\nThe idea here is to make a 2D matrix where the number of rows is equal to the number of batches. Each row will be one long concatenated string from the character data. We'll split this data into a training set and validation set using the split_frac keyword. This will keep 90% of the batches in the training set, the other 10% in the validation set.",
"def split_data(chars, batch_size, num_steps, split_frac=0.9):\n \"\"\" \n Split character data into training and validation sets, inputs and targets for each set.\n \n Arguments\n ---------\n chars: character array\n batch_size: Size of examples in each of batch\n num_steps: Number of sequence steps to keep in the input and pass to the network\n split_frac: Fraction of batches to keep in the training set\n \n \n Returns train_x, train_y, val_x, val_y\n \"\"\"\n \n \n slice_size = batch_size * num_steps\n n_batches = int(len(chars) / slice_size)\n \n # Drop the last few characters to make only full batches\n x = chars[: n_batches*slice_size]\n y = chars[1: n_batches*slice_size + 1]\n \n # Split the data into batch_size slices, then stack them into a 2D matrix \n x = np.stack(np.split(x, batch_size))\n y = np.stack(np.split(y, batch_size))\n \n # Now x and y are arrays with dimensions batch_size x n_batches*num_steps\n \n # Split into training and validation sets, keep the virst split_frac batches for training\n split_idx = int(n_batches*split_frac)\n train_x, train_y= x[:, :split_idx*num_steps], y[:, :split_idx*num_steps]\n val_x, val_y = x[:, split_idx*num_steps:], y[:, split_idx*num_steps:]\n \n return train_x, train_y, val_x, val_y\n\ntrain_x, train_y, val_x, val_y = split_data(chars, 10, 200)\n\ntrain_x.shape\n\ntrain_x[:,:10]",
"I'll write another function to grab batches out of the arrays made by split data. Here each batch will be a sliding window on these arrays with size batch_size X num_steps. For example, if we want our network to train on a sequence of 100 characters, num_steps = 100. For the next batch, we'll shift this window the next sequence of num_steps characters. In this way we can feed batches to the network and the cell states will continue through on each batch.",
"def get_batch(arrs, num_steps):\n batch_size, slice_size = arrs[0].shape\n \n n_batches = int(slice_size/num_steps)\n for b in range(n_batches):\n yield [x[:, b*num_steps: (b+1)*num_steps] for x in arrs]\n\ndef build_rnn(num_classes, batch_size=50, num_steps=50, lstm_size=128, num_layers=2,\n learning_rate=0.001, grad_clip=5, sampling=False):\n \n if sampling == True:\n batch_size, num_steps = 1, 1\n\n tf.reset_default_graph()\n \n # Declare placeholders we'll feed into the graph\n with tf.name_scope('inputs'):\n inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')\n x_one_hot = tf.one_hot(inputs, num_classes, name='x_one_hot')\n \n with tf.name_scope('targets'):\n targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')\n y_one_hot = tf.one_hot(targets, num_classes, name='y_one_hot')\n y_reshaped = tf.reshape(y_one_hot, [-1, num_classes])\n \n keep_prob = tf.placeholder(tf.float32, name='keep_prob')\n \n # Build the RNN layers\n with tf.name_scope(\"RNN_layers\"):\n lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)\n drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)\n cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers)\n \n with tf.name_scope(\"RNN_init_state\"):\n initial_state = cell.zero_state(batch_size, tf.float32)\n\n # Run the data through the RNN layers\n with tf.name_scope(\"RNN_forward\"):\n outputs, state = tf.nn.dynamic_rnn(cell, x_one_hot, initial_state=initial_state)\n \n final_state = state\n \n # Reshape output so it's a bunch of rows, one row for each cell output\n with tf.name_scope('sequence_reshape'):\n seq_output = tf.concat(outputs, axis=1,name='seq_output')\n output = tf.reshape(seq_output, [-1, lstm_size], name='graph_output')\n \n # Now connect the RNN putputs to a softmax layer and calculate the cost\n with tf.name_scope('logits'):\n softmax_w = tf.Variable(tf.truncated_normal((lstm_size, num_classes), stddev=0.1),\n name='softmax_w')\n softmax_b = tf.Variable(tf.zeros(num_classes), name='softmax_b')\n logits = tf.matmul(output, softmax_w) + softmax_b\n\n with tf.name_scope('predictions'):\n preds = tf.nn.softmax(logits, name='predictions')\n \n \n with tf.name_scope('cost'):\n loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped, name='loss')\n cost = tf.reduce_mean(loss, name='cost')\n\n # Optimizer for training, using gradient clipping to control exploding gradients\n with tf.name_scope('train'):\n tvars = tf.trainable_variables()\n grads, _ = tf.clip_by_global_norm(tf.gradients(cost, tvars), grad_clip)\n train_op = tf.train.AdamOptimizer(learning_rate)\n optimizer = train_op.apply_gradients(zip(grads, tvars))\n \n # Export the nodes \n export_nodes = ['inputs', 'targets', 'initial_state', 'final_state',\n 'keep_prob', 'cost', 'preds', 'optimizer']\n Graph = namedtuple('Graph', export_nodes)\n local_dict = locals()\n graph = Graph(*[local_dict[each] for each in export_nodes])\n \n return graph",
"Hyperparameters\nHere I'm defining the hyperparameters for the network. The two you probably haven't seen before are lstm_size and num_layers. These set the number of hidden units in the LSTM layers and the number of LSTM layers, respectively. Of course, making these bigger will improve the network's performance but you'll have to watch out for overfitting. If your validation loss is much larger than the training loss, you're probably overfitting. Decrease the size of the network or decrease the dropout keep probability.",
"batch_size = 100\nnum_steps = 100\nlstm_size = 512\nnum_layers = 2\nlearning_rate = 0.001",
"Write out the graph for TensorBoard",
"model = build_rnn(len(vocab), \n batch_size=batch_size,\n num_steps=num_steps,\n learning_rate=learning_rate,\n lstm_size=lstm_size,\n num_layers=num_layers)\n\nwith tf.Session() as sess:\n \n sess.run(tf.global_variables_initializer())\n file_writer = tf.summary.FileWriter('./logs/3', sess.graph)",
"Training\nTime for training which is is pretty straightforward. Here I pass in some data, and get an LSTM state back. Then I pass that state back in to the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I calculate the validation loss and save a checkpoint.",
"!mkdir -p checkpoints/anna\n\nepochs = 10\nsave_every_n = 200\ntrain_x, train_y, val_x, val_y = split_data(chars, batch_size, num_steps)\n\nmodel = build_rnn(len(vocab), \n batch_size=batch_size,\n num_steps=num_steps,\n learning_rate=learning_rate,\n lstm_size=lstm_size,\n num_layers=num_layers)\n\nsaver = tf.train.Saver(max_to_keep=100)\n\nwith tf.Session() as sess:\n sess.run(tf.global_variables_initializer())\n \n # Use the line below to load a checkpoint and resume training\n #saver.restore(sess, 'checkpoints/anna20.ckpt')\n \n n_batches = int(train_x.shape[1]/num_steps)\n iterations = n_batches * epochs\n for e in range(epochs):\n \n # Train network\n new_state = sess.run(model.initial_state)\n loss = 0\n for b, (x, y) in enumerate(get_batch([train_x, train_y], num_steps), 1):\n iteration = e*n_batches + b\n start = time.time()\n feed = {model.inputs: x,\n model.targets: y,\n model.keep_prob: 0.5,\n model.initial_state: new_state}\n batch_loss, new_state, _ = sess.run([model.cost, model.final_state, model.optimizer], \n feed_dict=feed)\n loss += batch_loss\n end = time.time()\n print('Epoch {}/{} '.format(e+1, epochs),\n 'Iteration {}/{}'.format(iteration, iterations),\n 'Training loss: {:.4f}'.format(loss/b),\n '{:.4f} sec/batch'.format((end-start)))\n \n \n if (iteration%save_every_n == 0) or (iteration == iterations):\n # Check performance, notice dropout has been set to 1\n val_loss = []\n new_state = sess.run(model.initial_state)\n for x, y in get_batch([val_x, val_y], num_steps):\n feed = {model.inputs: x,\n model.targets: y,\n model.keep_prob: 1.,\n model.initial_state: new_state}\n batch_loss, new_state = sess.run([model.cost, model.final_state], feed_dict=feed)\n val_loss.append(batch_loss)\n\n print('Validation loss:', np.mean(val_loss),\n 'Saving checkpoint!')\n saver.save(sess, \"checkpoints/anna/i{}_l{}_{:.3f}.ckpt\".format(iteration, lstm_size, np.mean(val_loss)))\n\ntf.train.get_checkpoint_state('checkpoints/anna')",
"Sampling\nNow that the network is trained, we'll can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one, to predict the next one. And we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.\nThe network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.",
"def pick_top_n(preds, vocab_size, top_n=5):\n p = np.squeeze(preds)\n p[np.argsort(p)[:-top_n]] = 0\n p = p / np.sum(p)\n c = np.random.choice(vocab_size, 1, p=p)[0]\n return c\n\ndef sample(checkpoint, n_samples, lstm_size, vocab_size, prime=\"The \"):\n prime = \"Far\"\n samples = [c for c in prime]\n model = build_rnn(vocab_size, lstm_size=lstm_size, sampling=True)\n saver = tf.train.Saver()\n with tf.Session() as sess:\n saver.restore(sess, checkpoint)\n new_state = sess.run(model.initial_state)\n for c in prime:\n x = np.zeros((1, 1))\n x[0,0] = vocab_to_int[c]\n feed = {model.inputs: x,\n model.keep_prob: 1.,\n model.initial_state: new_state}\n preds, new_state = sess.run([model.preds, model.final_state], \n feed_dict=feed)\n\n c = pick_top_n(preds, len(vocab))\n samples.append(int_to_vocab[c])\n\n for i in range(n_samples):\n x[0,0] = c\n feed = {model.inputs: x,\n model.keep_prob: 1.,\n model.initial_state: new_state}\n preds, new_state = sess.run([model.preds, model.final_state], \n feed_dict=feed)\n\n c = pick_top_n(preds, len(vocab))\n samples.append(int_to_vocab[c])\n \n return ''.join(samples)\n\ncheckpoint = \"checkpoints/anna/i3560_l512_1.122.ckpt\"\nsamp = sample(checkpoint, 2000, lstm_size, len(vocab), prime=\"Far\")\nprint(samp)\n\ncheckpoint = \"checkpoints/anna/i200_l512_2.432.ckpt\"\nsamp = sample(checkpoint, 1000, lstm_size, len(vocab), prime=\"Far\")\nprint(samp)\n\ncheckpoint = \"checkpoints/anna/i600_l512_1.750.ckpt\"\nsamp = sample(checkpoint, 1000, lstm_size, len(vocab), prime=\"Far\")\nprint(samp)\n\ncheckpoint = \"checkpoints/anna/i1000_l512_1.484.ckpt\"\nsamp = sample(checkpoint, 1000, lstm_size, len(vocab), prime=\"Far\")\nprint(samp)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
gregorjerse/rt2
|
2015_2016/lab3/triangulation.ipynb
|
gpl-3.0
|
[
"from collections import defaultdict",
"Is a triangulated surface orientable?\nEach triangle is stored as a tuple of its vertices. Each vertex is labeled with a non negative integer.\nA triangulation of a surface is stored as a simple graph. Each triangle in the triangulation is stored in the corresponding node in the graph. Two nodes in the graph are connected when the triangles stored inside them share a common edge.\nRemark: a graph is cubic if a triangulated surface is 2D manifold without a boundary.",
"class Triangle:\n \"\"\"\n A triangle is represented as a list of its\n vertices (labeled with natural numbers).\n \"\"\"\n def __init__(self, vertices, neighbours=None):\n assert len(vertices) == 3, 'A triangle should have 3 vertices'\n self.vertices = sorted(vertices)\n self.neighbours = neighbours if neighbours is not None else set()\n \n @property\n def edges(self):\n \"\"\"\n Return the generator of edges 'in the positive direction'.\n \"\"\"\n v, n = self.vertices, len(self.vertices)\n return ((v[i], v[(i + 1) % n]) for i in range(n))\n\n def contains_edge(self, edge):\n '''\n Return True if the triangle contains the given edge. \n '''\n return set(edge) <= set(self.vertices)\n \n def neighbour_edge(self, edge):\n '''\n Return the neighbouring triangle that shared the edge with out triangle.\n If no such neighbouring triangle exists an Exception is raised.\n '''\n neighbour = [n for n in self.neighbours if n.contains_edge(edge)]\n assert len(neighbour) == 1\n return neighbour[0]",
"Example triangulations: a set of triangles that represents a torus and a projective plane.",
"torus_triangles = [\n (0, 1, 6), (0, 4, 6), (1, 6, 7), (1, 2, 7), (2, 7, 4), (2, 0, 4), (4, 5, 8), (4, 6, 8), (6, 3, 8),\n (6, 7, 3), (3, 7, 5), (7, 4, 5), (0, 1, 5), (1, 5, 8), (1, 2, 8), (2, 8, 3), (2, 0, 3), (0, 5, 3),\n]\n\nprojective_plane_triangles = [\n (1, 2, 7), (1, 7, 3), (2, 7, 5), (7, 3, 6), (2, 3, 5), (4, 5, 7),\n (7, 4, 6), (2, 6, 3), (4, 5, 3), (4, 6, 2), (1, 3, 4), (4, 1, 2),\n]",
"First we create a graph from the set of triangles. No ordered structure is computed here.",
"def create_triangulation(triangles):\n '''\n Create a triangulation from a set of triangles. \n '''\n n = max([max(triangle) for triangle in triangles]) + 1\n triangles = [Triangle(triangle) for triangle in triangles]\n # For each vertex compute the set of triangles containing the vertex. \n vertex_triangles = defaultdict(set)\n for triangle in triangles:\n for vertex in triangle.vertices:\n vertex_triangles[vertex].add(triangle)\n # Make connection between neighbourhood triangles.\n for triangle in triangles:\n for (v1, v2) in triangle.edges:\n neighbour = (vertex_triangles[v1] & vertex_triangles[v2]).difference((triangle,))\n assert len(neighbour) == 1\n triangle.neighbours.add(neighbour.pop())\n # Each triangle should have exactly 3 neighbours (2D manifold without a boundary)\n assert len(triangle.neighbours) == 3\n return triangles ",
"An oriented version of a triangle is represented as a tuple (triangle, orientation). In it we define helper functions enext and fnext.",
"class OrientedTriangle:\n def __init__(self, triangle, orientation):\n self.triangle = triangle\n self.orientation = orientation\n \n @property\n def is_positively_oriented(self):\n return self.orientation in [0, 1, 2]\n\n @property\n def get_leading_edge(self):\n edge = list(self.triangle.edges)[self.orientation & 3]\n return edge if self.is_positively_oriented else tuple(reversed(edge))\n\n @property\n def enext(self):\n \"\"\"\n Get the next orientation of the same type.\n \"\"\"\n orientation = (((self.orientation & 3) + 1) % 3) + (self.orientation & 4)\n return OrientedTriangle(self.triangle, orientation)\n \n @property\n def fnext(self):\n '''\n Compute return the value of fnext for the oriented triangle.\n '''\n reversed_edge = tuple(reversed(self.get_leading_edge))\n # Get the neighbour that shares the leading edge with our triangle.\n neighbour = self.triangle.neighbour_edge(reversed_edge)\n return OrientedTriangle(neighbour, 0).orient_triangle(reversed_edge)\n\n def same_orientation(self, triangle):\n return self.is_positively_oriented == triangle.is_positively_oriented\n\n def orient_triangle(self, edge):\n \"\"\"\n Orient the triangle so that the given edge is its lead edge. Return self.\n \"\"\"\n es, re = list(self.triangle.edges), tuple(reversed(edge)) \n self.orientation = es.index(edge) if edge in es else es.index(re) + 4\n return self",
"Finally: a function which checks whether a given triangulation is orientable.",
"def orientable(triangles):\n '''\n Is a triangulation orientable?\n '''\n return is_orientable(OrientedTriangle(create_triangulation(triangles)[0], 0))\n\ndef is_orientable(oriented_triangle):\n triangle = oriented_triangle.triangle\n chosen_orientation = getattr(triangle, 'orientation', None)\n if chosen_orientation is None:\n triangle.orientation = oriented_triangle.orientation\n t1 = oriented_triangle.fnext\n t2 = oriented_triangle.enext.fnext\n t3 = oriented_triangle.enext.enext.fnext\n return is_orientable(t1) and is_orientable(t2) and is_orientable(t3)\n else:\n return oriented_triangle.same_orientation(OrientedTriangle(triangle, chosen_orientation))\n\norientable(torus_triangles)\n\norientable(projective_plane_triangles)"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
NREL/bifacial_radiance
|
docs/tutorials/6 - Advanced topics - Understanding trackerdict structure.ipynb
|
bsd-3-clause
|
[
"6 - Advanced topics: Understanding trackerdict structure\nTutorial 6 gives a good, detailed introduction to the trackerdict structure step by step.\nHere is a condensed summary of functions you can use to explore the tracker dictionary.\nSteps:\n<ol>\n <li> <a href='#step1'> Create a short Simulation + tracker dictionary beginning to end for 1 day </a></li>\n <li> <a href='#step2'> Explore the tracker dictionary </a></li>\n <li> <a href='#step3'> Explore Save Options </a></li>\n</ol>\n\n<a id='step 1'></a>\n1. Create a short Simulation + tracker dictionary beginning to end for 1 day",
"import bifacial_radiance\nfrom pathlib import Path\nimport os\n\ntestfolder = str(Path().resolve().parent.parent / 'bifacial_radiance' / 'Tutorial_06')\nif not os.path.exists(testfolder):\n os.makedirs(testfolder) \n\nsimulationName = 'tutorial_6'\nmoduletype = 'test-module' \nalbedo = \"litesoil\" # this is one of the options on ground.rad\nlat = 37.5 \nlon = -77.6\n\n# Scene variables\nnMods = 3\nnRows = 1\nhub_height = 2.3 # meters\npitch = 10 # meters # We will be using pitch instead of GCR for this example.\n\n# Traking parameters\ncumulativesky = False\nlimit_angle = 45 # tracker rotation limit angle\nangledelta = 0.01 # we will be doing hourly simulation, we want the angle to be as close to real tracking as possible.\nbacktrack = True \n\n#makeModule parameters\n# x and y will be defined by the cell-level module parameters\nxgap = 0.01\nygap = 0.10\nzgap = 0.05\nnumpanels = 2\ntorquetube = True\naxisofrotationTorqueTube = False\ndiameter = 0.1\ntubetype = 'Oct' # This will make an octagonal torque tube.\nmaterial = 'black' # Torque tube will be made of this material (0% reflectivity)\n\ntubeParams = {'diameter':diameter,\n 'tubetype':tubetype,\n 'material':material,\n 'axisofrotation':axisofrotationTorqueTube,\n 'visible':torquetube}\n\n# Simulation range between two hours\nstartdate = '11_06_11' # Options: mm_dd, mm_dd_HH, mm_dd_HHMM, YYYY-mm-dd_HHMM\nenddate = '11_06_14'\n\n# Cell Parameters\nnumcellsx = 6\nnumcellsy = 12\nxcell = 0.156\nycell = 0.156\nxcellgap = 0.02\nycellgap = 0.02\n\ndemo = bifacial_radiance.RadianceObj(simulationName, path=testfolder) \ndemo.setGround(albedo) \nepwfile = demo.getEPW(lat,lon) \nmetdata = demo.readWeatherFile(epwfile, starttime=startdate, endtime=enddate) \ncellLevelModuleParams = {'numcellsx': numcellsx, 'numcellsy':numcellsy, \n 'xcell': xcell, 'ycell': ycell, 'xcellgap': xcellgap, 'ycellgap': ycellgap}\nmymodule = demo.makeModule(name=moduletype, xgap=xgap, ygap=ygap, zgap=zgap, \n numpanels=numpanels, cellModule=cellLevelModuleParams, tubeParams=tubeParams)\n\nsceneDict = {'pitch':pitch,'hub_height':hub_height, 'nMods': nMods, 'nRows': nRows} \ndemo.set1axis(limit_angle=limit_angle, backtrack=backtrack, gcr=mymodule.sceney / pitch, cumulativesky=cumulativesky)\ndemo.gendaylit1axis()\ndemo.makeScene1axis(module=mymodule, sceneDict=sceneDict)\ndemo.makeOct1axis()\ndemo.analysis1axis()",
"<a id='step2'></a>\n2. Explore the tracker dictionary\nYou can use any of the below options to explore the tracking dictionary. Copy it into an empty cell to see their contents.",
"print(demo) # Shows all keys for top-level RadianceObj\n\ntrackerkeys = sorted(demo.trackerdict.keys()) # get the trackerdict keys to see a specific hour.\n\ndemo.trackerdict[trackerkeys[0]] # This prints all trackerdict content\ndemo.trackerdict[trackerkeys[0]]['scene'] # This shows the Scene Object contents\ndemo.trackerdict[trackerkeys[0]]['scene'].module.scenex # This shows the Module Object in the Scene's contents\ndemo.trackerdict[trackerkeys[0]]['scene'].sceneDict # Printing the scene dictionary saved in the Scene Object\ndemo.trackerdict[trackerkeys[0]]['scene'].sceneDict['tilt'] # Addressing one of the variables in the scene dictionary\n\n\n# Looking at the AnalysisObj results indivudally\ndemo.trackerdict[trackerkeys[0]]['AnalysisObj'] # This shows the Analysis Object contents\ndemo.trackerdict[trackerkeys[0]]['AnalysisObj'].mattype # Addressing one of the variables in the Analysis Object\n\n# Looking at the Analysis results Accumulated for the day:\ndemo.Wm2Back # this value is the addition of every individual irradiance result for each hour simulated.\n\n# Access module values\ndemo.trackerdict[trackerkeys[0]]['scene'].module.scenex\n",
"<a id='step3'></a>\n3. Explore Save Options\nThe following lines offer ways to save your trackerdict or your demo object.",
"demo.exportTrackerDict(trackerdict = demo.trackerdict, savefile = 'results\\\\test_reindexTrue.csv', reindex = False)\ndemo.save(savefile = 'results\\\\demopickle.pickle')\n"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ls-cwi/eXamine
|
doc/tutorial/eXamineNotebook/eXamineTutorial.ipynb
|
gpl-2.0
|
[
"eXamine Automation Tutorial\nThis case study demonstrates how to use the REST API of eXamine to study an annotated module in Cytoscape. The module that we study has 17 nodes and 18 edges and occurs within the KEGG mouse network consisting of 3863 nodes and 29293 edges. The module is annotated with sets from four different categories: (1) KEGG pathways and the GO categories (2) molecular process, (3) biological function and (4) cellular component.\nThere are three steps for visualizing subnetwork modules with eXamine. In the following, we will describe and perform the steps using the Automation functionality of Cytoscape. We refer to tutorial.pdf for instructions using the Cytoscape GUI.",
"# HTTP Client for Python\nimport requests\n\n# Cytoscape port number\nPORT_NUMBER = 1234\n\nBASE_URL = \"https://raw.githubusercontent.com/ls-cwi/eXamine/master/data/\"\n\n# The Base path for the CyRest API\nBASE = 'http://localhost:' + str(PORT_NUMBER) + '/v1/'\n\n#Helper command to call a command via HTTP POST\ndef executeRestCommand(namespace=\"\", command=\"\", args={}):\n postString = BASE + \"commands/\" + namespace + \"/\" + command\n res = requests.post(postString,json=args)\n return res",
"Importing network and node-specific annotation\nWe start by importing the KEGG network directly from the eXamine repository on github.",
"# First we import our demo network\nexecuteRestCommand(\"network\", \"import url\", {\"indexColumnSourceInteraction\":\"1\",\n \"indexColumnTargetInteraction\":\"2\",\n \"url\": BASE_URL + \"edges.txt\"})",
"We then import node-specific annotation directly from the eXamine repository on github. The imported file contains set membership information for each node. Note that it is important to ensure that set-membership information is imported as List of String, as indicated by sl. Additionaly, note that the default list separator is a pipe character.",
"# Next we import node annotations\nexecuteRestCommand(\"table\", \"import url\",\n {\"firstRowAsColumnNames\":\"true\",\n \"keyColumnIndex\" : \"1\",\n \"startLoadRow\" : \"1\",\n \"dataTypeList\":\"s,s,f,f,f,s,s,s,sl,sl,sl,sl\",\n \"url\": BASE_URL + \"nodes_induced.txt\"})",
"Import set-specific annotation\nWe now describe how to import the set-specific annotations. In order to do so, eXamine needs to generate group nodes for each of the sets present in the module. To do so, we need to select nodes present in the module; these nodes have the value small in column Module, which we do as follows.",
"executeRestCommand(\"network\", \"select\", {\"nodeList\":\"Module:small\"})",
"Now that we have selected the nodes of the module, we can proceed with generating group nodes for each set (Process, Function, Component and Pathway).",
"executeRestCommand(\"examine\", \"generate groups\",\n {\"selectedGroupColumns\" : \"Process,Function,Component,Pathway\"})",
"We import set-specific annotation, again directly from github.",
"#Ok, time to enrich our newly greated group nodes with some interesting annotations\nexecuteRestCommand(\"table\", \"import url\",\n {\"firstRowAsColumnNames\":\"true\",\n \"keyColumnIndex\" : \"1\",\n \"startLoadRow\" : \"1\",\n \"url\" : BASE_URL + \"sets_induced.txt\"})",
"Set-based visualization using eXamine\nWe now describe how to visualize the current selection. First, we set the visualization options.",
"# Adjust the visualization settings\nexecuteRestCommand(\"examine\", \"update settings\",\n {\"labelColumn\" : \"Symbol\",\n \"urlColumn\" : \"URL\",\n \"scoreColumn\" : \"Score\",\n \"showScore\" : \"true\",\n \"selectedGroupColumns\" : \"Function,Pathway\"})",
"We then select five groups.",
"# Select groups for demarcation in the visualization\nexecuteRestCommand(\"examine\", \"select groups\",\n {\"selectedGroups\":\"GO:0008013,GO:0008083,mmu04070,mmu05200,mmu04520\"})",
"There are two options: either we launch the interactive eXamine visualization, or we directly generate an SVG.",
"# Launch the interactive eXamine visualization\nexecuteRestCommand(\"examine\", \"interact\", {})",
"The command below launches the eXamine window. If this window is blank, simply resize the window to force a redraw of the scene.",
"# Export a graphic instead of interacting with it\n# use absolute path; writes in Cytoscape directory if not changed \nexecuteRestCommand(\"examine\", \"export\", {\"path\": \"your-path-here.svg\"})"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
JoseGuzman/myIPythonNotebooks
|
Stochastic_systems/Normal_distribution.ipynb
|
gpl-2.0
|
[
"from scipy.stats import norm",
"<H2>A normally distributed random variable</H2>\n<P>\nAssume $X$ is a random variable which is normally distributed:\n\n</P>\n$X \\sim N(\\mu_X, \\sigma_X^2),$ \nwhere $\\mu_X \\in \\mathbb{R} $ is the mean (or location) and $\\sigma_X^2 > 0$ is the variance (squared scale)",
"# create a normally distributed random variable with mu and sigma\nmu = 28.74\nsigma = 8.33 # standard deviation!\nrv_X = norm(loc = mu, scale = sigma)\n\n# plot the theoretical and empirical distributions\nx = np.linspace(start = rv_X.ppf(0.001), stop = rv_X.ppf(0.999), num = 100)\nplt.plot(x, rv_X.pdf(x), color = 'r', lw=2, label='theoretical');\nplt.hist( rv_X.rvs(size = 1000), rwidth=.85, facecolor='k', normed=1, label='empirical');\nplt.legend(frameon=0);",
"<H2>Sum of 2 normally distributed random variables</H2>\n<P>\nAssume $X$ and $Y$ are <B>independent</B> random variables and are both normally distributed, then the sum of the two random variables $X+Y$ will be also a random variable $Z$ that has a normal distribution with the following properties:\n\n</P>\n$Z \\sim N(\\mu_Z, \\sigma_Z^2),$ \nThe resulting mean is simply sum of the two means: $\\mu_Z= \\mu_X+\\mu_Y,$\nand the variance is the sum of the two variances: $\\sigma_Z^2 = \\sigma_X^2 + \\sigma_Y^2,$\nor alternatively, the standard deviation: $\\sigma_Z = \\sqrt{\\sigma_X^2 + \\sigma_Y^2}$",
"# create a second normally distributed random variable with mu and sigma\nmu = 28.74\nsigma = 8.33 # standard deviation!\nrv_Y = norm(loc = mu, scale = sigma)\n\n# The theoretical distrubution of Z\nrv_Z = norm(loc = mu+mu, scale = sqrt(sigma**2+sigma**2))\n\n# The empirical distribution of Z based on the sum of two random variables\ndata = rv_X.rvs(100) + rv_Y.rvs(100)\n\n\n# Plot resulting distributions\nx = np.linspace(start = rv_Z.ppf(0.001), stop = rv_Z.ppf(0.999), num = 100)\nplt.plot(x, rv_Z.pdf(x), color = 'r', lw=2, label='theoretical');\n\nplt.hist( data, rwidth=.85, facecolor='k', normed=1, label='empirical');\nplt.legend(frameon=0);",
"<H3>Resulted mean and standard deviation</H3>",
"# Location and scale from data\nprint('Location = %f, scale = %f'%norm.fit(data)) ",
"<H3>Theoretical mean and standard deviation</H3>",
"# Theoretical location and scale\nmu_Z = mu + mu\nsigma_Z = sqrt(sigma**2 + sigma**2)\nprint('Location = %f, scale =%f'%(mu_Z , sigma_Z))"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
lcharleux/numerical_analysis
|
doc/ODE/ODE_harmonic_oscillator.ipynb
|
gpl-2.0
|
[
"Ordinary Differential Equations : Practical work on the harmonic oscillator\nIn this example, you will simulate an harmonic oscillator and compare the numerical solution to the closed form one. \nTheory\nRead about the theory of harmonic oscillators on Wikipedia\nMechanical oscillator\nThe case of the one dimensional mechanical oscillator leads to the following equation:\n$$\nm \\ddot x + \\mu \\dot x + k x = m \\ddot x_d\n$$\nWhere:\n\n$x$ is the position,\n$\\dot x$ and $\\ddot x$ are respectively the speed and acceleration,\n$m$ is the mass,\n$\\mu$ the \n$k$ the stiffness,\nand $\\ddot x_d$ the driving acceleration which is null if the oscillator is free.\n\nCanonical equation\nMost 1D oscilators follow the same canonical equation:\n$$\n\\ddot x + 2 \\zeta \\omega_0 \\dot x + \\omega_0^2 x = \\ddot x_d\n$$\nWhere:\n\n$\\omega_0$ is the undamped pulsation,\n$\\zeta$ is damping ratio,\n$\\ddot x_d$ is the imposed acceleration. \n\nIn the case of the mechanical oscillator:\n$$\n\\omega_0 = \\sqrt{\\dfrac{k}{m}}\n$$\n$$\n\\zeta = \\dfrac{\\mu}{2\\sqrt{mk}} \n$$\nUndampened oscillator\nFirst, you will focus on the case of an undamped free oscillator ($\\zeta = 0$, $\\ddot x_d = 0$) with the following initial conditions:\n$$\n\\left \\lbrace\n\\begin{split}\nx(t = 0) = 1 \\\n\\dot x(t = 0) = 0\n\\end{split}\\right.\n$$\nThe closed form solution is:\n$$\nx(t) = a\\cos \\omega_0 t\n$$",
"# Setup\n%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom scipy.integrate import odeint\n\n# Setup\nf0 = 1.\nomega0 = 2. * np.pi * f0\na = 1.",
"Part 1: theoretical solution\nPlot the closed form solution of the undamped free oscillator for 5 periods.\nSteps:\n\nCreate an array $t$ reprenting time,\nCreate a function $x_{th}$ representing the amplitude of the closed form solution,\nPlot $x_{th}$ vs $t$.",
"# Complete here\n#t = \n#xth = \n\n",
"Part 2: Numerical solution with Euler integrator\nSolve the problem introduced in question 1 with the Euler integrator.\nSteps:\n\nRewrite the canonical equation as a system of first order ODEs depending of the variable $X = [x, \\dot x]$,\nCode the derivative function $f(X,t) = \\dot X$,\nDefine initial conditions $X_0$,\nSolve the problem.\nPlot the position $x$ along and compare it with the theoretical solution.\n\nPart 3: Energies an errors\nCalculate and plot the kinetic energy $E_c$, the potential energy $E_p$ and the total energy $E_t = E_c + E_p$, comment the result.\nSteps:\n\nCalculate $E_c$,\nCalculate $E_p$,\nCalculate $E_t$,\nPlot the evolution of the 3 energies. You can use plt.fill_between instead of plt.plot,\nUse the results to define a relative error estimator base on energies.\n\nPart 4: Numerical solution convergence\nPlot the effect of the number time steps $n_t$ on the error $e$.\nSteps:\n\nCreate an array containing the different number of time steps from 100 to 100000,\nLoop over this array and calculate the the error for each configuration,\nPlot the error as a function of $n_t$.\n\nPart 5: integrator benchmark\nRewrite the code of part 4 in order to compare the RK4 and ODEint solvers with the Euler solver. Comment the efficiency of each solver.\nPart 6: Error vs. time\nModify the code of part 5 in order to measure the computing time of each method in each case. Plot the error vs. the computing time."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
sdpython/ensae_teaching_cs
|
_doc/notebooks/td2a_ml/td2a_ethics.ipynb
|
mit
|
[
"Machine Learning éthique\nCe notebook est inspiré de l'article FairTest: Discovering Unwarranted Associations in Data-Driven Applications et propose d'étudier une façon de vérifier qu'un modèle ou une métrique n'est pas biaisé par rapport à certains critères.",
"from jyquickhelper import add_notebook_menu\nadd_notebook_menu()",
"Principe\nUn problème de machine learning dans sa définition la plus simple consiste à prédire $Y$ en fonction de $I$. On souhaite construire une fonction $f(I)$ qui s'approche le plus possible de $Y$. On appelle $I$ l'ensemble des données utilisateurs ou user input et $f(I)$ la sortie du modèle ou output to users (O). On veut découvrir si la sortie du modèle est biaisée d'une façon non-éthique par rapport à des attributs dits protégés ou protected attributes (S) tels que l'origine éthnique ou le genre. On tient compte également du contexte, code postal, ...) ou context attributes (X) et de variables explicative ou explanotory attributes (E). On a donc :\n\nI : les variables en entrée du modèles,\nO : la prédiction du modèle,\nS : les attributs protégés, toute corrélation entre la sortie et ces attributs est indésirable,\nX : le contexte (principalement le lieu géographique)\nE : les variables explicatives (contraintes qualitification), la sortie est a priori corrélée avec ces attributs, de façon quasi prévisible mais cela reste éthique.\n\nUn exemple. Supposons qu'on cherche à recommander un objet à la vente sur un site internet à partir des données de navigation de cette personne (I). Comme cette personne est identifiée, on peut savoir où elle habite (X), son origine éthnique (S) et son niveau d'étude (E). Le site lui recommande un livre de mathématiques (O). Est-ce que cette recommandation dépend principalement de l'origine éthnique (S) ? Et si les deux informations semblent corrélées, ne serait-ce pas plutôt son niveau d'étude (E) qui explique la recommandation ?\nConcrètement, la sortie O ne dépend que de I mais on souhaite savoir si I ne serait pas corrélées à certains facteurs qui ne sont pas éthique. C'est une corrélation inattendue mais observable. Le contexte X permet de partitionner et de vérifier si le modèle est éthique sur l'ensemble des sous-groupes de population. Le processus de recherche des corrélations indésirées s'effectue en cinq temps.\n\nOn choisit un attribut protégé $S_i$ et on réalise une partition de la population pour cet attribut.\nOn vérifie pour chaque sous-population (ou chaque partie) que la sortie O et l'attribut $S_i$ ne sont pas corrélés. C'est l'étape de détection.\nPour chaque biais détecté, on vérifie qu'il ne peut être expliqué par une des variables explicatrice auquel ce biais est acceptable. C'est l'étape de debugging.",
"%matplotlib inline",
"Installation de FairTest\nLe module ne peut pas être installé tel quel car il a été implémenté pour Python 2. Voici un lien qui devrait vous permettre de l'installer pour Python 3 si vous ne l'avez pas déjà fait.",
"# !pip install https://github.com/sdpython/fairtest/releases/download/0.1/fairtest-0.1-py3-none-any.whl",
"Données\nOn récupère les données adult pour lequel il faut prédire le fait qu'une personne ait un revenu supérieur à 50.000 dollars par an.",
"import pyensae.datasource as ds\ndata = ds.download_data(\"adult.data\",\n url=\"https://archive.ics.uci.edu/ml/machine-learning-databases/adult/\")\nnames = (\"age,workclass,fnlwgt,education,education-num,marital-status,occupation,relationship,\"\n \"race,sexe,capital-gain,capital-loss,hours-per-week,native-country,income\").split(\",\")\n\nimport pandas\ndf = pandas.read_csv(data, names=names)\ndf.head()",
"Exemple avec aequitas\naequitas propose un ensemble de métrique pour mesurer les biais éthiques.",
"from sklearn.model_selection import train_test_split\ndf2 = df.copy()\ndf2_train, df2_test = train_test_split(df2)\n\nimport numpy\nlabel = \"income\"\nX_train = df2_train.drop(label, axis=1)\ny_train = df2_train[label] == ' >50K'\ny_train = pandas.Series(numpy.array([1.0 if y else 0.0 for y in y_train]))\nX_test = df2_test.drop(label, axis=1)\ny_test = df2_test[label] == ' >50K'\ny_test = pandas.Series(numpy.array([1.0 if y else 0.0 for y in y_test]))\n\nX_train = X_train.drop(['fnlwgt'], axis=1).copy()\nX_test = X_test.drop(['fnlwgt'], axis=1).copy()\n\ncat_col = list(_ for _ in X_train.select_dtypes(\"object\").columns)\ncat_col\n\nfrom sklearn.tree import DecisionTreeClassifier\nfrom sklearn.pipeline import make_pipeline\nfrom sklearn.preprocessing import LabelEncoder, OneHotEncoder\n\n\n\npipe = make_pipeline(\n OneHotEncoder(handle_unknown=\"ignore\"),\n DecisionTreeClassifier(min_samples_leaf=10))\npipe.fit(X_train, y_train)\n\ndata = X_test.copy().reset_index(drop=True)\ndata[\"score\"] = pipe.predict_proba(X_test)[:, 1]\ndata[\"label_value\"] = y_test\ndata.head(n=4)\n\ndata_small = data[[\"sexe\", \"education\", \"score\", \"label_value\"]].copy()\ndata_small.head(n=5)\n\nfrom seaborn import catplot, violinplot\ng = violinplot(x=\"sexe\", y=\"score\", hue=\"label_value\", data=data_small)\n\nfrom aequitas.group import Group\ng = Group()\nxtab, _ = g.get_crosstabs(data_small)\n\nxtab\n\nfrom aequitas.plotting import Plot\n\naqp = Plot()\nfpr_plot = aqp.plot_group_metric(xtab, 'fpr')\n\nset(data_small['education'])\n\nfrom aequitas.bias import Bias\n\nb = Bias()\nbdf = b.get_disparity_predefined_groups(xtab, \n original_df=data_small, \n ref_groups_dict={'sexe': ' Male', 'education':' Masters'},\n alpha=0.05, \n check_significance=False)\n\n fpr_disparity = aqp.plot_disparity(bdf, group_metric='fpr_disparity', \n attribute_name='education')",
"Exemple avec fairtest\nNe fonctionne pas encore. Il faut passer à Python trois.",
"try:\n from fairtest import DataSource\n cont = True\nexcept ImportError:\n cont = False\n# On prend les 1000 premières lignes.\n# Le processus est assez long.\nif cont:\n dsdat = DataSource(df[:1000], budget=1, conf=0.95)\n\nif cont:\n from fairtest import Testing, train, test, report\n\n SENS = ['sex', 'race'] # Protected features\n TARGET = 'income' # Output\n EXPL = '' # Explanatory feature\n\n inv = Testing(dsdat, SENS, TARGET, EXPL)\n train([inv])\n\nif cont:\n test([inv])\n\nif cont:\n try:\n report([inv], \"adult\")\n except Exception as e:\n print(\"ERROR\")\n print(e)\n # à corriger encore",
"Exercice 1 : Construire un arbre de décision avec son propre critère\nOn pourra s'inspirer de l'article Pure Python Decision Trees,\nou How To Implement The Decision Tree Algorithm From Scratch In Python. Cet arbre de décision doit produire un résultat similaire à ceci : Contexte-education-num-in-(9.5, 11.5)-age-in(46.5, inf). Il s'agit d'implémenter l'algorithme qui suit avec la métrique Mutual Information (MI).",
"from pyquickhelper.helpgen import NbImage\nNbImage(\"fairtesttree.png\")",
"Exercice 2 : appliquer l'algorithme sur le jeu de données adulte\nExercice 3 : apprendre sans interactions",
"df2 = df.copy()\ndf2[\"un\"] = 1\ndf2[\"age10\"] = (df[\"age\"] // 10) * 10\ngr = df2[[\"age10\", \"sexe\", \"income\", \"un\"]].groupby([\"age10\", \"sexe\", \"income\"], as_index=False).sum()\ngr.head()\n\ng = gr.pivot_table(\"un\", \"age10\", [\"income\", \"sexe\"])\ng\n\ng.columns\n\ng[\"rF\"] = g[(\" <=50K\", \" Female\")] / g[(\" >50K\", \" Female\")]\ng[\"rM\"] = g[(\" <=50K\", \" Male\")] / g[(\" >50K\", \" Male\")]\ng",
"Les deux variables paraissent corrélées.\nOn apprend trois modèles :\n\nOn prédit 'income' avec l'âge et le genre, modèle M0\nOn prédit 'income' avec l'âge uniquement, modèle M1\nOn prédit 'income' avec le genre uniquement, modèle M2\nOn prédit 'income' avec les modèles M1 et M2.\n\nLe modèle prédit-il aussi bien, moins bien ? Qu'en déduisez-vous ?"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
akseshina/dl_course
|
seminar_6/hw_RNN.ipynb
|
gpl-3.0
|
[
"Protein Family Classification",
"import numpy as np\nimport pandas as pd\nimport tensorflow as tf\nfrom lazy import lazy\nfrom attrdict import AttrDict\nfrom collections import Counter\n\nimport warnings\nwarnings.filterwarnings('ignore')\n\nimport matplotlib.pyplot as plt\n%matplotlib inline\nimport seaborn as sns\n\nimport random\n\nfamily_classification_metadata = pd.read_table('../seminar_5/data/family_classification_metadata.tab')\nfamily_classification_sequences = pd.read_table('../seminar_5/data/family_classification_sequences.tab')\n\nfamily_classification_metadata.head()\n\nfamily_classification_sequences.head()",
"Task:\nUse your ProtVec embedding from homework 5 to perform protein family classification using RNN.\n\nuse 1000 most frequent families for classification\nvalidate your results on the train-test split\nreduce the dimensionality of the protein-space using Stochastic Neighbor Embedding and visualize two most frequent classes\ncompare your RNN results with SVM\n\nLet's read the embedding matrix from the original article data.",
"table = pd.read_csv('data/protVec_100d_3grams_without_quotes.csv', sep='\\t', header=None)\ntable = table.T\nheader = table.iloc[0] # grab the first row for the header\nprot2vec = table[1:] # take the data less the header row\nprot2vec.columns = header # set the header row as the df header\nprot2vec[\"AAA\"].head()",
"Let's find 1000 most frequent families.",
"most_common_families = Counter(family_classification_metadata['FamilyID']).most_common(1000)\nmost_common_families = [family for (family, count) in most_common_families]\nfamily2num = {f: i for (i, f) in enumerate(most_common_families)}",
"Let's determine what protein length we should consider.",
"seq_lens = [len(i) for i in family_classification_sequences[\"Sequences\"]]\nsns.distplot(seq_lens);\n\nsns.distplot(seq_lens, bins=100).set(xlim=(0, 5000));\n\nMAX_PROTEIN_LEN = 501",
"Now we'll generate batches.",
"all_proteins = family_classification_sequences['Sequences']\nall_families = family_classification_metadata['FamilyID']\n\nselected_ids = [i for i in range(len(all_proteins)) \n if all_families[i] in family2num and len(all_proteins[i]) <= MAX_PROTEIN_LEN]\n\nrandom.shuffle(selected_ids)\n\ntrain_ratio = 0.9\nnum_train = int(len(selected_ids) * train_ratio)\n\ntrain_ids = selected_ids[:num_train]\ntest_ids = selected_ids[num_train:]\n\nEMBED_LEN = 100\nNUM_CLASSES = 1000\nBATCH_SIZE = 128\n\ndef embedding(protein):\n res = np.zeros((MAX_PROTEIN_LEN // 3, EMBED_LEN))\n for i in range(0, (len(protein) - 3) // 3):\n try:\n res[i] = prot2vec[protein[i*3: i*3 + 3]]\n except KeyError:\n res[i] = prot2vec['<unk>']\n\n return res\n \nembedding(all_proteins[selected_ids[0]])\n\ndef batches(steps):\n for i in range(steps):\n cur_ids = random.sample(train_ids, BATCH_SIZE)\n \n cur_proteins = [embedding(p) for p in all_proteins[cur_ids]]\n \n fam_ids = [family2num[f] for f in all_families[cur_ids]]\n cur_families = np.eye(NUM_CLASSES)[fam_ids]\n \n yield cur_proteins, cur_families",
"Model",
"class SequenceClassificationModel:\n \n def __init__(self, params):\n self.params = params\n self._create_placeholders()\n self.prediction\n self.cost\n self.error\n self.optimize\n #self._create_summaries()\n \n def _create_placeholders(self):\n with tf.name_scope(\"data\"):\n self.data = tf.placeholder(tf.float32, [None, self.params.seq_length, self.params.embed_length])\n self.target = tf.placeholder(tf.float32, [None, NUM_CLASSES]) #####\n\n def _create_summaries(self):\n with tf.name_scope(\"summaries\"):\n tf.summary.scalar('loss', self.cost)\n tf.summary.scalar('error', self.error)\n self.summary = tf.summary.merge_all()\n saver = tf.train.Saver() \n \n @lazy\n def length(self):\n # lengths of sequences in the current data batch \n with tf.name_scope(\"seq_length\"):\n used = tf.sign(tf.reduce_max(tf.abs(self.data), reduction_indices=2))\n length = tf.reduce_sum(used, reduction_indices=1)\n length = tf.cast(length, tf.int32)\n return length\n \n @lazy\n def prediction(self):\n with tf.name_scope(\"recurrent_layer\"):\n output, _ = tf.nn.dynamic_rnn(\n self.params.rnn_cell(self.params.rnn_hidden),\n self.data,\n dtype=tf.float32,\n sequence_length=self.length\n )\n last = self._last_relevant(output, self.length)\n\n with tf.name_scope(\"softmax_layer\"):\n num_classes = int(self.target.get_shape()[1])\n weight = tf.Variable(tf.truncated_normal(\n [self.params.rnn_hidden, num_classes], stddev=0.01))\n bias = tf.Variable(tf.constant(0.1, shape=[num_classes]))\n prediction = tf.nn.softmax(tf.matmul(last, weight) + bias)\n return prediction\n \n @lazy\n def cost(self):\n cross_entropy = -tf.reduce_sum(self.target * tf.log(self.prediction))\n return cross_entropy\n \n @lazy\n def error(self):\n self.mistakes = tf.not_equal(\n tf.argmax(self.target, 1), tf.argmax(self.prediction, 1))\n return tf.reduce_mean(tf.cast(self.mistakes, tf.float32))\n \n @lazy\n def optimize(self):\n # we will limit the maximum weight updates for gradient clipping\n with tf.name_scope(\"optimization\"):\n gradient = self.params.optimizer.compute_gradients(self.cost)\n if self.params.gradient_clipping:\n limit = self.params.gradient_clipping\n gradient = [\n (tf.clip_by_value(g, -limit, limit), v)\n if g is not None else (None, v)\n for g, v in gradient]\n optimize = self.params.optimizer.apply_gradients(gradient)\n return optimize\n \n @staticmethod\n def _last_relevant(output, length):\n with tf.name_scope(\"last_relevant\"):\n batch_size = tf.shape(output)[0]\n max_length = int(output.get_shape()[1])\n output_size = int(output.get_shape()[2])\n index = tf.range(0, batch_size) * max_length + (length - 1)\n flat = tf.reshape(output, [-1, output_size])\n relevant = tf.gather(flat, index)\n return relevant\n\nparams = AttrDict(\n rnn_cell=tf.contrib.rnn.GRUCell,\n rnn_hidden=300,\n optimizer=tf.train.AdamOptimizer(0.002),\n batch_size=BATCH_SIZE,\n gradient_clipping=100,\n seq_length=MAX_PROTEIN_LEN//3,\n embed_length=EMBED_LEN\n)\n\ntf.reset_default_graph()\nmodel = SequenceClassificationModel(params)",
"Training and testing",
"gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.3)\ngpu_config = tf.ConfigProto(gpu_options=gpu_options)\n\nNUM_STEPS = 1000\n\nwith tf.Session(config=gpu_config) as sess:\n\n sess.run(tf.global_variables_initializer())\n \n for index, batch in enumerate(batches(NUM_STEPS)):\n feed = {model.data: batch[0], model.target: batch[1]}\n error, _ = sess.run([model.error, model.optimize], feed)\n \n if index % 50 == 0:\n print('Accuracy on step {}: {:3.1f}%'.format(index + 1, 100 * (1 - error)))\n \n # Testing\n # I run model on a one sample at time because otherwise I had problems with memory.\n sum_error = 0\n for i in range(len(test_ids)):\n if i % 1000 == 0:\n print(i, end='_')\n cur_id = [test_ids[i]]\n test_proteins = [embedding(p) for p in all_proteins[cur_id]]\n\n test_fam_ids = [family2num[f] for f in all_families[cur_id]]\n test_families = np.eye(NUM_CLASSES)[test_fam_ids]\n\n feed = {model.data: test_proteins, model.target: test_families}\n error = sess.run(model.error, feed)\n sum_error += error\n\nprint('Accuracy on test set: {:3.1f}%'.format(100 * (1 - sum_error/len(test_ids))))"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mediagit2016/workcamp-maschinelles-lernen-grundlagen
|
18-05-14-ml-workcamp/sensor-daten-10/Projekt-Sensordaten-Feature-Selektion-Workcamp-ML.ipynb
|
gpl-3.0
|
[
"<h1>WorkCamp # Maschinelles Lernen - ## Grundlagen - ###2018</h1>\n\n<h2>Praktische Übung</h2>\n\n<h3>Beispiel xx # Arbeiten mit Sensordaten ## Feature Selektion</h3>\n\nProblemstellung:<br>\nIn diesem Jupyter Notebook , werden Sie in einer Fallstudie die Aufbereitung von Daten durch Skalierung, Normalisierung, Skalenänderung und Binärisierung kennenlernen. Dies ist für einige Algorithmen für Maschinelles Lernen notwendig.\nNach Abschluss dieses Notebooks sollten Sie wissen:\n<ul>\n<li>Wie man ein Vorhersagemodellierungsproblem auf Basis einer Fragestelluung zur Classification durchgehend abarbeitet.\n<li>Wie man bisher unbekannte Daten in panda DataFrames lädt: (csv, xlsx, xls, xml, json, hdf5 etc.).\n<li>Wie man unbekannte Daten mit einer deskriptiven Statistik in python analysiert.\n<li>Wie man unbekannte Daten mit python Bibliotheken visualisiert.\n<li>Wie man erzeugte Plots, speichert und dokumentiert.\n<li>Wie man Datentransformationen verwendet, um die Performance des Modells zu verbessern, zum Beispiel Normalisierung oder Standardisierung.\n<li>Wie man Algorithmus-, oder Hyperparameter-Tuning verwendet, um die Modell-Leistung zu verbessern.\n<li>Wie man Ensemble-Methoden verwendet und eine Abstimmung der Parameter zur Verbesserung der Modell-Performance durchführt.\n<li>Wie man die Kreuz-Validierung zur Beurteilung der Performance von ML-Algorithmen einsetzt.\n<li> Auf welcher Basis eine Beurteilung der verwendetn Classification Algorithmen stattfindet. (Classification Matrix, Confusion Matrix)\n</ul>\nDie Module und Bibliotheken stehen alle in der <b>Anaconda scikit-learn</b> Umgebung zum Maschinellen Lernen direkt zur Verfügung.<br>\n<b>Arbeiten mit Zeitreihen:</b><br>\nInsbesondere beim arbeiten mit Zeitreihen (timeseries) wird, falls notwendig, statsmodels und dessen Klassen, Bibliotheken und Module nachgeladen.<br>\n<b>Tipp:</b><br>\n<b>Falls in Ihrer Version statsmodels nicht vorhanden ist, mit: !pip install statsmodels in einer Jupyter Zelle\nnachinstallieren.</b><br>\nInformationen zu statsmodels finden Sie hier: http://www.statsmodels.org/<br>\n##Eventuell Strukturbild einbauen\n##Evtl. nochmals Vorgehen als Ablaufmodell",
"# Laden der entsprechenden Module (kann etwas dauern !)\n# Wir laden die Module offen, damit man einmal sieht, was da alles benötigt wird\n# Allerdings aufpassen, dann werden die Module anderst angesprochen wie beim Standard\n# zum Beispiel pyplot und nicht plt\nfrom matplotlib import pyplot\npyplot.rcParams[\"figure.figsize\"] = (15,12)\n%matplotlib inline\nimport numpy as np #wird allerdings nicht benötigt\nfrom pandas import read_csv\nfrom pandas import set_option\nfrom pandas.plotting import scatter_matrix\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.model_selection import KFold\nfrom sklearn.model_selection import cross_val_score\nfrom sklearn.model_selection import GridSearchCV\nfrom sklearn.metrics import classification_report\nfrom sklearn.metrics import confusion_matrix\nfrom sklearn.metrics import accuracy_score\nfrom sklearn.pipeline import Pipeline\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.tree import DecisionTreeClassifier\nfrom sklearn.neighbors import KNeighborsClassifier\nfrom sklearn.discriminant_analysis import LinearDiscriminantAnalysis\nfrom sklearn.naive_bayes import GaussianNB\nfrom sklearn.svm import SVC\nfrom sklearn.ensemble import AdaBoostClassifier\nfrom sklearn.ensemble import GradientBoostingClassifier\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.ensemble import ExtraTreesClassifier",
"Problem Beschreibung:<br>\nDer Fokus dieses Projektes liegt auf dem Datensatz \"sensordaten-10.csv\". Das Problem ist die Vorhersage von guten und schlechten Werkstücken aus den 10 Sensordaten. Jedes Muster ist ein Satz von 10 Zahlen. Die Sensoren decken unterschiedliche Wertebereiche ab.Das Label, das jeder Datenreihe zugeordnet ist, enthält 0 oder 1. Wenn das Werkstück die Beurteilung gut hat steht eine 1 in der Spalte Label, sonst eine 0.<br>\n<b>Aufgabe:</b><br>\nLaden Sie die Daten und verschaffen Sie sich einen ersten Überblick<br>",
"#Laden der Daten [12100 Datensätze mit 10 Sensoren und einer Spalte Label (12100x11)Matrix]\nurl = 'sensordaten-10.csv'\ndatensatz = read_csv(url, sep=';', header=0)",
"<h3>Beschreibende Statistik</h3>",
"# Ausgabe df.shape\nprint(datensatz.shape)\n\n# Ausgabe df.dtypes\n# Spalte enthält die Classifikation R oder M\nset_option('display.max_rows', 50)\nprint(datensatz.dtypes)\n\n# Ausgabe df.head mit vergösserter display width\nset_option('display.width', 100)\nprint(datensatz.head(20))\n\n# Ausgabe df.describe() mit 4 Nachkomma Stellen\nset_option('precision', 4)\nprint(datensatz.describe())\n\n# Ausgabe der Klassen Verteilung in der Spalte 60\nprint(datensatz.groupby('Label').size())\n",
"<h3>Visualisierung der Daten</h3>",
"# Ausgabe Histogramm\npyplot.rcParams[\"figure.figsize\"] = (15,12)\ndatensatz.hist()\npyplot.show()",
"<h3>Univariate Feature Selektion</h3>\n\nUum die Merkmale auszuwählen, die am stärksten mit der Ausgabevariablen verknüpft sind, können Statistische Tests verwendet werden,. Die scikit-learn Bibliothek stellt die SelectKBest Klasse zur Verfügung, die mit einer Reihe von unterschiedlichen statistischen Tests zur Auswahl der relavantesten Features eingesetzt werden kann. Das folgende Beispiel verwendet den Chi-squared (Chi2) statistischen Test für nicht-negative Merkmale, um 5 der besten Merkmale aus den Sensordaten auszuwählen.",
"from numpy import set_printoptions\nfrom sklearn.feature_selection import SelectKBest\nfrom sklearn.feature_selection import chi2\n# Übergabe der Dtaen\narray = datensatz.values\nX = array[:,0:10]\nY = array[:,10]\n# Feature Extraktion\ntest = SelectKBest(score_func=chi2, k=5)\nfit = test.fit(X, Y)\n# Zusammenfassung der Ergebnisse\nset_printoptions(precision=3)\nprint(fit.scores_)\nfeatures = fit.transform(X)\n# Ausgewählte Features\nprint(features[0:9,:])",
"Im Sensor Datensatz sind also die Sensoren Sens-5, Sens-7, Sens-8, Sens-9 und Sens-10 besonders relevant.\n<h3>Rekursive Feature Elimination</h3>\n\nDie rekursive Feature Elimination (oder RFE) funktioniert durch rekursives Entfernen von Attributen und Aufbau eines Modells auf den verbleibenden Attributen. Anhand der Modellgenauigkeit wird ermittelt, welche Attribute (und die Kombination von Attributen) tragen am meisten zur Vorhersage des Zielattributs bei. Das folgende Beispiel verwendet RFE mit dem logistischen Regressionsalgorithmus, um die 3 wichtigsten Features auszuwählen. Die Wahl des Algorithmus spielt keine Rolle, solange er geschickt und konsistent ist.",
"# Rekursives Feature Engineering\n# Laden des Moduls RFE\nfrom sklearn.feature_selection import RFE\n#Verwendung der Logistischen Regression als Algorithmus\nfrom sklearn.linear_model import LogisticRegression\n# Übergabe der Werte in datensatz an ein array2\narray2 = datensatz.values\n# Aufteilen des arrays in abhängige Variable Y und unabhängige Variable X\nX = array2[:,0:10]\nY = array2[:,10]\n# feature extraction\nmodel = LogisticRegression()\nrfe = RFE(model, 3)\nfit = rfe.fit(X, Y)\nprint(\"Num Features: %d\" % fit.n_features_)\nprint(\"Selected Features: %s\" % fit.support_)\nprint(\"Feature Ranking: %s\" % fit.ranking_)",
"Die rekursive Feature Elimination wählt die gleichen Sensoren aus wie bei der univariaten Auswahl.\n<h3>Principal Component Analysis</h3>\n\nDie Principal Component Analysis (oder PCA) verwendet lineare Algebra, um den Datensatz in eine komprimierte Form zu transformieren. Im Allgemeinen wird dies als Datenreduktionstechnik bezeichnet. Eine Eigenschaft von PCA ist, dass wir die Anzahl der Dimensionen oder Hauptkomponenten im transformierten Ergebnis wählen können. Im folgenden Beispiel verwenden wir PCA und wählen 3 Hauptkomponenten aus.",
"# Binärisierung der Daten \n# Laden des Moduls Binarizer aus sklearn.preprocessing\nfrom sklearn.decomposition import PCA\n# Übergabe der Werte in datensatz an ein array3\narray3 = datensatz.values\n# Aufteilen des arrays in abhängige Variable Y und unabhängige Variable X\nX = array3[:,0:10]\nY = array3[:,10]\n# feature extraction\npca = PCA(n_components=3)\nfit = pca.fit(X)\n# summarize components\nprint(\"Explained Variance: %s\" % fit.explained_variance_ratio_)\nprint(fit.components_)",
"<h3>Abschätzung der Bedeutung von Merkmalen</h3>\n\nRandom Forest und Extra Trees können verwendet werden, um die Bedeutung von Merkmalen abzuschätzen. Im folgenden Beispiel konstruieren wir einen ExtraTreesClassifier für den Datensatz der Sonardaten.",
"# Abschätzung der Bedeutung von Merkmalen\n# Laden des Moduls ExtraTreesClassifier aus sklearn.ensemble\nfrom sklearn.ensemble import ExtraTreesClassifier\n# Übergabe der Werte in datensatz an ein array4\narray4 = datensatz.values\n# Aufteilen des arrays in abhängige Variable Y und unabhängige Variable X\nX = array4[:,0:10]\nY = array4[:,10]\n# feature extraction\nmodel = ExtraTreesClassifier()\nmodel.fit(X, Y)\nprint(model.feature_importances_)",
"Die Abschätzung der Bedeutung von Features wählt die gleichen Sensoren aus wie bei der univariaten Auswahl.\n<h3>Weiteres Beispiel</h3>",
"# Feature Importance with Extra Trees Classifier\nfrom sklearn.ensemble import ExtraTreesClassifier\n# Übergabe des Dateinamens an die Variable dateiname\ndateiname = 'pima-indians-diabetes.data.csv'\n# Festlegen der Spalten Namen für den DataFrame\nnamen = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']\n# Einlesen der Daten in einen panda DataFrame mit read_csv()\ndf = read_csv(dateiname, names=namen)\n# Übergabe der Werte in df an ein array5\narray5 = df.values\n# Aufteilen des arrays in abhängige Variable Y und unabhängige Variable X - hier steht die Klasse in Spalte 9\nX = array5[:,0:8]\nY = array5[:,8]\n# feature extraction\nmodel = ExtraTreesClassifier()\nmodel.fit(X, Y)\nprint(model.feature_importances_)",
"<h2>Weiterführende Links:</h2>\n\n<ul>\n<li> https://www.stuttgart.ihk.de\n</ul>\n\n<h2>Weiterführende Literatur:</h2>\n\n<ul>\n<li> https://www.stuttgart.ihk.de\n</ul>\n\n<b>Ansprechpartner IHK-Region Stuttgart:</b><br>\nDipl. Wirtsch-Ing. R. Rank"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
joshamilton/Hamilton_acI_2017
|
code/01-genomeAnnotationAndModelProcessing.ipynb
|
mit
|
[
"Reverse Ecology and Metatranscriptomics of Uncultivated Freshwater Actinobacteria\nGenome Annotation\nKBase (http://kbase.us) is a powerful resource for obtaining genome-scale network reconstructions for microbial genomes. These reconstructions are distributed as SBML files, which must be processed prior to reverse ecology analysis. This notebook describes how to obtain reconstructions from KBase, and how to process them.\nObtaining and Preparing SBML Files\nBriefly, genomes (as fasta files containing unannotated contigs) can be pushed from a computer to KBase. Once there, a KBase Narrative (iPython notebook) can be used to build reconstructions for your genomes. The script code\\pushToKBase\\loadGenomes.pl pushes these genomes to the KBase narratives created for this project. For more details, follow the instructions in code/pushToKbase/README.md.\nGenomes were pushed to four separate narratives:\n\n\nSAGs: KBase Narrative, workspace ID: joshamilton:1452727482251\n\n\nMendota MAGs: KBase Narrative, workspace ID: joshamilton:1452793144933\n\n\nTrout Bog MAGs KBase Narrative, workspace ID: joshamilton:1452798604003\n\n\nMAGs from other research groups KBase Narrative, workspace ID: joshamilton:1452801292037\n\n\nWithin each narrative, the \"Annotate Contigs\" and \"Build Metabolic Model\" KBase apps were run for each genome. Annotated genomes (Genbank format), and models (SBML and tsv formats) were downloaded for each genome and stored in:\n\ndata/refGenomes - genomes and annotations in a variety for formats\nmodels/rawModels - metabolic models in SBML and 'tsv' formats\n\nOnce the genomes were downloaded, the Genbank-formatted genomes were converted to fasta nucleotide (ffn), fasta amino acid (ffa), and gff format, using the scripts described below. All scripts are located in the code/pushToKBase directory:\n\nNo script given - For each genome, this script concatenates Genbank files for each indiviual scaffold, giving a single Genbank file for the genome.\nkBaseGenbankToFasta.py - Converts Genbank files to fasta nucleotide (ffn), fasta amino acid (ffa) format.\nkBaseGenbankToGff.py - Converts Genbank files to gff format.\ncleanUpGFF.pl - Performs some additional processing on the gff files.\nNo script given - KBase SBML and 'tsv' file names were simplified.\n\nProcessing SBML Files\nReconstructions from KBase require further processing before they are suitable for use in reverse ecology. These processing steps include:\n\nReformat gene locus tags, for compatability with the CobraPy software package\nRemove biomass, exchange, transport, spontaneous, DNA/RNA biosynthesis reactions and their corresponding genes\nImport metabolite formulas\nCheck mass- and charge-balancing of reactions in the reconstruction\nReformat reaction and metabolite names, for compatability with the cobrapy software package\n\nA note about post-processing: When KBase detects that one or more subunits of a complex are present, it creates a \"full\" GPR by adding 'Unknown' genes for the other subunits. At the time of analysis, CobraPy lakced functions to remove these genes. As such, these models should not be used to perform any simulations which rely on GPRs.\nAs output, the code returns processed SBML files in the 'processedDataDir' folder. It also returns a summary of the model sizes, in the 'summaryStatsDir' folder.\nThe first chunk of code imports the Python packages necessary for this analysis, and the second calls the function which processes each SBML file and preps it for analysis.",
"# Import Python modules \n# These custom-written modules should have been included with the package\n# distribution. \nfrom reverseEcology import sbmlFunctions as sf\n\n# Define local folder structure for data input and processing.\nrawModelDir = '../models/rawModels'\nprocessedDataDir = '../models/processedModels'\nsummaryStatsDir = '../data/modelStats'\n\nsf.processSBMLforRE(rawModelDir, processedDataDir, summaryStatsDir)",
"Pruning Currency Metabolites\nThe next step is to prune currency metabolites from the metabolic network, in order for the network's directed graph to better reflect physiological metabolic transformations. The approach is similar to one outlined by Ma et al [1,2], and later adopted by Borenstein et al [3].\nBriefly, \"currency metabolites\" are defined as metabolites which serve to transfer functional groups (such as ATP), as well as the functional groups themselves (such as phosphorous). To identify such metabolites in KBase, we scanned the ModelSEED reaction database for metabolites listed in Ma et al [1,2], with the addition of cytochromes and quinones (H+ transfer) and acetyl-CoA/CoA (acetate transfer). The full set of such metabolites are included in the reverseEcology Python package.\nThe function below updates the stoichiometry of all reactions containing these metabolites.\n1. First, all pairs of currency metabolites are removed. The set of such pairs is included in the reverseEcology Python package, in the file packageData\\currencyRemovePairs.txt. \n2. Some currency pairs involved in amino acid metabolism are subject to additional scrutiny, and removed only if a free amino group does not participate in the reaction. This ensures that reactions which synthesize these compounds are retained. The set of such pairs is included in the reverseEcology Python package, in the file packageData\\currencyAminoPairs.txt.\n3. Finally, all metabolites which represent free forms of functional groups are removed (H+, NH4+, CO2, O2, H2O, etc). The set of such pairs is included in the reverseEcology Python package, in the file packageData\\currencyRemoveSingletons.txt. \nThe script below loops over the set of processed models and removes these metabolites from their associated reactions.",
"# Import Python modules \n# These custom-written modules should have been included with the package\n# distribution. \nfrom reverseEcology import sbmlFunctions as sf\n\n# Define local folder structure for data input and processing.\nmodelDir = '../models/processedModels'\nsummaryStatsDir = '../data/modelStats'\n\nsf.pruneCurrencyMetabs(modelDir, summaryStatsDir)",
"Genome Merging\nConverting KBase SBML files to Graphs\nThe first step in reverse ecology analysis is to convert the SBML representation of the metabolic network to a graph. The network is represented as a directed graph, where nodes denote compounds and edges denote reactions. A directed edge from A to B indicates that compound A is a substrate in a reaction which produces compound B. That is, for a given reaction, all the nodes that represent its substrates are connected by directed edges to all the nodes that represent its products.\nThe code below converts the SBML representations to metabolic network graphs.",
"# Import Python modules \n# These custom-written modules should have been included with the package\n# distribution. \nfrom reverseEcology import metadataFunctions as mf\nfrom reverseEcology import sbmlFunctions as sf\n\n# Define local folder structure for data input and processing.\nmodelDir = '../models/processedModels'\nsummaryStatsDir = '../data/modelStats'\n\n# Import the list of models\ndirList = mf.getDirList(modelDir)\n\nmodelStatArray = sf.dirListToAdjacencyList(dirList, modelDir, summaryStatsDir)",
"Merging Network Graphs Belonging to the Same Clade\nAs shown by Borenstein et al [3], reverse ecology analysis is sensitive to genome incompleteness. To overcome this issue, we decided to merge metabolic models of all those genomes belonging to a particular clade.\nThe process begins with a single genome from that clade. For the next genome from that clade, unique metabolic pathways are identified. Those unique pathways are appended to the original graph, giving a metabolic network graph which contains the content of both genomes. The process is repeated, with unique metabolic pathways being appended to the composite network graph until all genomes have been exhausted.\nReverse ecology analysis will then be performed on these merged models.\nThe code below creates the necessary folder structure. Then, it reads in the taxonomy {lineage, clade, tribe} for each genomes and aggregates all genomes belonging to the same tribe.",
"# Import Python modules \n# These custom-written modules should have been included with the package\n# distribution. \nfrom reverseEcology import metadataFunctions as mf\nfrom reverseEcology import graphFunctions as gf\n\n# Define local folder structure for data input and processing.\ngenomeModelDir = '../models/processedModels'\nmergedModelDir = '../models/merged'\nexternalDataDir = '../data/externalData'\ntaxonomyFile= externalDataDir+'/taxonomy.csv'\n\nmergeLevel = 'Clade'\ncladeSampleDict = mf.importTaxonomy(taxonomyFile, mergeLevel)\ngf.createMergedGraph(cladeSampleDict, mergedModelDir, genomeModelDir)",
"References\n\nMa, H., & Zeng, A. P. (2003). Reconstruction of metabolic networks from genome data and analysis of their global structure for various organisms. Bioinformatics, 19(2), 270–277. http://doi.org/10.1093/bioinformatics/19.2.270\nMa, H. W., & Zeng, A.-P. (2003). The connectivity structure, giant strong component and centrality of metabolic networks. Bioinformatics, 19(11), 1423–1430. http://doi.org/10.1093/bioinformatics/btg177\nBorenstein, E., Kupiec, M., Feldman, M. W., & Ruppin, E. (2008). Large-scale reconstruction and phylogenetic analysis of metabolic environments. Proceedings of the National Academy of Sciences, 105(38), 14482–14487. http://doi.org/10.1073/pnas.0806162105"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
neuro-data-science/neuro_data_science
|
python/modeling/modeling_data.ipynb
|
gpl-3.0
|
[
"import scipy.io as si\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport sys\nsys.path.append('../src/')\nimport opencourse.modeling_data_funcs as md\n%matplotlib inline",
"Modeling data involves using observed datapoints to try to make a more general description of patterns that we see. It can be useful to describe the trajectory of a neuron's behavior in time, or to describe the relationship between two variables. Here we will cover the basics of modeling, as well as how we can investigate variability in data and how it affects modeling results.",
"# Load data and pull out important values\ndata = si.loadmat('../../data/StevensonV2.mat')\n\n# Only keep x, y dimensions, transpose to (trials, dims)\nhand_vel = data['handVel'][:2].T\nhand_pos = data['handPos'][:2].T\n# (neurons, trials)\nspikes = data['spikes']\n\n# Remove all times where speeds are very slow\nthreshold = .015\nis_good = np.where(np.linalg.norm(hand_vel, axis=1)**2 > threshold)[0]\nhand_vel = hand_vel[is_good]\nhand_pos = hand_pos[is_good]\nspikes = spikes[:, is_good]\nangle = np.arctan2(hand_vel[:, 0], hand_vel[:, 1])",
"Part 1\nPlot the spike counts as a function of angle. A small amount of meaningless vertical noise has been added to make visualation easier.",
"# Plot Raw Data\nnNeuron = 193\nfig, ax = plt.subplots()\nspikes_noisy = spikes[nNeuron] + 0.75 * np.random.rand(spikes[nNeuron].shape[0])\nmax_s = spikes[nNeuron].max()+1\nax.plot(angle, spikes_noisy, 'r.')\nmd.format_plot(ax, max_s)",
"We'll also plot the mean spiking activity over time below. Calculating the mean across time is already a kind of model. It makes the assumption that the mean is a \"good\" description of spiking activity at any given time point.",
"# Make a simple tuning curve\nangles = np.arange(-np.pi, np.pi, np.pi / 8.)\nn_spikes = np.zeros(len(angles))\nangle_bins = np.digitize(angle, angles)\nfor ii in range(len(angles)):\n mask_angle = angle_bins == (ii + 1)\n n_spikes[ii] = np.mean(spikes[nNeuron, mask_angle])\n\nfig, ax = plt.subplots()\nax.plot(angle, spikes_noisy, 'r.')\nax.plot(angles + np.pi / 16., n_spikes, lw=3)\nmd.format_plot(ax, max_s)",
"Bootstrap error bars\nThe mean is useful, but it also removes a lot of information about the data. In particular, it doesn't tell us anything about how variable the data is. For this, we should calculate error bars.\nIt is possible to calculate error bars analytically, i.e. with mathematical equations, but this requires making assumpts about the distribution of deviations from the mean. Instead, it is recommended to use bootstrapping if possible. This is a method for computationally calculating error bars in order to avoid making as many assumptions about your data. We'll perform this below.",
"n_angle_samples = angle.size\nn_angles = angles.size\nn_boots = 1000\nsimulations = np.zeros([n_boots, n_angles])\n\nfor ii in range(n_boots):\n # Take a random sample of angle values\n ixs = np.random.randint(0, n_angle_samples, n_angle_samples)\n angle_sample = angle[ixs]\n spike_sample = spikes[:, ixs]\n # Group these samples by bins of angle\n angle_bins = np.digitize(angle_sample, angles)\n \n # For each angle, calculate the datapoints corresponding to that angle\n # Take the mean spikes for each bin of angles\n for jj in range(n_angles):\n mask_angle = angle_bins == (jj + 1)\n this_spikes = spike_sample[nNeuron, mask_angle]\n simulations[ii, jj] = np.mean(this_spikes)\n\nfig, ax = plt.subplots()\n_ = ax.plot(angles[:, np.newaxis], simulations.T, color='k', alpha=.01)\n_ = ax.plot(angles, simulations.mean(0), color='b', lw=3)\nmd.format_plot(ax, np.ceil(simulations.max()))",
"As you can see, there is some variability in the calculated mean across bootstrap samples. We can incorporate this variability into our original mean plot by including error bars. We calculate these by taking the 2.5th and 97.5th percentiles of the mean at each timepoint across all of our bootstraps. This is called building a 95% confidence interval.",
"# Plot data + error bars\nclo, chi = np.percentile(simulations, [2.5, 97.5], axis=0)\n\nfig, ax = plt.subplots()\nax.plot(angle, spikes_noisy, 'r.', zorder=-1)\nax.errorbar(angles, n_spikes, yerr=[n_spikes-clo, chi-n_spikes], lw=3)\nmd.format_plot(ax, max_s)",
"Advanced exercise\nDo this for all neurons. Do they actually have cosine tuning as indicated by the research?\nPart 2\nWe can also fit a parameterized model to the spike count. In this case we'll use a Poisson distribution where the rate parameters depends on the exponential of the cosine of the angle away from the arm and a scaling parameters.\n$$P(n; \\theta) = \\frac{\\lambda(\\theta)^n\\exp(-\\lambda(\\theta))}{n!}$$\nwhere\n$$\\lambda = \\exp\\left(\\alpha+\\beta\\cos(\\theta-\\theta_\\text{arm})\\right).$$",
"# This package allows us to perform optimizations\nfrom scipy import optimize as opt",
"We'll use the fmin function in python, which allows us to define an arbitrary \"cost\" function, that is then minimized by tuning model parameters.",
"initial_guess= [.8, 0.1, 4]\nparams = opt.fmin(md.evaluate_score_ExpCos, initial_guess,\n args=(spikes[nNeuron, :], angle))\n\nplt_angle = np.arange(-np.pi, np.pi, np.pi / 80.)\nout = np.exp(params[0] + params[1] * np.cos(plt_angle - params[2]))\nfig, ax = plt.subplots()\nax.plot(angle, spikes_noisy, 'r.')\nax.plot(plt_angle, out, lw=3)\nmd.format_plot(ax, max_s)",
"By optimizing this cost function, the model has uncovered the above structure (blue line) in the data. Does it seem to describe the data well? Try using more complicated model functions and see how it affects the result.\nAdvanced exercise\nIs exponential better than linear-threshold?\nPart 3\nWe can also use more powerful machine learning tools to regress onto the spike count.\nWe'll use Random Forests and Regression models to predict spike count as a function of arm position and velocity. For each of these models we can either regress onto the spike count treating it like a continuous value, or we can predict discreet values for spike count treating it like a classification problem.\nWe'll fit a number of models, then calculate their ability to predict the values of datapoints they were not trained on. This is called \"cross-validating\" your model, which is a crucial component of machine learning.\nThe Random Forest models have the integer parameter n_estimators which increases the complexity of the model. The Regression models have continuous regularization parameters: alpha for Ridge regression (larger = more regularization) and C for Logistic Regression (small = more regularization). Try changing these parameteres and see how it affects training and validation performance.",
"from sklearn.ensemble import RandomForestRegressor as RFR, RandomForestClassifier as RFC\nfrom sklearn.linear_model import Ridge as RR, LogisticRegression as LgR\n\nnNeuron = 0\n# First lets have some meaningful regressors\nY = spikes[nNeuron]\nX = np.hstack((hand_vel, hand_pos))\n\nmodels = [RFR(n_estimators=10), RFC(n_estimators=10),\n RR(alpha=1.), LgR(C=1., solver='lbfgs', multi_class='multinomial')]\nmodel_names = ['Random Forest\\nRegresion', 'Random Forest\\nClassification',\n 'Ridge Regression', 'Logistic Regression']\n\nfolds = 10\nmse = np.zeros((len(models), folds))\nmse_train = np.zeros((len(models), folds))\n\ndef mse_func(y, y_hat):\n return ((y-y_hat)**2).mean()\n\nfor ii in range(folds):\n inds_train = np.arange(Y.size)\n inds_test = inds_train[np.mod(inds_train, folds) == ii]\n inds_train = inds_train[np.logical_not(np.mod(inds_train, folds) == ii)]\n for jj, model in enumerate(models):\n model.fit(X[inds_train], Y[inds_train])\n mse_train[jj, ii] = mse_func(model.predict(X[inds_train]), Y[inds_train])\n mse[jj, ii] = mse_func(model.predict(X[inds_test]), Y[inds_test])\n\nf, ax = plt.subplots(figsize=(10, 4))\nax.plot(np.arange(4)-.1, mse_train, 'r.')\nax.plot(np.arange(4)+.1, mse, 'b.')\nax.plot(-10, 10, 'r.', label='Train mse')\nax.plot(-10, 10, 'b.', label='Validation mse')\nplt.legend(loc='best')\nax.set_xticks(range(4))\nax.set_xticklabels(model_names)\nax.set_ylim(0, 2)\nax.set_ylabel('Mean-squared Error')\n_ = ax.set_xlim(-.5, 3.5)",
"Advanced exercise\nTry adding the arm speed (norm of the velocity vector) as an additional regression variable. Does this improve the model's ability to predict held-out datapoints?"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
GoogleCloudPlatform/vertex-ai-samples
|
notebooks/community/gapic/custom/showcase_custom_tabular_regression_batch_explain.ipynb
|
apache-2.0
|
[
"# Copyright 2020 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"Vertex client library: Custom training tabular regression model for batch prediction with explanation\n<table align=\"left\">\n <td>\n <a href=\"https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/custom/showcase_custom_tabular_regression_batch_explain.ipynb\">\n <img src=\"https://cloud.google.com/ml-engine/images/colab-logo-32px.png\" alt=\"Colab logo\"> Run in Colab\n </a>\n </td>\n <td>\n <a href=\"https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/custom/showcase_custom_tabular_regression_batch_explain.ipynb\">\n <img src=\"https://cloud.google.com/ml-engine/images/github-logo-32px.png\" alt=\"GitHub logo\">\n View on GitHub\n </a>\n </td>\n</table>\n<br/><br/><br/>\nOverview\nThis tutorial demonstrates how to use the Vertex client library for Python to train and deploy a custom tabular regression model for batch prediction with explanation.\nDataset\nThe dataset used for this tutorial is the Boston Housing Prices dataset. The version of the dataset you will use in this tutorial is built into TensorFlow. The trained model predicts the median price of a house in units of 1K USD.\nObjective\nIn this tutorial, you create a custom model, with a training pipeline, from a Python script in a Google prebuilt Docker container using the Vertex client library, and then do a batch prediction with explanations on the uploaded model. You can alternatively create custom models using gcloud command-line tool or online using Google Cloud Console.\nThe steps performed include:\n\nCreate a Vertex custom job for training a model.\nTrain the TensorFlow model.\nRetrieve and load the model artifacts.\nView the model evaluation.\nSet explanation parameters.\nUpload the model as a Vertex Model resource.\nMake a batch prediction with explanations.\n\nCosts\nThis tutorial uses billable components of Google Cloud (GCP):\n\nVertex AI\nCloud Storage\n\nLearn about Vertex AI\npricing and Cloud Storage\npricing, and use the Pricing\nCalculator\nto generate a cost estimate based on your projected usage.\nInstallation\nInstall the latest version of Vertex client library.",
"import os\nimport sys\n\n# Google Cloud Notebook\nif os.path.exists(\"/opt/deeplearning/metadata/env_version\"):\n USER_FLAG = \"--user\"\nelse:\n USER_FLAG = \"\"\n\n! pip3 install -U google-cloud-aiplatform $USER_FLAG",
"Install the latest GA version of google-cloud-storage library as well.",
"! pip3 install -U google-cloud-storage $USER_FLAG",
"Install other packages\nInstall other packages required for this tutorial.",
"! pip3 install -U tabulate $USER_FLAG",
"Restart the kernel\nOnce you've installed the Vertex client library and Google cloud-storage, you need to restart the notebook kernel so it can find the packages.",
"if not os.getenv(\"IS_TESTING\"):\n # Automatically restart kernel after installs\n import IPython\n\n app = IPython.Application.instance()\n app.kernel.do_shutdown(True)",
"Before you begin\nGPU runtime\nMake sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime > Change Runtime Type > GPU\nSet up your Google Cloud project\nThe following steps are required, regardless of your notebook environment.\n\n\nSelect or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.\n\n\nMake sure that billing is enabled for your project.\n\n\nEnable the Vertex APIs and Compute Engine APIs.\n\n\nThe Google Cloud SDK is already installed in Google Cloud Notebook.\n\n\nEnter your project ID in the cell below. Then run the cell to make sure the\nCloud SDK uses the right project for all the commands in this notebook.\n\n\nNote: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.",
"PROJECT_ID = \"[your-project-id]\" # @param {type:\"string\"}\n\nif PROJECT_ID == \"\" or PROJECT_ID is None or PROJECT_ID == \"[your-project-id]\":\n # Get your GCP project id from gcloud\n shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null\n PROJECT_ID = shell_output[0]\n print(\"Project ID:\", PROJECT_ID)\n\n! gcloud config set project $PROJECT_ID",
"Region\nYou can also change the REGION variable, which is used for operations\nthroughout the rest of this notebook. Below are regions supported for Vertex. We recommend that you choose the region closest to you.\n\nAmericas: us-central1\nEurope: europe-west4\nAsia Pacific: asia-east1\n\nYou may not use a multi-regional bucket for training with Vertex. Not all regions provide support for all Vertex services. For the latest support per region, see the Vertex locations documentation",
"REGION = \"us-central1\" # @param {type: \"string\"}",
"Timestamp\nIf you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append onto the name of resources which will be created in this tutorial.",
"from datetime import datetime\n\nTIMESTAMP = datetime.now().strftime(\"%Y%m%d%H%M%S\")",
"Authenticate your Google Cloud account\nIf you are using Google Cloud Notebook, your environment is already authenticated. Skip this step.\nIf you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.\nOtherwise, follow these steps:\nIn the Cloud Console, go to the Create service account key page.\nClick Create service account.\nIn the Service account name field, enter a name, and click Create.\nIn the Grant this service account access to project section, click the Role drop-down list. Type \"Vertex\" into the filter box, and select Vertex Administrator. Type \"Storage Object Admin\" into the filter box, and select Storage Object Admin.\nClick Create. A JSON file that contains your key downloads to your local environment.\nEnter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.",
"# If you are running this notebook in Colab, run this cell and follow the\n# instructions to authenticate your GCP account. This provides access to your\n# Cloud Storage bucket and lets you submit training jobs and prediction\n# requests.\n\n# If on Google Cloud Notebook, then don't execute this code\nif not os.path.exists(\"/opt/deeplearning/metadata/env_version\"):\n if \"google.colab\" in sys.modules:\n from google.colab import auth as google_auth\n\n google_auth.authenticate_user()\n\n # If you are running this notebook locally, replace the string below with the\n # path to your service account key and run this cell to authenticate your GCP\n # account.\n elif not os.getenv(\"IS_TESTING\"):\n %env GOOGLE_APPLICATION_CREDENTIALS ''",
"Create a Cloud Storage bucket\nThe following steps are required, regardless of your notebook environment.\nWhen you submit a custom training job using the Vertex client library, you upload a Python package\ncontaining your training code to a Cloud Storage bucket. Vertex runs\nthe code from this package. In this tutorial, Vertex also saves the\ntrained model that results from your job in the same bucket. You can then\ncreate an Endpoint resource based on this output in order to serve\nonline predictions.\nSet the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.",
"BUCKET_NAME = \"gs://[your-bucket-name]\" # @param {type:\"string\"}\n\nif BUCKET_NAME == \"\" or BUCKET_NAME is None or BUCKET_NAME == \"gs://[your-bucket-name]\":\n BUCKET_NAME = \"gs://\" + PROJECT_ID + \"aip-\" + TIMESTAMP",
"Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.",
"! gsutil mb -l $REGION $BUCKET_NAME",
"Finally, validate access to your Cloud Storage bucket by examining its contents:",
"! gsutil ls -al $BUCKET_NAME",
"Set up variables\nNext, set up some variables used throughout the tutorial.\nImport libraries and define constants\nImport Vertex client library\nImport the Vertex client library into our Python environment.",
"import time\n\nimport google.cloud.aiplatform_v1beta1 as aip\nfrom google.protobuf import json_format\nfrom google.protobuf.json_format import MessageToJson, ParseDict\nfrom google.protobuf.struct_pb2 import Struct, Value",
"Vertex constants\nSetup up the following constants for Vertex:\n\nAPI_ENDPOINT: The Vertex API service endpoint for dataset, model, job, pipeline and endpoint services.\nPARENT: The Vertex location root path for dataset, model, job, pipeline and endpoint resources.",
"# API service endpoint\nAPI_ENDPOINT = \"{}-aiplatform.googleapis.com\".format(REGION)\n\n# Vertex location root path for your dataset, model and endpoint resources\nPARENT = \"projects/\" + PROJECT_ID + \"/locations/\" + REGION",
"Hardware Accelerators\nSet the hardware accelerators (e.g., GPU), if any, for training and prediction.\nSet the variables TRAIN_GPU/TRAIN_NGPU and DEPLOY_GPU/DEPLOY_NGPU to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Telsa K80 GPUs allocated to each VM, you would specify:\n(aip.AcceleratorType.NVIDIA_TESLA_K80, 4)\n\nFor GPU, available accelerators include:\n - aip.AcceleratorType.NVIDIA_TESLA_K80\n - aip.AcceleratorType.NVIDIA_TESLA_P100\n - aip.AcceleratorType.NVIDIA_TESLA_P4\n - aip.AcceleratorType.NVIDIA_TESLA_T4\n - aip.AcceleratorType.NVIDIA_TESLA_V100\nOtherwise specify (None, None) to use a container image to run on a CPU.\nNote: TF releases before 2.3 for GPU support will fail to load the custom model in this tutorial. It is a known issue and fixed in TF 2.3 -- which is caused by static graph ops that are generated in the serving function. If you encounter this issue on your own custom models, use a container image for TF 2.3 with GPU support.",
"if os.getenv(\"IS_TESTING_TRAIN_GPU\"):\n TRAIN_GPU, TRAIN_NGPU = (\n aip.AcceleratorType.NVIDIA_TESLA_K80,\n int(os.getenv(\"IS_TESTING_TRAIN_GPU\")),\n )\nelse:\n TRAIN_GPU, TRAIN_NGPU = (aip.AcceleratorType.NVIDIA_TESLA_K80, 1)\n\nif os.getenv(\"IS_TESTING_DEPOLY_GPU\"):\n DEPLOY_GPU, DEPLOY_NGPU = (\n aip.AcceleratorType.NVIDIA_TESLA_K80,\n int(os.getenv(\"IS_TESTING_DEPOLY_GPU\")),\n )\nelse:\n DEPLOY_GPU, DEPLOY_NGPU = (None, None)",
"Container (Docker) image\nNext, we will set the Docker container images for training and prediction\n\nTensorFlow 1.15\ngcr.io/cloud-aiplatform/training/tf-cpu.1-15:latest\ngcr.io/cloud-aiplatform/training/tf-gpu.1-15:latest\nTensorFlow 2.1\ngcr.io/cloud-aiplatform/training/tf-cpu.2-1:latest\ngcr.io/cloud-aiplatform/training/tf-gpu.2-1:latest\nTensorFlow 2.2\ngcr.io/cloud-aiplatform/training/tf-cpu.2-2:latest\ngcr.io/cloud-aiplatform/training/tf-gpu.2-2:latest\nTensorFlow 2.3\ngcr.io/cloud-aiplatform/training/tf-cpu.2-3:latest\ngcr.io/cloud-aiplatform/training/tf-gpu.2-3:latest\nTensorFlow 2.4\ngcr.io/cloud-aiplatform/training/tf-cpu.2-4:latest\ngcr.io/cloud-aiplatform/training/tf-gpu.2-4:latest\nXGBoost\ngcr.io/cloud-aiplatform/training/xgboost-cpu.1-1\nScikit-learn\ngcr.io/cloud-aiplatform/training/scikit-learn-cpu.0-23:latest\nPytorch\ngcr.io/cloud-aiplatform/training/pytorch-cpu.1-4:latest\ngcr.io/cloud-aiplatform/training/pytorch-cpu.1-5:latest\ngcr.io/cloud-aiplatform/training/pytorch-cpu.1-6:latest\ngcr.io/cloud-aiplatform/training/pytorch-cpu.1-7:latest\n\nFor the latest list, see Pre-built containers for training.\n\nTensorFlow 1.15\ngcr.io/cloud-aiplatform/prediction/tf-cpu.1-15:latest\ngcr.io/cloud-aiplatform/prediction/tf-gpu.1-15:latest\nTensorFlow 2.1\ngcr.io/cloud-aiplatform/prediction/tf2-cpu.2-1:latest\ngcr.io/cloud-aiplatform/prediction/tf2-gpu.2-1:latest\nTensorFlow 2.2\ngcr.io/cloud-aiplatform/prediction/tf2-cpu.2-2:latest\ngcr.io/cloud-aiplatform/prediction/tf2-gpu.2-2:latest\nTensorFlow 2.3\ngcr.io/cloud-aiplatform/prediction/tf2-cpu.2-3:latest\ngcr.io/cloud-aiplatform/prediction/tf2-gpu.2-3:latest\nXGBoost\ngcr.io/cloud-aiplatform/prediction/xgboost-cpu.1-2:latest\ngcr.io/cloud-aiplatform/prediction/xgboost-cpu.1-1:latest\ngcr.io/cloud-aiplatform/prediction/xgboost-cpu.0-90:latest\ngcr.io/cloud-aiplatform/prediction/xgboost-cpu.0-82:latest\nScikit-learn\ngcr.io/cloud-aiplatform/prediction/sklearn-cpu.0-23:latest\ngcr.io/cloud-aiplatform/prediction/sklearn-cpu.0-22:latest\ngcr.io/cloud-aiplatform/prediction/sklearn-cpu.0-20:latest\n\nFor the latest list, see Pre-built containers for prediction",
"if os.getenv(\"IS_TESTING_TF\"):\n TF = os.getenv(\"IS_TESTING_TF\")\nelse:\n TF = \"2-1\"\n\nif TF[0] == \"2\":\n if TRAIN_GPU:\n TRAIN_VERSION = \"tf-gpu.{}\".format(TF)\n else:\n TRAIN_VERSION = \"tf-cpu.{}\".format(TF)\n if DEPLOY_GPU:\n DEPLOY_VERSION = \"tf2-gpu.{}\".format(TF)\n else:\n DEPLOY_VERSION = \"tf2-cpu.{}\".format(TF)\nelse:\n if TRAIN_GPU:\n TRAIN_VERSION = \"tf-gpu.{}\".format(TF)\n else:\n TRAIN_VERSION = \"tf-cpu.{}\".format(TF)\n if DEPLOY_GPU:\n DEPLOY_VERSION = \"tf-gpu.{}\".format(TF)\n else:\n DEPLOY_VERSION = \"tf-cpu.{}\".format(TF)\n\nTRAIN_IMAGE = \"gcr.io/cloud-aiplatform/training/{}:latest\".format(TRAIN_VERSION)\nDEPLOY_IMAGE = \"gcr.io/cloud-aiplatform/prediction/{}:latest\".format(DEPLOY_VERSION)\n\nprint(\"Training:\", TRAIN_IMAGE, TRAIN_GPU, TRAIN_NGPU)\nprint(\"Deployment:\", DEPLOY_IMAGE, DEPLOY_GPU, DEPLOY_NGPU)",
"Machine Type\nNext, set the machine type to use for training and prediction.\n\nSet the variables TRAIN_COMPUTE and DEPLOY_COMPUTE to configure the compute resources for the VMs you will use for for training and prediction.\nmachine type\nn1-standard: 3.75GB of memory per vCPU.\nn1-highmem: 6.5GB of memory per vCPU\nn1-highcpu: 0.9 GB of memory per vCPU\n\n\nvCPUs: number of [2, 4, 8, 16, 32, 64, 96 ]\n\nNote: The following is not supported for training:\n\nstandard: 2 vCPUs\nhighcpu: 2, 4 and 8 vCPUs\n\nNote: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs.",
"if os.getenv(\"IS_TESTING_TRAIN_MACHINE\"):\n MACHINE_TYPE = os.getenv(\"IS_TESTING_TRAIN_MACHINE\")\nelse:\n MACHINE_TYPE = \"n1-standard\"\n\nVCPU = \"4\"\nTRAIN_COMPUTE = MACHINE_TYPE + \"-\" + VCPU\nprint(\"Train machine type\", TRAIN_COMPUTE)\n\nif os.getenv(\"IS_TESTING_DEPLOY_MACHINE\"):\n MACHINE_TYPE = os.getenv(\"IS_TESTING_DEPLOY_MACHINE\")\nelse:\n MACHINE_TYPE = \"n1-standard\"\n\nVCPU = \"4\"\nDEPLOY_COMPUTE = MACHINE_TYPE + \"-\" + VCPU\nprint(\"Deploy machine type\", DEPLOY_COMPUTE)",
"Tutorial\nNow you are ready to start creating your own custom model and training for Boston Housing.\nSet up clients\nThe Vertex client library works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the Vertex server.\nYou will use different clients in this tutorial for different steps in the workflow. So set them all up upfront.\n\nModel Service for Model resources.\nEndpoint Service for deployment.\nJob Service for batch jobs and custom training.\nPrediction Service for serving.",
"# client options same for all services\nclient_options = {\"api_endpoint\": API_ENDPOINT}\n\n\ndef create_job_client():\n client = aip.JobServiceClient(client_options=client_options)\n return client\n\n\ndef create_model_client():\n client = aip.ModelServiceClient(client_options=client_options)\n return client\n\n\ndef create_endpoint_client():\n client = aip.EndpointServiceClient(client_options=client_options)\n return client\n\n\ndef create_prediction_client():\n client = aip.PredictionServiceClient(client_options=client_options)\n return client\n\n\nclients = {}\nclients[\"job\"] = create_job_client()\nclients[\"model\"] = create_model_client()\nclients[\"endpoint\"] = create_endpoint_client()\nclients[\"prediction\"] = create_prediction_client()\n\nfor client in clients.items():\n print(client)",
"Train a model\nThere are two ways you can train a custom model using a container image:\n\n\nUse a Google Cloud prebuilt container. If you use a prebuilt container, you will additionally specify a Python package to install into the container image. This Python package contains your code for training a custom model.\n\n\nUse your own custom container image. If you use your own container, the container needs to contain your code for training a custom model.\n\n\nPrepare your custom job specification\nNow that your clients are ready, your first step is to create a Job Specification for your custom training job. The job specification will consist of the following:\n\nworker_pool_spec : The specification of the type of machine(s) you will use for training and how many (single or distributed)\npython_package_spec : The specification of the Python package to be installed with the pre-built container.\n\nPrepare your machine specification\nNow define the machine specification for your custom training job. This tells Vertex what type of machine instance to provision for the training.\n - machine_type: The type of GCP instance to provision -- e.g., n1-standard-8.\n - accelerator_type: The type, if any, of hardware accelerator. In this tutorial if you previously set the variable TRAIN_GPU != None, you are using a GPU; otherwise you will use a CPU.\n - accelerator_count: The number of accelerators.",
"if TRAIN_GPU:\n machine_spec = {\n \"machine_type\": TRAIN_COMPUTE,\n \"accelerator_type\": TRAIN_GPU,\n \"accelerator_count\": TRAIN_NGPU,\n }\nelse:\n machine_spec = {\"machine_type\": TRAIN_COMPUTE, \"accelerator_count\": 0}",
"Prepare your disk specification\n(optional) Now define the disk specification for your custom training job. This tells Vertex what type and size of disk to provision in each machine instance for the training.\n\nboot_disk_type: Either SSD or Standard. SSD is faster, and Standard is less expensive. Defaults to SSD.\nboot_disk_size_gb: Size of disk in GB.",
"DISK_TYPE = \"pd-ssd\" # [ pd-ssd, pd-standard]\nDISK_SIZE = 200 # GB\n\ndisk_spec = {\"boot_disk_type\": DISK_TYPE, \"boot_disk_size_gb\": DISK_SIZE}",
"Define the worker pool specification\nNext, you define the worker pool specification for your custom training job. The worker pool specification will consist of the following:\n\nreplica_count: The number of instances to provision of this machine type.\nmachine_spec: The hardware specification.\n\ndisk_spec : (optional) The disk storage specification.\n\n\npython_package: The Python training package to install on the VM instance(s) and which Python module to invoke, along with command line arguments for the Python module.\n\n\nLet's dive deeper now into the python package specification:\n-executor_image_spec: This is the docker image which is configured for your custom training job.\n-package_uris: This is a list of the locations (URIs) of your python training packages to install on the provisioned instance. The locations need to be in a Cloud Storage bucket. These can be either individual python files or a zip (archive) of an entire package. In the later case, the job service will unzip (unarchive) the contents into the docker image.\n-python_module: The Python module (script) to invoke for running the custom training job. In this example, you will be invoking trainer.task.py -- note that it was not neccessary to append the .py suffix.\n-args: The command line arguments to pass to the corresponding Pythom module. In this example, you will be setting:\n - \"--model-dir=\" + MODEL_DIR : The Cloud Storage location where to store the model artifacts. There are two ways to tell the training script where to save the model artifacts:\n - direct: You pass the Cloud Storage location as a command line argument to your training script (set variable DIRECT = True), or\n - indirect: The service passes the Cloud Storage location as the environment variable AIP_MODEL_DIR to your training script (set variable DIRECT = False). In this case, you tell the service the model artifact location in the job specification.\n - \"--epochs=\" + EPOCHS: The number of epochs for training.\n - \"--steps=\" + STEPS: The number of steps (batches) per epoch.\n - \"--distribute=\" + TRAIN_STRATEGY\" : The training distribution strategy to use for single or distributed training.\n - \"single\": single device.\n - \"mirror\": all GPU devices on a single compute instance.\n - \"multi\": all GPU devices on all compute instances.",
"JOB_NAME = \"custom_job_\" + TIMESTAMP\nMODEL_DIR = \"{}/{}\".format(BUCKET_NAME, JOB_NAME)\n\nif not TRAIN_NGPU or TRAIN_NGPU < 2:\n TRAIN_STRATEGY = \"single\"\nelse:\n TRAIN_STRATEGY = \"mirror\"\n\nEPOCHS = 20\nSTEPS = 100\n\nDIRECT = True\nif DIRECT:\n CMDARGS = [\n \"--model-dir=\" + MODEL_DIR,\n \"--epochs=\" + str(EPOCHS),\n \"--steps=\" + str(STEPS),\n \"--distribute=\" + TRAIN_STRATEGY,\n ]\nelse:\n CMDARGS = [\n \"--epochs=\" + str(EPOCHS),\n \"--steps=\" + str(STEPS),\n \"--distribute=\" + TRAIN_STRATEGY,\n ]\n\nworker_pool_spec = [\n {\n \"replica_count\": 1,\n \"machine_spec\": machine_spec,\n \"disk_spec\": disk_spec,\n \"python_package_spec\": {\n \"executor_image_uri\": TRAIN_IMAGE,\n \"package_uris\": [BUCKET_NAME + \"/trainer_boston.tar.gz\"],\n \"python_module\": \"trainer.task\",\n \"args\": CMDARGS,\n },\n }\n]",
"Assemble a job specification\nNow assemble the complete description for the custom job specification:\n\ndisplay_name: The human readable name you assign to this custom job.\njob_spec: The specification for the custom job.\nworker_pool_specs: The specification for the machine VM instances.\nbase_output_directory: This tells the service the Cloud Storage location where to save the model artifacts (when variable DIRECT = False). The service will then pass the location to the training script as the environment variable AIP_MODEL_DIR, and the path will be of the form: <output_uri_prefix>/model",
"if DIRECT:\n job_spec = {\"worker_pool_specs\": worker_pool_spec}\nelse:\n job_spec = {\n \"worker_pool_specs\": worker_pool_spec,\n \"base_output_directory\": {\"output_uri_prefix\": MODEL_DIR},\n }\n\ncustom_job = {\"display_name\": JOB_NAME, \"job_spec\": job_spec}",
"Examine the training package\nPackage layout\nBefore you start the training, you will look at how a Python package is assembled for a custom training job. When unarchived, the package contains the following directory/file layout.\n\nPKG-INFO\nREADME.md\nsetup.cfg\nsetup.py\ntrainer\n__init__.py\ntask.py\n\nThe files setup.cfg and setup.py are the instructions for installing the package into the operating environment of the Docker image.\nThe file trainer/task.py is the Python script for executing the custom training job. Note, when we referred to it in the worker pool specification, we replace the directory slash with a dot (trainer.task) and dropped the file suffix (.py).\nPackage Assembly\nIn the following cells, you will assemble the training package.",
"# Make folder for Python training script\n! rm -rf custom\n! mkdir custom\n\n# Add package information\n! touch custom/README.md\n\nsetup_cfg = \"[egg_info]\\n\\ntag_build =\\n\\ntag_date = 0\"\n! echo \"$setup_cfg\" > custom/setup.cfg\n\nsetup_py = \"import setuptools\\n\\nsetuptools.setup(\\n\\n install_requires=[\\n\\n 'tensorflow_datasets==1.3.0',\\n\\n ],\\n\\n packages=setuptools.find_packages())\"\n! echo \"$setup_py\" > custom/setup.py\n\npkg_info = \"Metadata-Version: 1.0\\n\\nName: Boston Housing tabular regression\\n\\nVersion: 0.0.0\\n\\nSummary: Demostration training script\\n\\nHome-page: www.google.com\\n\\nAuthor: Google\\n\\nAuthor-email: aferlitsch@google.com\\n\\nLicense: Public\\n\\nDescription: Demo\\n\\nPlatform: Vertex\"\n! echo \"$pkg_info\" > custom/PKG-INFO\n\n# Make the training subfolder\n! mkdir custom/trainer\n! touch custom/trainer/__init__.py",
"Task.py contents\nIn the next cell, you write the contents of the training script task.py. I won't go into detail, it's just there for you to browse. In summary:\n\nGet the directory where to save the model artifacts from the command line (--model_dir), and if not specified, then from the environment variable AIP_MODEL_DIR.\nLoads Boston Housing dataset from TF.Keras builtin datasets\nBuilds a simple deep neural network model using TF.Keras model API.\nCompiles the model (compile()).\nSets a training distribution strategy according to the argument args.distribute.\nTrains the model (fit()) with epochs specified by args.epochs.\nSaves the trained model (save(args.model_dir)) to the specified model directory.\nSaves the maximum value for each feature f.write(str(params)) to the specified parameters file.",
"%%writefile custom/trainer/task.py\n# Single, Mirror and Multi-Machine Distributed Training for Boston Housing\n\nimport tensorflow_datasets as tfds\nimport tensorflow as tf\nfrom tensorflow.python.client import device_lib\nimport numpy as np\nimport argparse\nimport os\nimport sys\ntfds.disable_progress_bar()\n\nparser = argparse.ArgumentParser()\nparser.add_argument('--model-dir', dest='model_dir',\n default=os.getenv('AIP_MODEL_DIR'), type=str, help='Model dir.')\nparser.add_argument('--lr', dest='lr',\n default=0.001, type=float,\n help='Learning rate.')\nparser.add_argument('--epochs', dest='epochs',\n default=20, type=int,\n help='Number of epochs.')\nparser.add_argument('--steps', dest='steps',\n default=100, type=int,\n help='Number of steps per epoch.')\nparser.add_argument('--distribute', dest='distribute', type=str, default='single',\n help='distributed training strategy')\nparser.add_argument('--param-file', dest='param_file',\n default='/tmp/param.txt', type=str,\n help='Output file for parameters')\nargs = parser.parse_args()\n\nprint('Python Version = {}'.format(sys.version))\nprint('TensorFlow Version = {}'.format(tf.__version__))\nprint('TF_CONFIG = {}'.format(os.environ.get('TF_CONFIG', 'Not found')))\n\n# Single Machine, single compute device\nif args.distribute == 'single':\n if tf.test.is_gpu_available():\n strategy = tf.distribute.OneDeviceStrategy(device=\"/gpu:0\")\n else:\n strategy = tf.distribute.OneDeviceStrategy(device=\"/cpu:0\")\n# Single Machine, multiple compute device\nelif args.distribute == 'mirror':\n strategy = tf.distribute.MirroredStrategy()\n# Multiple Machine, multiple compute device\nelif args.distribute == 'multi':\n strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()\n\n# Multi-worker configuration\nprint('num_replicas_in_sync = {}'.format(strategy.num_replicas_in_sync))\n\n\ndef make_dataset():\n\n # Scaling Boston Housing data features\n def scale(feature):\n max = np.max(feature)\n feature = (feature / max).astype(np.float)\n return feature, max\n\n (x_train, y_train), (x_test, y_test) = tf.keras.datasets.boston_housing.load_data(\n path=\"boston_housing.npz\", test_split=0.2, seed=113\n )\n params = []\n for _ in range(13):\n x_train[_], max = scale(x_train[_])\n x_test[_], _ = scale(x_test[_])\n params.append(max)\n\n # store the normalization (max) value for each feature\n with tf.io.gfile.GFile(args.param_file, 'w') as f:\n f.write(str(params))\n return (x_train, y_train), (x_test, y_test)\n\n\n# Build the Keras model\ndef build_and_compile_dnn_model():\n model = tf.keras.Sequential([\n tf.keras.layers.Dense(128, activation='relu', input_shape=(13,)),\n tf.keras.layers.Dense(128, activation='relu'),\n tf.keras.layers.Dense(1, activation='linear')\n ])\n model.compile(\n loss='mse',\n optimizer=tf.keras.optimizers.RMSprop(learning_rate=args.lr))\n return model\n\nNUM_WORKERS = strategy.num_replicas_in_sync\n# Here the batch size scales up by number of workers since\n# `tf.data.Dataset.batch` expects the global batch size.\nBATCH_SIZE = 16\nGLOBAL_BATCH_SIZE = BATCH_SIZE * NUM_WORKERS\n\nwith strategy.scope():\n # Creation of dataset, and model building/compiling need to be within\n # `strategy.scope()`.\n model = build_and_compile_dnn_model()\n\n# Train the model\n(x_train, y_train), (x_test, y_test) = make_dataset()\nmodel.fit(x_train, y_train, epochs=args.epochs, batch_size=GLOBAL_BATCH_SIZE)\nmodel.save(args.model_dir)",
"Store training script on your Cloud Storage bucket\nNext, you package the training folder into a compressed tar ball, and then store it in your Cloud Storage bucket.",
"! rm -f custom.tar custom.tar.gz\n! tar cvf custom.tar custom\n! gzip custom.tar\n! gsutil cp custom.tar.gz $BUCKET_NAME/trainer_boston.tar.gz",
"Train the model\nNow start the training of your custom training job on Vertex. Use this helper function create_custom_job, which takes the following parameter:\n-custom_job: The specification for the custom job.\nThe helper function calls job client service's create_custom_job method, with the following parameters:\n-parent: The Vertex location path to Dataset, Model and Endpoint resources.\n-custom_job: The specification for the custom job.\nYou will display a handful of the fields returned in response object, with the two that are of most interest are:\nresponse.name: The Vertex fully qualified identifier assigned to this custom training job. You save this identifier for using in subsequent steps.\nresponse.state: The current state of the custom training job.",
"def create_custom_job(custom_job):\n response = clients[\"job\"].create_custom_job(parent=PARENT, custom_job=custom_job)\n print(\"name:\", response.name)\n print(\"display_name:\", response.display_name)\n print(\"state:\", response.state)\n print(\"create_time:\", response.create_time)\n print(\"update_time:\", response.update_time)\n return response\n\n\nresponse = create_custom_job(custom_job)",
"Now get the unique identifier for the custom job you created.",
"# The full unique ID for the custom job\njob_id = response.name\n# The short numeric ID for the custom job\njob_short_id = job_id.split(\"/\")[-1]\n\nprint(job_id)",
"Get information on a custom job\nNext, use this helper function get_custom_job, which takes the following parameter:\n\nname: The Vertex fully qualified identifier for the custom job.\n\nThe helper function calls the job client service'sget_custom_job method, with the following parameter:\n\nname: The Vertex fully qualified identifier for the custom job.\n\nIf you recall, you got the Vertex fully qualified identifier for the custom job in the response.name field when you called the create_custom_job method, and saved the identifier in the variable job_id.",
"def get_custom_job(name, silent=False):\n response = clients[\"job\"].get_custom_job(name=name)\n if silent:\n return response\n\n print(\"name:\", response.name)\n print(\"display_name:\", response.display_name)\n print(\"state:\", response.state)\n print(\"create_time:\", response.create_time)\n print(\"update_time:\", response.update_time)\n return response\n\n\nresponse = get_custom_job(job_id)",
"Deployment\nTraining the above model may take upwards of 20 minutes time.\nOnce your model is done training, you can calculate the actual time it took to train the model by subtracting end_time from start_time. For your model, we will need to know the location of the saved model, which the Python script saved in your local Cloud Storage bucket at MODEL_DIR + '/saved_model.pb'.",
"while True:\n response = get_custom_job(job_id, True)\n if response.state != aip.JobState.JOB_STATE_SUCCEEDED:\n print(\"Training job has not completed:\", response.state)\n model_path_to_deploy = None\n if response.state == aip.JobState.JOB_STATE_FAILED:\n break\n else:\n if not DIRECT:\n MODEL_DIR = MODEL_DIR + \"/model\"\n model_path_to_deploy = MODEL_DIR\n print(\"Training Time:\", response.update_time - response.create_time)\n break\n time.sleep(60)\n\nprint(\"model_to_deploy:\", model_path_to_deploy)",
"Load the saved model\nYour model is stored in a TensorFlow SavedModel format in a Cloud Storage bucket. Now load it from the Cloud Storage bucket, and then you can do some things, like evaluate the model, and do a prediction.\nTo load, you use the TF.Keras model.load_model() method passing it the Cloud Storage path where the model is saved -- specified by MODEL_DIR.",
"import tensorflow as tf\n\nmodel = tf.keras.models.load_model(MODEL_DIR)",
"Evaluate the model\nNow let's find out how good the model is.\nLoad evaluation data\nYou will load the Boston Housing test (holdout) data from tf.keras.datasets, using the method load_data(). This will return the dataset as a tuple of two elements. The first element is the training data and the second is the test data. Each element is also a tuple of two elements: the feature data, and the corresponding labels (median value of owner-occupied home).\nYou don't need the training data, and hence why we loaded it as (_, _).\nBefore you can run the data through evaluation, you need to preprocess it:\nx_test:\n1. Normalize (rescaling) the data in each column by dividing each value by the maximum value of that column. This will replace each single value with a 32-bit floating point number between 0 and 1.",
"import numpy as np\nfrom tensorflow.keras.datasets import boston_housing\n\n(_, _), (x_test, y_test) = boston_housing.load_data(\n path=\"boston_housing.npz\", test_split=0.2, seed=113\n)\n\n\ndef scale(feature):\n max = np.max(feature)\n feature = (feature / max).astype(np.float32)\n return feature\n\n\n# Let's save one data item that has not been scaled\nx_test_notscaled = x_test[0:1].copy()\n\nfor _ in range(13):\n x_test[_] = scale(x_test[_])\nx_test = x_test.astype(np.float32)\n\nprint(x_test.shape, x_test.dtype, y_test.shape)\nprint(\"scaled\", x_test[0])\nprint(\"unscaled\", x_test_notscaled)",
"Perform the model evaluation\nNow evaluate how well the model in the custom job did.",
"model.evaluate(x_test, y_test)",
"Upload the model for serving\nNext, you will upload your TF.Keras model from the custom job to Vertex Model service, which will create a Vertex Model resource for your custom model. During upload, you need to define a serving function to convert data to the format your model expects. If you send encoded data to Vertex, your serving function ensures that the data is decoded on the model server before it is passed as input to your model.\nHow does the serving function work\nWhen you send a request to an online prediction server, the request is received by a HTTP server. The HTTP server extracts the prediction request from the HTTP request content body. The extracted prediction request is forwarded to the serving function. For Google pre-built prediction containers, the request content is passed to the serving function as a tf.string.\nThe serving function consists of two parts:\n\npreprocessing function:\nConverts the input (tf.string) to the input shape and data type of the underlying model (dynamic graph).\nPerforms the same preprocessing of the data that was done during training the underlying model -- e.g., normalizing, scaling, etc.\npost-processing function:\nConverts the model output to format expected by the receiving application -- e.q., compresses the output.\nPackages the output for the the receiving application -- e.g., add headings, make JSON object, etc.\n\nBoth the preprocessing and post-processing functions are converted to static graphs which are fused to the model. The output from the underlying model is passed to the post-processing function. The post-processing function passes the converted/packaged output back to the HTTP server. The HTTP server returns the output as the HTTP response content.\nOne consideration you need to consider when building serving functions for TF.Keras models is that they run as static graphs. That means, you cannot use TF graph operations that require a dynamic graph. If you do, you will get an error during the compile of the serving function which will indicate that you are using an EagerTensor which is not supported.\nGet the serving function signature\nYou can get the signatures of your model's input and output layers by reloading the model into memory, and querying it for the signatures corresponding to each layer.\nWhen making a prediction request, you need to route the request to the serving function instead of the model, so you need to know the input layer name of the serving function -- which you will use later when you make a prediction request.\nYou also need to know the name of the serving function's input and output layer for constructing the explanation metadata -- which is discussed subsequently.",
"loaded = tf.saved_model.load(model_path_to_deploy)\n\nserving_input = list(\n loaded.signatures[\"serving_default\"].structured_input_signature[1].keys()\n)[0]\nprint(\"Serving function input:\", serving_input)\nserving_output = list(loaded.signatures[\"serving_default\"].structured_outputs.keys())[0]\nprint(\"Serving function output:\", serving_output)\n\ninput_name = model.input.name\nprint(\"Model input name:\", input_name)\noutput_name = model.output.name\nprint(\"Model output name:\", output_name)",
"Explanation Specification\nTo get explanations when doing a prediction, you must enable the explanation capability and set corresponding settings when you upload your custom model to an Vertex Model resource. These settings are referred to as the explanation metadata, which consists of:\n\nparameters: This is the specification for the explainability algorithm to use for explanations on your model. You can choose between:\nShapley - Note, not recommended for image data -- can be very long running\nXRAI\nIntegrated Gradients\nmetadata: This is the specification for how the algoithm is applied on your custom model.\n\nExplanation Parameters\nLet's first dive deeper into the settings for the explainability algorithm.\nShapley\nAssigns credit for the outcome to each feature, and considers different permutations of the features. This method provides a sampling approximation of exact Shapley values.\nUse Cases:\n - Classification and regression on tabular data.\nParameters:\n\npath_count: This is the number of paths over the features that will be processed by the algorithm. An exact approximation of the Shapley values requires M! paths, where M is the number of features. For the CIFAR10 dataset, this would be 784 (28*28).\n\nFor any non-trival number of features, this is too compute expensive. You can reduce the number of paths over the features to M * path_count.\nIntegrated Gradients\nA gradients-based method to efficiently compute feature attributions with the same axiomatic properties as the Shapley value.\nUse Cases:\n - Classification and regression on tabular data.\n - Classification on image data.\nParameters:\n\nstep_count: This is the number of steps to approximate the remaining sum. The more steps, the more accurate the integral approximation. The general rule of thumb is 50 steps, but as you increase so does the compute time.\n\nXRAI\nBased on the integrated gradients method, XRAI assesses overlapping regions of the image to create a saliency map, which highlights relevant regions of the image rather than pixels.\nUse Cases:\n\nClassification on image data.\n\nParameters:\n\nstep_count: This is the number of steps to approximate the remaining sum. The more steps, the more accurate the integral approximation. The general rule of thumb is 50 steps, but as you increase so does the compute time.\n\nIn the next code cell, set the variable XAI to which explainabilty algorithm you will use on your custom model.",
"XAI = \"ig\" # [ shapley, ig, xrai ]\n\nif XAI == \"shapley\":\n PARAMETERS = {\"sampled_shapley_attribution\": {\"path_count\": 10}}\nelif XAI == \"ig\":\n PARAMETERS = {\"integrated_gradients_attribution\": {\"step_count\": 50}}\nelif XAI == \"xrai\":\n PARAMETERS = {\"xrai_attribution\": {\"step_count\": 50}}\n\nparameters = aip.ExplanationParameters(PARAMETERS)",
"Explanation Metadata\nLet's first dive deeper into the explanation metadata, which consists of:\n\n\noutputs: A scalar value in the output to attribute -- what to explain. For example, in a probability output [0.1, 0.2, 0.7] for classification, one wants an explanation for 0.7. Consider the following formulae, where the output is y and that is what we want to explain.\ny = f(x)\n\n\nConsider the following formulae, where the outputs are y and z. Since we can only do attribution for one scalar value, we have to pick whether we want to explain the output y or z. Assume in this example the model is object detection and y and z are the bounding box and the object classification. You would want to pick which of the two outputs to explain.\ny, z = f(x)\n\nThe dictionary format for outputs is:\n{ \"outputs\": { \"[your_display_name]\":\n \"output_tensor_name\": [layer]\n }\n}\n\n<blockquote>\n - [your_display_name]: A human readable name you assign to the output to explain. A common example is \"probability\".<br/>\n - \"output_tensor_name\": The key/value field to identify the output layer to explain. <br/>\n - [layer]: The output layer to explain. In a single task model, like a tabular regressor, it is the last (topmost) layer in the model.\n</blockquote>\n\n\n\ninputs: The features for attribution -- how they contributed to the output. Consider the following formulae, where a and b are the features. We have to pick which features to explain how the contributed. Assume that this model is deployed for A/B testing, where a are the data_items for the prediction and b identifies whether the model instance is A or B. You would want to pick a (or some subset of) for the features, and not b since it does not contribute to the prediction.\ny = f(a,b)\n\n\nThe minimum dictionary format for inputs is:\n{ \"inputs\": { \"[your_display_name]\":\n \"input_tensor_name\": [layer]\n }\n}\n\n<blockquote>\n - [your_display_name]: A human readable name you assign to the input to explain. A common example is \"features\".<br/>\n - \"input_tensor_name\": The key/value field to identify the input layer for the feature attribution. <br/>\n - [layer]: The input layer for feature attribution. In a single input tensor model, it is the first (bottom-most) layer in the model.\n</blockquote>\n\nSince the inputs to the model are tabular, you can specify the following two additional fields as reporting/visualization aids:\n<blockquote>\n - \"modality\": \"image\": Indicates the field values are image data.\n</blockquote>\n\nSince the inputs to the model are tabular, you can specify the following two additional fields as reporting/visualization aids:\n<blockquote>\n - \"encoding\": \"BAG_OF_FEATURES\" : Indicates that the inputs are set of tabular features.<br/>\n - \"index_feature_mapping\": [ feature-names ] : A list of human readable names for each feature. For this example, we use the feature names specified in the dataset.<br/>\n - \"modality\": \"numeric\": Indicates the field values are numeric.\n</blockquote>",
"INPUT_METADATA = {\n \"input_tensor_name\": serving_input,\n \"encoding\": \"BAG_OF_FEATURES\",\n \"modality\": \"numeric\",\n \"index_feature_mapping\": [\n \"crim\",\n \"zn\",\n \"indus\",\n \"chas\",\n \"nox\",\n \"rm\",\n \"age\",\n \"dis\",\n \"rad\",\n \"tax\",\n \"ptratio\",\n \"b\",\n \"lstat\",\n ],\n}\n\nOUTPUT_METADATA = {\"output_tensor_name\": serving_output}\n\ninput_metadata = aip.ExplanationMetadata.InputMetadata(INPUT_METADATA)\noutput_metadata = aip.ExplanationMetadata.OutputMetadata(OUTPUT_METADATA)\n\nmetadata = aip.ExplanationMetadata(\n inputs={\"features\": input_metadata}, outputs={\"medv\": output_metadata}\n)\n\nexplanation_spec = aip.ExplanationSpec(metadata=metadata, parameters=parameters)",
"Upload the model\nUse this helper function upload_model to upload your model, stored in SavedModel format, up to the Model service, which will instantiate a Vertex Model resource instance for your model. Once you've done that, you can use the Model resource instance in the same way as any other Vertex Model resource instance, such as deploying to an Endpoint resource for serving predictions.\nThe helper function takes the following parameters:\n\ndisplay_name: A human readable name for the Endpoint service.\nimage_uri: The container image for the model deployment.\nmodel_uri: The Cloud Storage path to our SavedModel artificat. For this tutorial, this is the Cloud Storage location where the trainer/task.py saved the model artifacts, which we specified in the variable MODEL_DIR.\n\nThe helper function calls the Model client service's method upload_model, which takes the following parameters:\n\nparent: The Vertex location root path for Dataset, Model and Endpoint resources.\nmodel: The specification for the Vertex Model resource instance.\n\nLet's now dive deeper into the Vertex model specification model. This is a dictionary object that consists of the following fields:\n\ndisplay_name: A human readable name for the Model resource.\nmetadata_schema_uri: Since your model was built without an Vertex Dataset resource, you will leave this blank ('').\nartificat_uri: The Cloud Storage path where the model is stored in SavedModel format.\ncontainer_spec: This is the specification for the Docker container that will be installed on the Endpoint resource, from which the Model resource will serve predictions. Use the variable you set earlier DEPLOY_GPU != None to use a GPU; otherwise only a CPU is allocated.\nexplanation_spec: This is the specification for enabling explainability for your model.\n\nUploading a model into a Vertex Model resource returns a long running operation, since it may take a few moments. You call response.result(), which is a synchronous call and will return when the Vertex Model resource is ready.\nThe helper function returns the Vertex fully qualified identifier for the corresponding Vertex Model instance upload_model_response.model. You will save the identifier for subsequent steps in the variable model_to_deploy_id.",
"IMAGE_URI = DEPLOY_IMAGE\n\n\ndef upload_model(display_name, image_uri, model_uri):\n\n model = aip.Model(\n display_name=display_name,\n artifact_uri=model_uri,\n metadata_schema_uri=\"\",\n explanation_spec=explanation_spec,\n container_spec={\"image_uri\": image_uri},\n )\n\n response = clients[\"model\"].upload_model(parent=PARENT, model=model)\n print(\"Long running operation:\", response.operation.name)\n upload_model_response = response.result(timeout=180)\n print(\"upload_model_response\")\n print(\" model:\", upload_model_response.model)\n return upload_model_response.model\n\n\nmodel_to_deploy_id = upload_model(\n \"boston-\" + TIMESTAMP, IMAGE_URI, model_path_to_deploy\n)",
"Get Model resource information\nNow let's get the model information for just your model. Use this helper function get_model, with the following parameter:\n\nname: The Vertex unique identifier for the Model resource.\n\nThis helper function calls the Vertex Model client service's method get_model, with the following parameter:\n\nname: The Vertex unique identifier for the Model resource.",
"def get_model(name):\n response = clients[\"model\"].get_model(name=name)\n print(response)\n\n\nget_model(model_to_deploy_id)",
"Model deployment for batch prediction\nNow deploy the trained Vertex Model resource you created for batch prediction. This differs from deploying a Model resource for on-demand prediction.\nFor online prediction, you:\n\n\nCreate an Endpoint resource for deploying the Model resource to.\n\n\nDeploy the Model resource to the Endpoint resource.\n\n\nMake online prediction requests to the Endpoint resource.\n\n\nFor batch-prediction, you:\n\n\nCreate a batch prediction job.\n\n\nThe job service will provision resources for the batch prediction request.\n\n\nThe results of the batch prediction request are returned to the caller.\n\n\nThe job service will unprovision the resoures for the batch prediction request.\n\n\nMake a batch prediction request\nNow do a batch prediction to your deployed model.",
"test_item_1 = x_test[0]\ntest_label_1 = y_test[0]\ntest_item_2 = x_test[1]\ntest_label_2 = y_test[1]\nprint(test_item_1.shape)",
"Make the batch input file\nNow make a batch input file, which you will store in your local Cloud Storage bucket. Each instance in the prediction request is a dictionary entry of the form:\n {serving_input: content}\n\n\nserving_input: the name of the input layer of the underlying model.\ncontent: The feature values of the test item as a list.",
"import json\n\ngcs_input_uri = BUCKET_NAME + \"/\" + \"test.jsonl\"\nwith tf.io.gfile.GFile(gcs_input_uri, \"w\") as f:\n data = {serving_input: test_item_1.tolist()}\n f.write(json.dumps(data) + \"\\n\")\n data = {serving_input: test_item_2.tolist()}\n f.write(json.dumps(data) + \"\\n\")",
"Compute instance scaling\nYou have several choices on scaling the compute instances for handling your batch prediction requests:\n\nSingle Instance: The batch prediction requests are processed on a single compute instance.\n\nSet the minimum (MIN_NODES) and maximum (MAX_NODES) number of compute instances to one.\n\n\nManual Scaling: The batch prediction requests are split across a fixed number of compute instances that you manually specified.\n\n\nSet the minimum (MIN_NODES) and maximum (MAX_NODES) number of compute instances to the same number of nodes. When a model is first deployed to the instance, the fixed number of compute instances are provisioned and batch prediction requests are evenly distributed across them.\n\n\nAuto Scaling: The batch prediction requests are split across a scaleable number of compute instances.\n\nSet the minimum (MIN_NODES) number of compute instances to provision when a model is first deployed and to de-provision, and set the maximum (`MAX_NODES) number of compute instances to provision, depending on load conditions.\n\nThe minimum number of compute instances corresponds to the field min_replica_count and the maximum number of compute instances corresponds to the field max_replica_count, in your subsequent deployment request.",
"MIN_NODES = 1\nMAX_NODES = 1",
"Make batch prediction request\nNow that your batch of two test items is ready, let's do the batch request. Use this helper function create_batch_prediction_job, with the following parameters:\n\ndisplay_name: The human readable name for the prediction job.\nmodel_name: The Vertex fully qualified identifier for the Model resource.\ngcs_source_uri: The Cloud Storage path to the input file -- which you created above.\ngcs_destination_output_uri_prefix: The Cloud Storage path that the service will write the predictions to.\nparameters: Additional filtering parameters for serving prediction results.\n\nThe helper function calls the job client service's create_batch_prediction_job metho, with the following parameters:\n\nparent: The Vertex location root path for Dataset, Model and Pipeline resources.\nbatch_prediction_job: The specification for the batch prediction job.\n\nLet's now dive into the specification for the batch_prediction_job:\n\ndisplay_name: The human readable name for the prediction batch job.\nmodel: The Vertex fully qualified identifier for the Model resource.\ndedicated_resources: The compute resources to provision for the batch prediction job.\nmachine_spec: The compute instance to provision. Use the variable you set earlier DEPLOY_GPU != None to use a GPU; otherwise only a CPU is allocated.\nstarting_replica_count: The number of compute instances to initially provision, which you set earlier as the variable MIN_NODES.\nmax_replica_count: The maximum number of compute instances to scale to, which you set earlier as the variable MAX_NODES.\nmodel_parameters: Additional filtering parameters for serving prediction results. No Additional parameters are supported for custom models.\ninput_config: The input source and format type for the instances to predict.\ninstances_format: The format of the batch prediction request file: csv or jsonl.\ngcs_source: A list of one or more Cloud Storage paths to your batch prediction requests.\noutput_config: The output destination and format for the predictions.\nprediction_format: The format of the batch prediction response file: csv or jsonl.\ngcs_destination: The output destination for the predictions.\n\nThis call is an asychronous operation. You will print from the response object a few select fields, including:\n\nname: The Vertex fully qualified identifier assigned to the batch prediction job.\ndisplay_name: The human readable name for the prediction batch job.\nmodel: The Vertex fully qualified identifier for the Model resource.\ngenerate_explanations: Whether True/False explanations were provided with the predictions (explainability).\nstate: The state of the prediction job (pending, running, etc).\n\nSince this call will take a few moments to execute, you will likely get JobState.JOB_STATE_PENDING for state.",
"BATCH_MODEL = \"boston_batch-\" + TIMESTAMP\n\n\ndef create_batch_prediction_job(\n display_name,\n model_name,\n gcs_source_uri,\n gcs_destination_output_uri_prefix,\n parameters=None,\n):\n\n if DEPLOY_GPU:\n machine_spec = {\n \"machine_type\": DEPLOY_COMPUTE,\n \"accelerator_type\": DEPLOY_GPU,\n \"accelerator_count\": DEPLOY_NGPU,\n }\n else:\n machine_spec = {\n \"machine_type\": DEPLOY_COMPUTE,\n \"accelerator_count\": 0,\n }\n\n batch_prediction_job = {\n \"display_name\": display_name,\n # Format: 'projects/{project}/locations/{location}/models/{model_id}'\n \"model\": model_name,\n \"model_parameters\": json_format.ParseDict(parameters, Value()),\n \"input_config\": {\n \"instances_format\": IN_FORMAT,\n \"gcs_source\": {\"uris\": [gcs_source_uri]},\n },\n \"output_config\": {\n \"predictions_format\": OUT_FORMAT,\n \"gcs_destination\": {\"output_uri_prefix\": gcs_destination_output_uri_prefix},\n },\n \"dedicated_resources\": {\n \"machine_spec\": machine_spec,\n \"starting_replica_count\": MIN_NODES,\n \"max_replica_count\": MAX_NODES,\n },\n \"generate_explanation\": True,\n }\n response = clients[\"job\"].create_batch_prediction_job(\n parent=PARENT, batch_prediction_job=batch_prediction_job\n )\n print(\"response\")\n print(\" name:\", response.name)\n print(\" display_name:\", response.display_name)\n print(\" model:\", response.model)\n try:\n print(\" generate_explanation:\", response.generate_explanation)\n except:\n pass\n print(\" state:\", response.state)\n print(\" create_time:\", response.create_time)\n print(\" start_time:\", response.start_time)\n print(\" end_time:\", response.end_time)\n print(\" update_time:\", response.update_time)\n print(\" labels:\", response.labels)\n return response\n\n\nIN_FORMAT = \"jsonl\"\nOUT_FORMAT = \"jsonl\"\n\nresponse = create_batch_prediction_job(\n BATCH_MODEL, model_to_deploy_id, gcs_input_uri, BUCKET_NAME\n)",
"Now get the unique identifier for the batch prediction job you created.",
"# The full unique ID for the batch job\nbatch_job_id = response.name\n# The short numeric ID for the batch job\nbatch_job_short_id = batch_job_id.split(\"/\")[-1]\n\nprint(batch_job_id)",
"Get information on a batch prediction job\nUse this helper function get_batch_prediction_job, with the following paramter:\n\njob_name: The Vertex fully qualified identifier for the batch prediction job.\n\nThe helper function calls the job client service's get_batch_prediction_job method, with the following paramter:\n\nname: The Vertex fully qualified identifier for the batch prediction job. In this tutorial, you will pass it the Vertex fully qualified identifier for your batch prediction job -- batch_job_id\n\nThe helper function will return the Cloud Storage path to where the predictions are stored -- gcs_destination.",
"def get_batch_prediction_job(job_name, silent=False):\n response = clients[\"job\"].get_batch_prediction_job(name=job_name)\n if silent:\n return response.output_config.gcs_destination.output_uri_prefix, response.state\n\n print(\"response\")\n print(\" name:\", response.name)\n print(\" display_name:\", response.display_name)\n print(\" model:\", response.model)\n try: # not all data types support explanations\n print(\" generate_explanation:\", response.generate_explanation)\n except:\n pass\n print(\" state:\", response.state)\n print(\" error:\", response.error)\n gcs_destination = response.output_config.gcs_destination\n print(\" gcs_destination\")\n print(\" output_uri_prefix:\", gcs_destination.output_uri_prefix)\n return gcs_destination.output_uri_prefix, response.state\n\n\npredictions, state = get_batch_prediction_job(batch_job_id)",
"Get the predictions\nWhen the batch prediction is done processing, the job state will be JOB_STATE_SUCCEEDED.\nFinally you view the predictions stored at the Cloud Storage path you set as output. The predictions will be in a JSONL format, which you indicated at the time you made the batch prediction job, under a subfolder starting with the name prediction, and under that folder will be a file called prediction.results-xxxxx-of-xxxxx.\nNow display (cat) the contents. You will see multiple JSON objects, one for each prediction.\nFinally you view the explanations stored at the Cloud Storage path you set as output. The explanations will be in a JSONL format, which you indicated at the time you made the batch explanation job, under a subfolder starting with the name prediction, and under that folder will be a file called explanation-results-xxxx-of-xxxx.\nLet's display (cat) the contents. You will a row for each prediction -- in this case, there is just one row. The row contains:\n\ndense_input: The input for the prediction.\nprediction: The predicted value.",
"def get_latest_predictions(gcs_out_dir):\n \"\"\" Get the latest prediction subfolder using the timestamp in the subfolder name\"\"\"\n folders = !gsutil ls $gcs_out_dir\n latest = \"\"\n for folder in folders:\n subfolder = folder.split(\"/\")[-2]\n if subfolder.startswith(\"prediction-\"):\n if subfolder > latest:\n latest = folder[:-1]\n return latest\n\n\nwhile True:\n predictions, state = get_batch_prediction_job(batch_job_id, True)\n if state != aip.JobState.JOB_STATE_SUCCEEDED:\n print(\"The job has not completed:\", state)\n if state == aip.JobState.JOB_STATE_FAILED:\n break\n else:\n folder = get_latest_predictions(predictions)\n ! gsutil ls $folder/explanation.results*\n\n print(\"Results:\")\n ! gsutil cat $folder/explanation.results*\n\n print(\"Errors:\")\n ! gsutil cat $folder/prediction.errors*\n break\n time.sleep(60)",
"Cleaning up\nTo clean up all GCP resources used in this project, you can delete the GCP\nproject you used for the tutorial.\nOtherwise, you can delete the individual resources you created in this tutorial:\n\nDataset\nPipeline\nModel\nEndpoint\nBatch Job\nCustom Job\nHyperparameter Tuning Job\nCloud Storage Bucket",
"delete_dataset = True\ndelete_pipeline = True\ndelete_model = True\ndelete_endpoint = True\ndelete_batchjob = True\ndelete_customjob = True\ndelete_hptjob = True\ndelete_bucket = True\n\n# Delete the dataset using the Vertex fully qualified identifier for the dataset\ntry:\n if delete_dataset and \"dataset_id\" in globals():\n clients[\"dataset\"].delete_dataset(name=dataset_id)\nexcept Exception as e:\n print(e)\n\n# Delete the training pipeline using the Vertex fully qualified identifier for the pipeline\ntry:\n if delete_pipeline and \"pipeline_id\" in globals():\n clients[\"pipeline\"].delete_training_pipeline(name=pipeline_id)\nexcept Exception as e:\n print(e)\n\n# Delete the model using the Vertex fully qualified identifier for the model\ntry:\n if delete_model and \"model_to_deploy_id\" in globals():\n clients[\"model\"].delete_model(name=model_to_deploy_id)\nexcept Exception as e:\n print(e)\n\n# Delete the endpoint using the Vertex fully qualified identifier for the endpoint\ntry:\n if delete_endpoint and \"endpoint_id\" in globals():\n clients[\"endpoint\"].delete_endpoint(name=endpoint_id)\nexcept Exception as e:\n print(e)\n\n# Delete the batch job using the Vertex fully qualified identifier for the batch job\ntry:\n if delete_batchjob and \"batch_job_id\" in globals():\n clients[\"job\"].delete_batch_prediction_job(name=batch_job_id)\nexcept Exception as e:\n print(e)\n\n# Delete the custom job using the Vertex fully qualified identifier for the custom job\ntry:\n if delete_customjob and \"job_id\" in globals():\n clients[\"job\"].delete_custom_job(name=job_id)\nexcept Exception as e:\n print(e)\n\n# Delete the hyperparameter tuning job using the Vertex fully qualified identifier for the hyperparameter tuning job\ntry:\n if delete_hptjob and \"hpt_job_id\" in globals():\n clients[\"job\"].delete_hyperparameter_tuning_job(name=hpt_job_id)\nexcept Exception as e:\n print(e)\n\nif delete_bucket and \"BUCKET_NAME\" in globals():\n ! gsutil rm -r $BUCKET_NAME"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
char-lie/python_presentations
|
ctypes/change_integer.ipynb
|
mit
|
[
"Basic example\nMultiplication of two integer numbers is pretty simple.\nAdditional information can be found in the documentation.\nWrite C code\nFirst of all you should open your favourite text editor and write C code there.\nLinux\nWe want to multiply two integers and get integer as the result\nc\nint mul(int a, int b) {\n return a * b;\n}\nNotice, that it's not mandatory to have main entry point function\nWindows\nYou should specify that function is written in C and needs to be exported to compile DLL in Visual Studio and use it with ctypes\n```c\nifdef __cplusplus\nextern \"C\" {\nendif\n__declspec(dllexport) int mul(int a, int b) {\nreturn a * b;\n\n}\nifdef __cplusplus\n}\nendif\n```\nCompile shared library\nYou can use gcc in Linux\nbash\ngcc -shared basics.c -o lib_basics.so\nLoad created library in Python",
"from ctypes import cdll",
"In Linux",
"basics = cdll.LoadLibrary('./lib_basics.so')",
"In Windows\npython\nbasics = cdll.LoadLibrary('lib_basics.dll')\nUsage\nNow it's pretty simple to call mul function just like it was a property of module basics",
"basics.mul(2, 5)",
"See? It's easy\nDot product of two arrays\nArray is a block of memory split into chunks of a single type and it's easy to use C arrays with ctypes\nC function\nJust multiply two arrays element-wise and sum the result\n```c\ninclude <stdlib.h>\nint dot(int a, int b, size_t length) {\n int result = 0;\n while (length --> 0) {\n result += a[length] * b[length];\n }\n return result;\n}\n```\nCreate arrays\nWe have to import int data type from ctypes",
"from ctypes import c_int",
"Say, we need to multiply 3-dimensional vectors",
"first = (c_int * 3)(1, 2, 3)",
"We can create an alias for this data type and use it",
"vector3D = c_int * 3\nsecond = vector3D(4, 5, 6)",
"Call the function",
"c_result = basics.dot(first, second, 3)\npython_result = sum(a * b for a, b in zip([1, 2, 3], [4, 5, 6]))\nprint('C code returned', c_result, 'and Python code returned', python_result)\n\nbasics.dot((c_int*1)(2), (c_int*1)(*[3]), 1)",
"Following examples will cause errors",
"try:\n vector3D([1, 2, 3])\nexcept:\n print('You cannot pass lists')\ntry:\n vector3D(0, 1, 2, 3)\nexcept:\n print('Forbidden to provide more elements than it should accept') ",
"Available types\nFollowing types can be used to pass arguments to C functions\n| ctypes | type | C type Python type |\n| ----------- | -------------------------------------- | ------------------- |\n| c_bool | _Bool | bool (1) |\n| c_char | char | 1-character bytes object |\n| c_wchar | wchar_t | 1-character string |\n| c_byte | char | int |\n| c_ubyte | unsigned | char int |\n| c_short | short | int |\n| c_ushort | unsigned | short int |\n| c_int | int | int |\n| c_uint | unsigned | int int |\n| c_long | long | int |\n| ctypes | type | C type Python type |\n| ----------- | -------------------------------------- | ------------------- |\n| c_ulong | unsigned long | int |\n| c_longlong | __int64 or long long | int |\n| c_ulonglong | unsigned __int64 or unsigned long long | int |\n| c_size_t | size_t | int |\n| c_ssize_t | ssize_t or Py_ssize_t | int |\n| c_float | float | float |\n| c_double | double | float |\n| c_longdouble | long double | float |\n| c_char_p | char * (NUL terminated) | bytes object or None |\n| c_wchar_p | wchar_t * (NUL terminated) | string or None |\n| c_void_p | void * | int or None |\nChange the long passed to your function\nLarge numbers (bignum) are represented as arrays of longs in Python:\n{d0, d1, d2, ...}\nWe can change each one of them, so why not?\nRead the documentation about Python API for C.\nPrepare shared library\nC function:\n```c\ninclude <Python.h>\nint set_long(PyLongObject* o, long new_value,\n size_t digit) {\n\no->ob_digit[digit] = new_value;\n\nreturn 0;\n\n}\n```\nFile should be compiled to shared library (dll in Windows).\nMakefile for Linux:\nFLAGS=-shared\nLIBRARIES=-I/usr/include/python3.4\nBUILD_LIBRARY=gcc $(FLAGS) $(LIBRARIES)\nall:\n $(BUILD_LIBRARY) setters.c -o lib_setters.so\nUse shared library\nIt's handy to create Python wrapper for this C function",
"from ctypes import cdll, c_long, c_size_t, c_voidp\n\nsetters = cdll.LoadLibrary('./lib_setters.so')\n\ndef change_long(a, b=0, digit=0):\n setters.set_long(c_voidp(id(a)), c_long(b), c_size_t(digit))",
"Don't forget that Python interpreter will not create new objects for small integers like 0, so we should avoid assigning new values to such numbers, because they will be changed everywhere they're used",
"from ctypes import c_long, c_size_t, c_voidp\n\ndef change_long(a, b=0, digit=0):\n args = (a, b, digit)\n if not all(type(a) is int for a in args):\n raise TypeError('All parameters should be of type \"int\", '\n 'but {} provided'.format(map(type, args)))\n if a + 0 is a:\n raise ValueError('No way. You don\\'t want to break '\n 'your interpreter, right?')\n setters.set_long(c_voidp(id(a)), c_long(b), c_size_t(digit))",
"Recall that we cannot change values of integers inside the Python functions",
"def variable_info(text, variable):\n print('{:^30}: {:#05x} ({:#x})'.format(text, variable, id(variable)))\n\ndef foo(a, new_value):\n a = new_value\n\na = 2**10\nvariable_info('Before function call', a)\nfoo(a, 5)\nvariable_info('After function call', a)",
"Now forget it and take a look at what we've done",
"a = 2**10\nb = a\nvariable_info('Before function call', a)\nchange_long(a, 2, 0)\nvariable_info('After function call', a)\nvariable_info('What\\'s about b? Here it is', b)",
"Cross product\n| i | j | k |\n|---|---|---|\n|ux |uy |uz |\n|vx |vy |vz |",
"from numpy import array, cross\n\nbasis = [\n [1, 0, 0],\n [0, 1, 0],\n [0, 0, 1]\n]",
"Plain Python",
"def py_cross(u, v):\n return [\n u[1] * v[2] - u[2] * v[1],\n u[2] * v[0] - u[0] * v[2],\n u[0] * v[1] - u[1] * v[0]\n ]\n\npy_cross(basis[0], basis[1])",
"NumPy",
"cross(array(basis[0]), array(basis[1]))",
"C\nIt's better not to create new array in C, but to provide resulting one to store result in it\n```c\nint cross(float u, float v, float* w) {\n w[0] = u[1] * v[2] - u[2] * v[1];\n w[1] = u[2] * v[0] - u[0] * v[2];\n w[2] = u[0] * v[1] - u[1] * v[0];\n return 0;\n}\n```",
"from ctypes import cdll\nfrom numpy import empty_like\n\nc_cross = cdll.LoadLibrary('./lib_cross.so')\nu = array(basis[0]).astype('f')\nv = array(basis[1]).astype('f')\nw = empty_like(u)\n\ndef cross_wrapper(u, v, w):\n return c_cross.cross(u.ctypes.get_as_parameter(),\n v.ctypes.get_as_parameter(),\n w.ctypes.get_as_parameter())\n\ncross_wrapper(u, v, w)\nprint(w)",
"Let's run performance tests",
"from numpy.random import rand\n\nBIG_ENOUGH_INTEGER = int(1E5)\n\nvectors_u = rand(BIG_ENOUGH_INTEGER, 3).astype('f')\nvectors_v = rand(BIG_ENOUGH_INTEGER, 3).astype('f')\n\nprint('Vectors u:', vectors_u)\n\n%%timeit\nfor i in range(BIG_ENOUGH_INTEGER):\n py_cross(vectors_u[i], vectors_v[i])\n\n%%timeit\ncross(vectors_u, vectors_v)\n\n%%timeit\nvectors_w = empty_like(vectors_u)\n\nfor i in range(BIG_ENOUGH_INTEGER):\n cross_wrapper(vectors_u[i], vectors_v[i], vectors_w[i])",
"Are calculations right?",
"from numpy import allclose\n\nnp_result = cross(vectors_u, vectors_v)\n\npy_result = [py_cross(vectors_u[i], vectors_v[i])\n for i in range(BIG_ENOUGH_INTEGER)]\nprint(allclose(np_result, py_result))\n\nvectors_w = empty_like(vectors_u)\nassert sum([cross_wrapper(vectors_u[i], vectors_v[i], vectors_w[i])\n for i in range(BIG_ENOUGH_INTEGER)]) == 0\nprint(allclose(np_result, vectors_w))\n",
"NumPy versus human: final battle\nWhat have we done wrong? C code should be faster! Maybe Python loop is an issue?\n```c\nint cross_vectors(float u, float v, float *w,\n size_t amount) {\nwhile(amount --> 0) {\n cross(&u[amount * 3], &v[amount * 3],\n &w[amount * 3]);\n}\n\n}\n```\nIt's better to compile with optimization. Also to use -fPIC flag to avoid following compilation error\nrelocation against symbol `cross' can not be used when making a shared object\nWhat we get:\ngcc -shared -fPIC cross.c -O3 -o lib_cross.so\nNumpy arrays are flattened when got as C arrays.\nAlso len operator returns amount of rows of matrix. If you want to get the total amount of elements, you should use size method.",
"vectors_w = empty_like(vectors_u)\n\nc_vectors_u = vectors_u.ctypes.get_as_parameter()\nc_vectors_v = vectors_v.ctypes.get_as_parameter()\nc_vectors_w = vectors_w.ctypes.get_as_parameter()\n\n%%timeit\nvectors_w = empty_like(vectors_u)\n\nc_vectors_w = vectors_w.ctypes.get_as_parameter()\nc_cross.cross_vectors(c_vectors_u, c_vectors_v, c_vectors_w, len(vectors_u))\n\nc_cross.cross_vectors(c_vectors_u, c_vectors_v, c_vectors_w, len(vectors_u))\nprint(allclose(np_result, vectors_w))",
"Are you surprised?"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
djfan/yelp-challenge
|
data_urbana_champaign/UC02_FilterCat_TransformAtt.ipynb
|
mit
|
[
"import numpy as np\nimport pandas as pd\nimport yaml\nimport seaborn\nimport pylab as pl\n%pylab inline",
"Part 1 - Filter Categories",
"with open(\"./cat_food.txt\", 'r') as fi:\n food = fi.read().splitlines() \n# Example: \n# target_cat = ['Restaurants', 'Food'] # to be continued...\ntarget_cat = food\n\ndf = pd.read_pickle(\"./UC01_df_uc_open.p\")\nprint df.shape\ndf = df[df.categories.apply(lambda x: not set(x).isdisjoint(set(target_cat)))]\nprint df.shape",
"For now, start from here...\nPart 2 - Build Attribute Table",
"# load data\n# df = pd.read_pickle(\"./UC01_df_uc_open.p\") \n\ndf.index = df.business_id.values\natt = df.attributes\n\n# extract attributes\na = att.apply(lambda x: yaml.load('['+','.join(x)+']'))\n\n# find full-size attribute set\n# if subattribute exists, use '_' to connect them.\natt_all = set()\nfor row in a:\n for i in row:\n if not isinstance(i.values()[0], dict):\n att_all.add(i.keys()[0])\n else:\n prefix = i.keys()[0]\n for k in i.values()[0].iterkeys():\n suffix = k\n temp = prefix + '_' + suffix\n att_all.add(temp)\nlen(att_all)\n\n# create full-size attribute table\n# index = business_id\n# col = att_all\ntab = pd.DataFrame(columns=att_all, index=df.index)\n\nfor ind in tab.index:\n for j in a[ind]:\n if not isinstance(j.values()[0], dict):\n tab.loc[ind, j.keys()[0]] = j.values()[0]\n else:\n prefix = j.keys()[0]\n for k, v in j.values()[0].iteritems():\n suffix = k\n temp = prefix + '_' + suffix\n tab.loc[ind, temp] = v",
"Part 3 - Missing Values\n(missing, not False)",
"print tab.shape[0]\ntab.count(axis=0).sort_values(ascending=False)[1:20]\n# 729 * 50% = 360 -> 3 attributes -> RestaurantsPriceRange2 / BusinessParking / BikeParking \n\nprint tab.shape[1]\ntab.count(axis=1).sort_values(ascending=False)[1:20]\n\npl.violinplot(tab.count(axis=1).values,vert=False)",
"Part 4 - Join & output",
"# sort column by alphabeta\ntab.columns = tab.columns.sort_values()\n# shape\nprint df.shape, tab.shape\n# join two table\ndf_with_attribute = df.join(tab)\n\ndf_with_attribute.to_pickle(\"./UC02_df_uc_food_att.p\")",
"Summary\nLabel:\n* stars\nFilter:\n* city - u & c\n* is_open == 1\n* categories <- target category list\nTransform:\n* attributes\n<br/><br/>\nUseful Attribute:\n* From attributes\n* review_count\n* hours - not preprocess\n* lat/long, address, postal_code\n* (maybe) name - text?\nNot Useful:\n* state - all identical\n* type - all identical\n* neighborhood - all nan\n<br/><br/>\nIdea from last discussion:\n* make full use of remaining categories"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
napsternxg/blog
|
content/Metropolis Hastings algorithm.ipynb
|
gpl-3.0
|
[
"%matplotlib inline\nfrom ipywidgets import interact\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom scipy import stats \n\ndef plot_distributions(samples, thetas, true_rv, prior_rv):\n x = np.arange(samples.min()-1,samples.max()+1,0.01)\n fig,ax = plt.subplots(1,2, figsize=(8,4))\n ax[0].plot(thetas, prior_rv.pdf(thetas), color=\"b\", label=r'Prior')\n ax[0].axvline(x=true_rv.mean(), color=\"k\", linestyle=\"--\", lw=1, label=\"True mean\")\n ax[1].plot(x, true_rv.pdf(x), color=\"r\", label=r'True')\n ax[1].hist(samples, weights=np.ones_like(samples)/samples.shape[0], alpha=0.5, label=r'Samples')\n ax[0].legend()\n ax[1].legend()\n ax[0].set_xlabel(\"theta\")\n ax[1].set_xlabel(\"x\")\n ax[0].set_title(\"Dist theta\")\n ax[1].set_title(\"Dist data\")\n fig.tight_layout()\n return fig, ax\n\n\ndef construct_ll_func(samples, *rv_fixed_args, rv_class=stats.norm):\n def log_likelihood(*rv_params):\n return np.sum(rv_class(*rv_params, *rv_fixed_args).logpdf(samples), axis=-1)\n return log_likelihood",
"Inspiration: https://www.youtube.com/watch?v=4gNpgSPal_8\nNotes\n\nMetropolis hastings - Sample next location from distribution at the currect location. Move to next location based on the MH equation.\nUse a gaussian as the distribution and show the movement for arbitrary distributions.\n\n\nGibbs sampling - Move along one dimension of the location conditional on the full current location. \n\nProblem definition\nLet us say we are given $n$ samples $X$ which are supposedly generated from a true distribution $p(X\\mid\\theta)$ which is parametrized by variables $\\theta$. We wish to know the probability distribution over all possible $\\theta$s, $p(\\theta \\mid X)$, which could have generated the data samples $X$. We start with some prior distribution $\\psi(\\theta)$\nOfcourse, $p(\\theta \\mid X)$ and $p(X\\mid\\theta)$ are related by the following equation:\n$$\n\\begin{equation}\np(\\theta \\mid X) = \\frac{p(X\\mid\\theta)\\psi(\\theta)}{Z}\\\nZ = \\int_{\\theta}{p(X\\mid\\theta)\\psi(\\theta)d{\\theta}}\n\\end{equation}\n$$\nIf $p(\\theta \\mid X)$ doesn't have an analytic solution, then calculating it is hard because of the integral involved in calculation of $Z$. Hence, we use the monte-carlo methods to samples from $p(\\theta \\mid X)$ to approximate its value.",
"N=10\ntrue_mean, true_std = -3, 2\ntrue_rv = stats.norm(true_mean, true_std)\nsamples = true_rv.rvs(N)\nprior_rv = stats.norm(0, 1)\nthetas_prior = np.arange(-5,5, 0.01)\nplot_distributions(samples, thetas_prior, true_rv, prior_rv)",
"Sampling from the markov chain\nWe want to ensure that we sample $\\theta$'s such that their distribution converges to the true distribution over $\\theta$s. This distribution will help us quantify our uncertainity in the values of $\\theta$s. So, if the number of data samples $X$ is too few, then the uncertainity should be higher else uncertainity should reduce.",
"current_theta = prior_rv.rvs()\ncurrent_rv = stats.norm(current_theta, true_std)\nll = np.sum(current_rv.logpdf(samples)) # Manually specify the log likelihood\npi_theta = np.exp(ll)*prior_rv.pdf(current_theta)\nprint(current_theta, pi_theta, ll)\nfig, ax = plot_distributions(samples, thetas_prior, true_rv, prior_rv)\nax[0].axvline(x=current_theta, color=\"r\", linestyle=\"--\", lw=1)",
"Getting the maximum likelihood estimate\nThe likelihood of the data $X$ given the parameter $\\theta$ equals $p(X \\mid \\theta)$. If we assume, X includes independent and identically distributed (i.i.d) samples from a gaussian distribution with $\\theta$ as its mean and some fixed std. deviation. Then $p(X \\mid \\theta) = \\prod_{x \\in X}p(x \\mid \\theta)$. In most cases we will be interested in finding the value of $\\theta$ which maximized the likelihood of the data. For our current samples, the below figure show how the likelihood changes as the data changes.",
"log_likelihood = construct_ll_func(samples, true_std)\n\nthetas = np.arange(-10,10,0.01)\nplt.plot(thetas, [log_likelihood(theta) for theta in thetas], label=\"True\")\nlls = [log_likelihood(theta)+prior_rv.logpdf(theta)\n for theta in thetas]\nplt.plot(thetas, lls, color=\"r\", label=\"With prior\")\nplt.axvline(x=-3, color=\"k\", linestyle=\"--\", lw=1)\nplt.axvline(x=thetas[np.argmax(lls)], color=\"r\", linestyle=\"--\", lw=1)\nplt.xlabel(\"theta\")\nplt.ylabel(\"Log Likelihood\")\nplt.legend()",
"Finding the posterior distribution of $\\theta$ given $X$\nHowever, if we are insterested in quantifying the uncertainity in the estimate of $\\theta$ calculated using the data $X$, based on some prior belief of the distribution of $\\theta$, the we need to find the full posterior distribution $p(\\theta \\mid X)$. MCMC methods help in this regard, by allowing us to sample from $p(\\theta \\mid X)$ without finding the value of $Z$. \nMetropolis-Hastings (MH) algorithm is a simple way to get samples from this distribution. If we define a new quantity $\\pi(\\theta) = p(X \\mid \\theta)p(\\theta)$, then the MH algorithm relies on the following assumption for some given transition probabilty distribution $p(\\theta_{i+1} | \\theta_{i})$. Then points samples from this Markov chain $\\theta_{0}, \\theta_{1}, ...$, will resemble the samples from the true distribution, given that $p(\\theta_{i+1} | \\theta_{i})$ is defined in a way, such that this sequence of samples is ergodic. i.e.,\n$$\n\\begin{equation}\n\\pi(\\theta_{i})p(\\theta_{i+1} | \\theta_{i}) = \\pi(\\theta_{i+1})p(\\theta_{i} | \\theta_{i+1})\n\\end{equation}\n$$\nThe required $p(\\theta_{i+1} | \\theta_{i})$ can be formulated by the following method:\n* Use a proposal distribution $q(\\theta_{i+1} | \\theta_{i})$. E.g. $\\mathcal{N}(\\theta_{i}, 1)$, a gaussian distribution around the current value of $\\theta_{i}$\n* Now, define a probability of selecting $\\theta_{i+1}$ as follows:\n$$\n\\begin{equation}\n\\alpha(\\theta_{i+1}, \\theta_{i}) = min(1, \\frac{\\pi(\\theta_{i+1})q(\\theta_{i} | \\theta_{i+1})}{\\pi(\\theta_{i})q(\\theta_{i+1} | \\theta_{i})})\n\\end{equation}\n$$\n* Now, define $p(\\theta_{i+1} | \\theta_{i}) = \\alpha(\\theta_{i+1}, \\theta_{i})q(\\theta_{i+1} | \\theta_{i})$",
"def get_alpha(theta_curr, theta_next, log_likelihood, log_prior):\n ll_diff = log_likelihood(theta_next) - log_likelihood(theta_curr)\n prior_diff = log_prior(theta_next) - log_prior(theta_curr)\n return np.exp(np.min([0, ll_diff + prior_diff]))\n\ntheta_diffs = np.arange(-0.01,0.01,0.0001)\nalpha = [\n get_alpha(current_theta, current_theta + theta_diff, log_likelihood, prior_rv.logpdf)\n for theta_diff in theta_diffs\n]\nplt.plot(current_theta + theta_diffs, alpha)\nplt.axvline(x=current_theta, color=\"k\", linestyle=\"--\", lw=1)\nplt.xlabel(\"Theta next\")\nplt.ylabel(\"alpha(theta_next, theta_curr)\")\n\nthetas = [current_theta]\nn_samples=10000\nfor i in range(n_samples):\n next_theta = thetas[-1] + np.random.randn()\n alpha = get_alpha(thetas[-1], next_theta, log_likelihood, prior_rv.logpdf)\n next_theta = [thetas[-1], next_theta][stats.bernoulli.rvs(alpha)]\n thetas.append(next_theta)\nplt.plot(thetas)\nplt.axhline(y=np.mean(thetas[500:]), color=\"k\", linestyle=\"--\", lw=1)\n\nfig, ax = plot_distributions(samples, thetas_prior, true_rv, prior_rv)\nax[0].axvline(x=np.mean(thetas[500:]), color=\"r\", linestyle=\"--\", lw=1)\nax[0].hist(thetas[500:], weights=np.ones_like(thetas[500:])/len(thetas[500:]), color=\"r\", alpha=0.5)\nposterior_samples = stats.norm(\n np.mean(thetas[500:]), 2\n).rvs(1000)\n\nax[1].hist(posterior_samples,\n weights=np.ones_like(posterior_samples)/posterior_samples.shape[0],\n color=\"r\", alpha=0.3)",
"2D case",
"mean_vector = np.array([0.5, -0.2])\ncov_matrix = np.array([[0.5, 0.3], [0.3, 0.5]])\ntrue_rv_2d = stats.multivariate_normal(mean_vector, cov_matrix)\nprior_rv_2d = stats.multivariate_normal(np.zeros_like(mean_vector), np.eye(mean_vector.shape[0]))\nsamples_2d = true_rv_2d.rvs(100)\nthetas_prior_2d = np.array([[-1,-1], [1,1]])\n\ndef get_2d_dist_vals(x_range, y_range, step=0.01):\n x, y = np.mgrid[\n x_range[0]:x_range[1]:step,\n y_range[0]:y_range[1]:step,\n ]\n pos = np.empty(x.shape + (2,))\n pos[:, :, 0] = x; pos[:, :, 1] = y\n return x, y, pos\n \ndef plot_distributions_2d(samples, thetas_prior, true_rv, prior_rv):\n step = 0.01\n fig,ax = plt.subplots(1,2, figsize=(8,4))\n ## Plot theta distribution\n mean_vector = true_rv.mean\n x_range = [thetas_prior[:, 0].min()-1*step, thetas_prior[:, 0].max()+1*step]\n y_range = [thetas_prior[:, 1].min()-1*step, thetas_prior[:, 1].max()+1*step]\n x, y, pos = get_2d_dist_vals(x_range, y_range, step=step)\n ax[0].contourf(x, y, prior_rv.pdf(pos), alpha=0.3, cmap=\"Blues\", label=\"Prior\")\n ax[0].axvline(x=mean_vector[0], linestyle=\"--\", color=\"r\", alpha=0.5)\n ax[0].axhline(y=mean_vector[1], linestyle=\"--\", color=\"r\", alpha=0.5)\n \n ax[0].set_title(\"Dist theta\")\n \n ## Plot data distribution\n x_range = [samples[:, 0].min()-1*step, samples[:, 0].max()+1*step]\n y_range = [samples[:, 1].min()-1*step, samples[:, 1].max()+1*step]\n x, y, pos = get_2d_dist_vals(x_range, y_range, step=step)\n ax[1].contourf(x, y, true_rv.pdf(pos), alpha=0.7, cmap=\"Reds\", label=\"True\")\n \n ## Plot samples\n ax[1].plot(samples[:, 0], samples[:, 1], marker=\"x\", linestyle=\"none\", color=\"k\", label=\"Samples\")\n \n ax[1].set_title(\"Dist data\")\n fig.tight_layout()\n return fig, ax\n\n\nplot_distributions_2d(samples_2d, thetas_prior_2d, true_rv_2d, prior_rv_2d)\n\nlog_likelihood_2d = construct_ll_func(samples_2d, cov_matrix, rv_class=stats.multivariate_normal)\n\ncurrent_theta_2d = prior_rv_2d.rvs()\nthetas = [current_theta_2d]\ndiff_rv = stats.multivariate_normal(np.zeros_like(mean_vector), np.eye(mean_vector.shape[0]))\n\ndiff_theta = diff_rv.rvs()\nnext_theta = thetas[-1] + diff_theta\nalpha = get_alpha(thetas[-1], next_theta, log_likelihood_2d, prior_rv_2d.logpdf)\nprint(thetas[-1], next_theta, alpha, diff_theta)\nnext_theta = [thetas[-1], next_theta][stats.bernoulli.rvs(alpha)]\nthetas.append(next_theta)\n\nplt.plot(*zip(*thetas[:-2]), lw=1, marker=\"x\")\nplt.plot(thetas[-2][0], thetas[-2][1], marker=\"x\", color=\"r\")\nplt.plot(thetas[-1][0], thetas[-1][1], marker=\"x\", color=\"k\")\nplt.axvline(x=mean_vector[0], linestyle=\"--\", color=\"r\", alpha=0.5)\nplt.axhline(y=mean_vector[1], linestyle=\"--\", color=\"r\", alpha=0.5)\n\ncurrent_theta_2d = prior_rv_2d.rvs()\nthetas = [current_theta_2d]\ndiff_rv = stats.multivariate_normal(np.zeros_like(mean_vector), np.eye(mean_vector.shape[0]))\n\nn_samples=10000\nfor i in range(n_samples):\n next_theta = thetas[-1] + diff_rv.rvs()\n alpha = get_alpha(thetas[-1], next_theta, log_likelihood_2d, prior_rv_2d.logpdf)\n next_theta = [thetas[-1], next_theta][stats.bernoulli.rvs(alpha)]\n thetas.append(next_theta)\n\nthetas = np.array(thetas)\nthetas.shape\n\nfig, ax = plot_distributions_2d(samples_2d, thetas_prior_2d, true_rv_2d, prior_rv_2d)\n\nhb = ax[0].hexbin(thetas[500:, 0], thetas[500:, 1], gridsize=50, cmap='Reds', alpha=0.3)\ncb = fig.colorbar(hb, ax=ax[0])\ncb.set_label('counts')\n\n\nburn_in=500\nposterior_mean = thetas[burn_in:].mean(axis=0)\nfig = plt.figure(figsize=(8,4))\nax = 
plt.subplot2grid((2,2), (0,0))\nax.plot(thetas[burn_in:, 0], lw=1)\nax.axhline(y=posterior_mean[0], lw=1, linestyle=\"--\", color=\"k\")\nax.axhline(y=mean_vector[0], lw=1, linestyle=\"--\", color=\"r\")\nax.set_xlabel(\"theta 0\")\n\nax = plt.subplot2grid((2,2), (1,0))\nax.plot(thetas[burn_in:, 1], lw=1)\nax.axhline(y=posterior_mean[1], lw=1, linestyle=\"--\", color=\"k\")\nax.axhline(y=mean_vector[1], lw=1, linestyle=\"--\", color=\"r\")\nax.set_xlabel(\"theta 1\")\n\nax = plt.subplot2grid((2,2), (0,1), rowspan=2)\n\nax.plot(thetas[:, 0], thetas[:, 1], lw=1, linestyle=\"--\", marker=\"x\", ms=5)\nax.plot(posterior_mean[0], posterior_mean[1], marker=\"o\", color=\"k\", ms=10)\nax.axvline(x=mean_vector[0], linestyle=\"--\", color=\"r\", alpha=0.5)\nax.axhline(y=mean_vector[1], linestyle=\"--\", color=\"r\", alpha=0.5)\nax.set_xlabel(\"theta 0\")\nax.set_ylabel(\"theta 1\")\n\nfig.tight_layout()\n\nthetas.mean(axis=0), mean_vector\n\nstep=0.01\nfig = plt.figure(figsize=(8,4))\nposterior_mean = thetas[500:].mean(axis=0)\nposterior_mean_rv = stats.multivariate_normal(posterior_mean, true_rv_2d.cov)\n\nax = plt.subplot2grid((2,2), (0,0))\nax.hist(thetas[500:, 0], alpha=0.5)\nax.axvline(x=posterior_mean[0], linestyle=\"--\", color=\"b\", alpha=0.5)\nax.axvline(x=mean_vector[0], linestyle=\"--\", color=\"r\", alpha=0.5)\nax.set_xlabel(\"theta 0\")\n\nax = plt.subplot2grid((2,2), (1,0))\nax.hist(thetas[500:, 1], alpha=0.5)\nax.axvline(x=posterior_mean[1], linestyle=\"--\", color=\"b\", alpha=0.5)\nax.axvline(x=mean_vector[1], linestyle=\"--\", color=\"r\", alpha=0.5)\nax.set_xlabel(\"theta 1\")\n\nax = plt.subplot2grid((2,2), (0,1), rowspan=2)\n\nx_range = [samples_2d[:, 0].min()-1*step, samples_2d[:, 0].max()+1*step]\ny_range = [samples_2d[:, 1].min()-1*step, samples_2d[:, 1].max()+1*step]\nx, y, pos = get_2d_dist_vals(x_range, y_range)\nax.contourf(x, y, true_rv_2d.pdf(pos), alpha=0.3, cmap=\"Reds\", label=\"True\")\nax.axvline(x=mean_vector[0], linestyle=\"--\", color=\"r\", alpha=0.5, label=\"True\")\nax.axhline(y=mean_vector[1], linestyle=\"--\", color=\"r\", alpha=0.5)\n\nax.contourf(x, y, posterior_mean_rv.pdf(pos), alpha=0.3, cmap=\"Blues\", label=\"Posterior mean\")\nax.axvline(x=posterior_mean[0], linestyle=\"--\", color=\"b\", alpha=0.5, label=\"Posterior mean\")\nax.axhline(y=posterior_mean[1], linestyle=\"--\", color=\"b\", alpha=0.5)\n\nax.set_title(\"Data dist\")\nax.set_xlabel(\"theta 0\")\nax.set_ylabel(\"theta 1\")\nax.legend()\n\nfig.tight_layout()",
"Gibbs sampling\nThe Gibbs sampling is an extension of the MH to allow for conditional exploration along each parameter, keeping all the other parameters constant."
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
fastai/course-v3
|
nbs/dl2/02b_initializing.ipynb
|
apache-2.0
|
[
"import torch",
"Why you need a good init\nTo understand why initialization is important in a neural net, we'll focus on the basic operation you have there: matrix multiplications. So let's just take a vector x, and a matrix a initiliazed randomly, then multiply them 100 times (as if we had 100 layers). \nJump_to lesson 9 video",
"x = torch.randn(512)\na = torch.randn(512,512)\n\nfor i in range(100): x = a @ x\n\nx.mean(),x.std()",
"The problem you'll get with that is activation explosion: very soon, your activations will go to nan. We can even ask the loop to break when that first happens:",
"x = torch.randn(512)\na = torch.randn(512,512)\n\nfor i in range(100): \n x = a @ x\n if x.std() != x.std(): break\n\ni",
"It only takes 27 multiplications! On the other hand, if you initialize your activations with a scale that is too low, then you'll get another problem:",
"x = torch.randn(512)\na = torch.randn(512,512) * 0.01\n\nfor i in range(100): x = a @ x\n\nx.mean(),x.std()",
"Here, every activation vanished to 0. So to avoid that problem, people have come with several strategies to initialize their weight matices, such as:\n- use a standard deviation that will make sure x and Ax have exactly the same scale\n- use an orthogonal matrix to initialize the weight (orthogonal matrices have the special property that they preserve the L2 norm, so x and Ax would have the same sum of squares in that case)\n- use spectral normalization on the matrix A (the spectral norm of A is the least possible number M such that torch.norm(A@x) <= M*torch.norm(x) so dividing A by this M insures you don't overflow. You can still vanish with this)\nThe magic number for scaling\nHere we will focus on the first one, which is the Xavier initialization. It tells us that we should use a scale equal to 1/math.sqrt(n_in) where n_in is the number of inputs of our matrix.\nJump_to lesson 9 video",
"import math\n\nx = torch.randn(512)\na = torch.randn(512,512) / math.sqrt(512)\n\nfor i in range(100): x = a @ x\n\nx.mean(),x.std()",
"And indeed it works. Note that this magic number isn't very far from the 0.01 we had earlier.",
"1/ math.sqrt(512)",
"But where does it come from? It's not that mysterious if you remember the definition of the matrix multiplication. When we do y = a @ x, the coefficients of y are defined by\n$$y_{i} = a_{i,0} x_{0} + a_{i,1} x_{1} + \\cdots + a_{i,n-1} x_{n-1} = \\sum_{k=0}^{n-1} a_{i,k} x_{k}$$\nor in code:\ny[i] = sum([c*d for c,d in zip(a[i], x)])\nNow at the very beginning, our x vector has a mean of roughly 0. and a standard deviation of roughly 1. (since we picked it that way).",
"x = torch.randn(512)\nx.mean(), x.std()",
"NB: This is why it's extremely important to normalize your inputs in Deep Learning, the intialization rules have been designed with inputs that have a mean 0. and a standard deviation of 1.\nIf you need a refresher from your statistics course, the mean is the sum of all the elements divided by the number of elements (a basic average). The standard deviation represents if the data stays close to the mean or on the contrary gets values that are far away. It's computed by the following formula:\n$$\\sigma = \\sqrt{\\frac{1}{n}\\left[(x_{0}-m)^{2} + (x_{1}-m)^{2} + \\cdots + (x_{n-1}-m)^{2}\\right]}$$\nwhere m is the mean and $\\sigma$ (the greek letter sigma) is the standard deviation. Here we have a mean of 0, so it's just the square root of the mean of x squared.\nIf we go back to y = a @ x and assume that we chose weights for a that also have a mean of 0, we can compute the standard deviation of y quite easily. Since it's random, and we may fall on bad numbers, we repeat the operation 100 times.",
"mean,sqr = 0.,0.\nfor i in range(100):\n x = torch.randn(512)\n a = torch.randn(512, 512)\n y = a @ x\n mean += y.mean().item()\n sqr += y.pow(2).mean().item()\nmean/100,sqr/100",
"Now that looks very close to the dimension of our matrix 512. And that's no coincidence! When you compute y, you sum 512 product of one element of a by one element of x. So what's the mean and the standard deviation of such a product? We can show mathematically that as long as the elements in a and the elements in x are independent, the mean is 0 and the std is 1. This can also be seen experimentally:",
"mean,sqr = 0.,0.\nfor i in range(10000):\n x = torch.randn(1)\n a = torch.randn(1)\n y = a*x\n mean += y.item()\n sqr += y.pow(2).item()\nmean/10000,sqr/10000",
"Then we sum 512 of those things that have a mean of zero, and a mean of squares of 1, so we get something that has a mean of 0, and mean of square of 512, hence math.sqrt(512) being our magic number. If we scale the weights of the matrix a and divide them by this math.sqrt(512), it will give us a y of scale 1, and repeating the product has many times as we want won't overflow or vanish.\nAdding ReLU in the mix\nWe can reproduce the previous experiment with a ReLU, to see that this time, the mean shifts and the standard deviation becomes 0.5. This time the magic number will be math.sqrt(2/512) to properly scale the weights of the matrix.",
"mean,sqr = 0.,0.\nfor i in range(10000):\n x = torch.randn(1)\n a = torch.randn(1)\n y = a*x\n y = 0 if y < 0 else y.item()\n mean += y\n sqr += y ** 2\nmean/10000,sqr/10000",
"We can double check by running the experiment on the whole matrix product.",
"mean,sqr = 0.,0.\nfor i in range(100):\n x = torch.randn(512)\n a = torch.randn(512, 512)\n y = a @ x\n y = y.clamp(min=0)\n mean += y.mean().item()\n sqr += y.pow(2).mean().item()\nmean/100,sqr/100",
"Or that scaling the coefficient with the magic number gives us a scale of 1.",
"mean,sqr = 0.,0.\nfor i in range(100):\n x = torch.randn(512)\n a = torch.randn(512, 512) * math.sqrt(2/512)\n y = a @ x\n y = y.clamp(min=0)\n mean += y.mean().item()\n sqr += y.pow(2).mean().item()\nmean/100,sqr/100",
"The math behind is a tiny bit more complex, and you can find everything in the Kaiming and the Xavier paper but this gives the intuition behing those results."
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
hvillanua/deep-learning
|
autoencoder/Convolutional_Autoencoder_Solution.ipynb
|
mit
|
[
"Convolutional Autoencoder\nSticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.",
"%matplotlib inline\n\nimport numpy as np\nimport tensorflow as tf\nimport matplotlib.pyplot as plt\n\nfrom tensorflow.examples.tutorials.mnist import input_data\nmnist = input_data.read_data_sets('MNIST_data', validation_size=0)\n\nimg = mnist.train.images[2]\nplt.imshow(img.reshape((28, 28)), cmap='Greys_r')",
"Network Architecture\nThe encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.\n<img src='assets/convolutional_autoencoder.png' width=500px>\nHere our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data.\nWhat's going on with the decoder\nOkay, so the decoder has these \"Upsample\" layers that you might not have seen before. First off, I'll discuss a bit what these layers aren't. Usually, you'll see transposed convolution layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the transposed convolution layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 path in a transposed convolution layer. The TensorFlow API provides us with an easy way to create the layers, tf.nn.conv2d_transpose. \nHowever, transposed convolution layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In this Distill article from Augustus Odena, et al, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with tf.image.resize_images, followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.\n\nExercise: Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used the reduce the width and height. A stride of 2 will reduce the size by a factor of 2. Odena et al claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in tf.image.resize_images or use tf.image.resize_nearest_neighbor.",
"inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')\ntargets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')\n\n### Encoder\nconv1 = tf.layers.conv2d(inputs_, 16, (3,3), padding='same', activation=tf.nn.relu)\n# Now 28x28x16\nmaxpool1 = tf.layers.max_pooling2d(conv1, (2,2), (2,2), padding='same')\n# Now 14x14x16\nconv2 = tf.layers.conv2d(maxpool1, 8, (3,3), padding='same', activation=tf.nn.relu)\n# Now 14x14x8\nmaxpool2 = tf.layers.max_pooling2d(conv2, (2,2), (2,2), padding='same')\n# Now 7x7x8\nconv3 = tf.layers.conv2d(maxpool2, 8, (3,3), padding='same', activation=tf.nn.relu)\n# Now 7x7x8\nencoded = tf.layers.max_pooling2d(conv3, (2,2), (2,2), padding='same')\n# Now 4x4x8\n\n### Decoder\nupsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7))\n# Now 7x7x8\nconv4 = tf.layers.conv2d(upsample1, 8, (3,3), padding='same', activation=tf.nn.relu)\n# Now 7x7x8\nupsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14))\n# Now 14x14x8\nconv5 = tf.layers.conv2d(upsample2, 8, (3,3), padding='same', activation=tf.nn.relu)\n# Now 14x14x8\nupsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28))\n# Now 28x28x8\nconv6 = tf.layers.conv2d(upsample3, 16, (3,3), padding='same', activation=tf.nn.relu)\n# Now 28x28x16\n\nlogits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None)\n#Now 28x28x1\n\ndecoded = tf.nn.sigmoid(logits, name='decoded')\n\nloss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)\ncost = tf.reduce_mean(loss)\nopt = tf.train.AdamOptimizer(0.001).minimize(cost)",
"Training\nAs before, here wi'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.",
"sess = tf.Session()\n\nepochs = 20\nbatch_size = 200\nsess.run(tf.global_variables_initializer())\nfor e in range(epochs):\n for ii in range(mnist.train.num_examples//batch_size):\n batch = mnist.train.next_batch(batch_size)\n imgs = batch[0].reshape((-1, 28, 28, 1))\n batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs,\n targets_: imgs})\n\n print(\"Epoch: {}/{}...\".format(e+1, epochs),\n \"Training loss: {:.4f}\".format(batch_cost))\n\nfig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))\nin_imgs = mnist.test.images[:10]\nreconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))})\n\nfor images, row in zip([in_imgs, reconstructed], axes):\n for img, ax in zip(images, row):\n ax.imshow(img.reshape((28, 28)), cmap='Greys_r')\n ax.get_xaxis().set_visible(False)\n ax.get_yaxis().set_visible(False)\n\n\nfig.tight_layout(pad=0.1)\n\nsess.close()",
"Denoising\nAs I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practive. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.\n\nSince this is a harder problem for the network, we'll want to use deeper convolutional layers here, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.\n\nExercise: Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers.",
"inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')\ntargets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')\n\n### Encoder\nconv1 = tf.layers.conv2d(inputs_, 32, (3,3), padding='same', activation=tf.nn.relu)\n# Now 28x28x32\nmaxpool1 = tf.layers.max_pooling2d(conv1, (2,2), (2,2), padding='same')\n# Now 14x14x32\nconv2 = tf.layers.conv2d(maxpool1, 32, (3,3), padding='same', activation=tf.nn.relu)\n# Now 14x14x32\nmaxpool2 = tf.layers.max_pooling2d(conv2, (2,2), (2,2), padding='same')\n# Now 7x7x32\nconv3 = tf.layers.conv2d(maxpool2, 16, (3,3), padding='same', activation=tf.nn.relu)\n# Now 7x7x16\nencoded = tf.layers.max_pooling2d(conv3, (2,2), (2,2), padding='same')\n# Now 4x4x16\n\n### Decoder\nupsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7))\n# Now 7x7x16\nconv4 = tf.layers.conv2d(upsample1, 16, (3,3), padding='same', activation=tf.nn.relu)\n# Now 7x7x16\nupsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14))\n# Now 14x14x16\nconv5 = tf.layers.conv2d(upsample2, 32, (3,3), padding='same', activation=tf.nn.relu)\n# Now 14x14x32\nupsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28))\n# Now 28x28x32\nconv6 = tf.layers.conv2d(upsample3, 32, (3,3), padding='same', activation=tf.nn.relu)\n# Now 28x28x32\n\nlogits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None)\n#Now 28x28x1\n\ndecoded = tf.nn.sigmoid(logits, name='decoded')\n\nloss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)\ncost = tf.reduce_mean(loss)\nopt = tf.train.AdamOptimizer(0.001).minimize(cost)\n\nsess = tf.Session()\n\nepochs = 100\nbatch_size = 200\n# Set's how much noise we're adding to the MNIST images\nnoise_factor = 0.5\nsess.run(tf.global_variables_initializer())\nfor e in range(epochs):\n for ii in range(mnist.train.num_examples//batch_size):\n batch = mnist.train.next_batch(batch_size)\n # Get images from the batch\n imgs = batch[0].reshape((-1, 28, 28, 1))\n \n # Add random noise to the input images\n noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)\n # Clip the images to be between 0 and 1\n noisy_imgs = np.clip(noisy_imgs, 0., 1.)\n \n # Noisy images as inputs, original images as targets\n batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,\n targets_: imgs})\n\n print(\"Epoch: {}/{}...\".format(e+1, epochs),\n \"Training loss: {:.4f}\".format(batch_cost))",
"Checking out the performance\nHere I'm adding noise to the test images and passing them through the autoencoder. It does a suprising great job of removing the noise, even though it's sometimes difficult to tell what the original number is.",
"fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))\nin_imgs = mnist.test.images[:10]\nnoisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape)\nnoisy_imgs = np.clip(noisy_imgs, 0., 1.)\n\nreconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})\n\nfor images, row in zip([noisy_imgs, reconstructed], axes):\n for img, ax in zip(images, row):\n ax.imshow(img.reshape((28, 28)), cmap='Greys_r')\n ax.get_xaxis().set_visible(False)\n ax.get_yaxis().set_visible(False)\n\nfig.tight_layout(pad=0.1)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
gsentveld/lunch_and_learn
|
notebooks/Get_zip_files.ipynb
|
mit
|
[
"Using environment variables saved in a .env file\n<code>dotenv</code> is a package that finds and reads a .env file and then creates environment variables from the contents.",
"import os\nfrom dotenv import load_dotenv, find_dotenv\n\n# find .env automagically by walking up directories until it's found\ndotenv_path = find_dotenv()\n\n# load up the entries as environment variables\nload_dotenv(dotenv_path)",
"Once the .env file has been loaded you can use the environment variables and the <code>dotenv_path</code>",
"# Get the project folders that we are interested in\nPROJECT_DIR = os.path.dirname(dotenv_path)\nEXTERNAL_DATA_DIR = PROJECT_DIR + os.environ.get(\"EXTERNAL_DATA_DIR\")\n\n# Get the base URL of the data\nBASE_URL = os.environ.get(\"BASE_URL\")\n\n# Get the list of filenames\nfiles=os.environ.get(\"FILES\").split()\n\nprint(\"Project directory is : {0}\".format(PROJECT_DIR))\nprint(\"External directory is : {0}\".format(EXTERNAL_DATA_DIR))\nprint(\"Source data URL is : {0}\".format(BASE_URL))\nprint(\"Base names of files : {0}\".format(\" \".join(files)))",
"Getting the files is done with <code>urllib.request</code>",
"import urllib.request\n\nfor file in files:\n \n # transform the base filename into a URL and a target filename\n url=BASE_URL + file + '.zip'\n targetfile=EXTERNAL_DATA_DIR + '/' + file + '.zip'\n\n # By specifying the end=\" ... \" parameter to the print function, you surpress the newline.\n # You can also specify end=\"\" to have subsequent print statements connect\n print('Get {0}'.format(url))\n \n urllib.request.urlretrieve(url, targetfile)\n\nprint('Done.')",
"Back to Agenda"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
tensorflow/docs-l10n
|
site/zh-cn/quantum/tutorials/gradients.ipynb
|
apache-2.0
|
[
"Copyright 2020 The TensorFlow Authors.",
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"计算梯度\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td> <a target=\"_blank\" href=\"https://tensorflow.google.cn/quantum/tutorials/gradients\"><img src=\"https://tensorflow.google.cn/images/tf_logo_32px.png\">在 TensorFlow.org 上查看</a> </td>\n <td> <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/quantum/tutorials/gradients.ipynb\"><img src=\"https://tensorflow.google.cn/images/colab_logo_32px.png\">在 Google Colab 运行</a>\n</td>\n <td> <a target=\"_blank\" href=\"https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/quantum/tutorials/gradients.ipynb\"><img src=\"https://tensorflow.google.cn/images/GitHub-Mark-32px.png\">在 Github 上查看源代码</a> </td>\n <td> <a href=\"https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/quantum/tutorials/gradients.ipynb\"><img src=\"https://tensorflow.google.cn/images/download_logo_32px.png\">下载笔记本</a> </td>\n</table>\n\n本教程探讨适用于量子电路期望值的梯度计算算法。\n计算量子电路中某个可观测对象的期望值的梯度是一个复杂的过程。可观测对象的期望值并不具备总是易于编写的解析梯度公式——这不同于诸如矩阵乘法或向量加法等具备易于编写的解析梯度公式的传统机器学习变换。因此,可以轻松地为不同的场景采用不同的量子梯度计算方法。本教程比较了两种不同的微分方案。\n设置",
"!pip install tensorflow==2.4.1",
"安装 TensorFlow Quantum:",
"!pip install tensorflow-quantum\n\n# Update package resources to account for version changes.\nimport importlib, pkg_resources\nimportlib.reload(pkg_resources)",
"现在,导入 TensorFlow 和模块依赖项:",
"import tensorflow as tf\nimport tensorflow_quantum as tfq\n\nimport cirq\nimport sympy\nimport numpy as np\n\n# visualization tools\n%matplotlib inline\nimport matplotlib.pyplot as plt\nfrom cirq.contrib.svg import SVGCircuit",
"1. 准备工作\n我们来更具体地说明量子电路的梯度计算概念。假设您具有如下所示的参数化电路:",
"qubit = cirq.GridQubit(0, 0)\nmy_circuit = cirq.Circuit(cirq.Y(qubit)**sympy.Symbol('alpha'))\nSVGCircuit(my_circuit)",
"以及可观测对象:",
"pauli_x = cirq.X(qubit)\npauli_x",
"所用算子为 $⟨Y(\\alpha)| X | Y(\\alpha)⟩ = \\sin(\\pi \\alpha)$",
"def my_expectation(op, alpha):\n \"\"\"Compute ⟨Y(alpha)| `op` | Y(alpha)⟩\"\"\"\n params = {'alpha': alpha}\n sim = cirq.Simulator()\n final_state_vector = sim.simulate(my_circuit, params).final_state_vector\n return op.expectation_from_state_vector(final_state_vector, {qubit: 0}).real\n\n\nmy_alpha = 0.3\nprint(\"Expectation=\", my_expectation(pauli_x, my_alpha))\nprint(\"Sin Formula=\", np.sin(np.pi * my_alpha))",
"如果定义 $f_{1}(\\alpha) = ⟨Y(\\alpha)| X | Y(\\alpha)⟩$,则 $f_{1}^{'}(\\alpha) = \\pi \\cos(\\pi \\alpha)$。请参见下例:",
"def my_grad(obs, alpha, eps=0.01):\n grad = 0\n f_x = my_expectation(obs, alpha)\n f_x_prime = my_expectation(obs, alpha + eps)\n return ((f_x_prime - f_x) / eps).real\n\n\nprint('Finite difference:', my_grad(pauli_x, my_alpha))\nprint('Cosine formula: ', np.pi * np.cos(np.pi * my_alpha))",
"2. 对微分器的需求\n对于大型电路,要始终具备可精确计算给定量子电路梯度的公式并不现实。如果简单的公式不足以计算梯度,则可以使用 tfq.differentiators.Differentiator 类来定义用于计算电路梯度的算法。例如,您可以使用以下方法在 TensorFlow Quantum (TFQ) 中重新创建以上示例:",
"expectation_calculation = tfq.layers.Expectation(\n differentiator=tfq.differentiators.ForwardDifference(grid_spacing=0.01))\n\nexpectation_calculation(my_circuit,\n operators=pauli_x,\n symbol_names=['alpha'],\n symbol_values=[[my_alpha]])",
"但是,如果您改为基于采样(在真实设备上进行)估计期望值,则值可能会有所变化。这意味着您的估计方法并不完善:",
"sampled_expectation_calculation = tfq.layers.SampledExpectation(\n differentiator=tfq.differentiators.ForwardDifference(grid_spacing=0.01))\n\nsampled_expectation_calculation(my_circuit,\n operators=pauli_x,\n repetitions=500,\n symbol_names=['alpha'],\n symbol_values=[[my_alpha]])",
"涉及到梯度时,这会迅速加剧造成严重的准确率问题:",
"# Make input_points = [batch_size, 1] array.\ninput_points = np.linspace(0, 5, 200)[:, np.newaxis].astype(np.float32)\nexact_outputs = expectation_calculation(my_circuit,\n operators=pauli_x,\n symbol_names=['alpha'],\n symbol_values=input_points)\nimperfect_outputs = sampled_expectation_calculation(my_circuit,\n operators=pauli_x,\n repetitions=500,\n symbol_names=['alpha'],\n symbol_values=input_points)\nplt.title('Forward Pass Values')\nplt.xlabel('$x$')\nplt.ylabel('$f(x)$')\nplt.plot(input_points, exact_outputs, label='Analytic')\nplt.plot(input_points, imperfect_outputs, label='Sampled')\nplt.legend()\n\n# Gradients are a much different story.\nvalues_tensor = tf.convert_to_tensor(input_points)\n\nwith tf.GradientTape() as g:\n g.watch(values_tensor)\n exact_outputs = expectation_calculation(my_circuit,\n operators=pauli_x,\n symbol_names=['alpha'],\n symbol_values=values_tensor)\nanalytic_finite_diff_gradients = g.gradient(exact_outputs, values_tensor)\n\nwith tf.GradientTape() as g:\n g.watch(values_tensor)\n imperfect_outputs = sampled_expectation_calculation(\n my_circuit,\n operators=pauli_x,\n repetitions=500,\n symbol_names=['alpha'],\n symbol_values=values_tensor)\nsampled_finite_diff_gradients = g.gradient(imperfect_outputs, values_tensor)\n\nplt.title('Gradient Values')\nplt.xlabel('$x$')\nplt.ylabel('$f^{\\'}(x)$')\nplt.plot(input_points, analytic_finite_diff_gradients, label='Analytic')\nplt.plot(input_points, sampled_finite_diff_gradients, label='Sampled')\nplt.legend()",
"在这里可以看到,尽管有限差分公式在解析示例中可以快速计算出梯度本身,但当涉及到基于采样的方法时,却产生了大量噪声。必须使用更细致的技术来确保可以计算出良好的梯度。接下来,您将了解一种速度缓慢而不太适用于解析期望梯度计算的技术,但该技术在基于实际样本的真实示例中却有着出色的表现:",
"# A smarter differentiation scheme.\ngradient_safe_sampled_expectation = tfq.layers.SampledExpectation(\n differentiator=tfq.differentiators.ParameterShift())\n\nwith tf.GradientTape() as g:\n g.watch(values_tensor)\n imperfect_outputs = gradient_safe_sampled_expectation(\n my_circuit,\n operators=pauli_x,\n repetitions=500,\n symbol_names=['alpha'],\n symbol_values=values_tensor)\n\nsampled_param_shift_gradients = g.gradient(imperfect_outputs, values_tensor)\n\nplt.title('Gradient Values')\nplt.xlabel('$x$')\nplt.ylabel('$f^{\\'}(x)$')\nplt.plot(input_points, analytic_finite_diff_gradients, label='Analytic')\nplt.plot(input_points, sampled_param_shift_gradients, label='Sampled')\nplt.legend()",
"从上面可以看到,某些微分器最好用于特定的研究场景。通常,在更为“真实”的环境下测试或实现算法时,基于样本的较慢方法在面对设备噪声等问题时鲁棒性更佳,因此是理想的微分器。诸如有限差分之类的较快方法非常适合面向解析计算且需要更高吞吐量的场景,但尚未考虑算法在实际设备上是否可行。\n3. 多个可观测对象\n我们来引入一个额外的可观测对象,借此了解 TensorFlow Quantum 对单个电路的多个可观测对象的支持情况。",
"pauli_z = cirq.Z(qubit)\npauli_z",
"如果此可观测对象同样用于之前的电路,则 $f_{2}(\\alpha) = ⟨Y(\\alpha)| Z | Y(\\alpha)⟩ = \\cos(\\pi \\alpha)$ 且 $f_{2}^{'}(\\alpha) = -\\pi \\sin(\\pi \\alpha)$。快速检查:",
"test_value = 0.\n\nprint('Finite difference:', my_grad(pauli_z, test_value))\nprint('Sin formula: ', -np.pi * np.sin(np.pi * test_value))",
"结果匹配(足够接近)。\n现在,如果定义 $g(\\alpha) = f_{1}(\\alpha) + f_{2}(\\alpha)$,则 $g'(\\alpha) = f_{1}^{'}(\\alpha) + f^{'}_{2}(\\alpha)$。在 TensorFlow Quantum 中为电路定义多个可观测对象,相当于向 $g$ 添加更多项。\n这意味着,电路中特定符号的梯度等于该符号应用于该电路的每个可观测对象的相应梯度之和。这与 TensorFlow 梯度计算和反向传播(将所有可观测对象的梯度总和作为特定符号的梯度)相兼容。",
"sum_of_outputs = tfq.layers.Expectation(\n differentiator=tfq.differentiators.ForwardDifference(grid_spacing=0.01))\n\nsum_of_outputs(my_circuit,\n operators=[pauli_x, pauli_z],\n symbol_names=['alpha'],\n symbol_values=[[test_value]])",
"在这里可以看到,第一个条目是相对于 Pauli X 的期望,第二个条目是相对于 Pauli Z 的期望。现在,梯度计算方法如下:",
"test_value_tensor = tf.convert_to_tensor([[test_value]])\n\nwith tf.GradientTape() as g:\n g.watch(test_value_tensor)\n outputs = sum_of_outputs(my_circuit,\n operators=[pauli_x, pauli_z],\n symbol_names=['alpha'],\n symbol_values=test_value_tensor)\n\nsum_of_gradients = g.gradient(outputs, test_value_tensor)\n\nprint(my_grad(pauli_x, test_value) + my_grad(pauli_z, test_value))\nprint(sum_of_gradients.numpy())",
"现在,您已验证每个可观测对象的梯度之和即为 $\\alpha$ 的梯度。所有 TensorFlow Quantum 微分器均支持此行为,且此行为在与其余 TensorFlow 的兼容性方面起着至关重要的作用。\n4. 高级用法\nTensorFlow Quantum 子类 tfq.differentiators.Differentiator 中存在的所有微分器。要实现微分器,用户必须实现两个接口之一。标准是实现 get_gradient_circuits ,它告诉基类要测量哪些电路以获得梯度估计值。或者,也可以重载 differentiate_analytic 和differentiate_sampled;类 tfq.differentiators.Adjoint 就采用这种方式。\n下面使用 TensorFlow Quantum 实现一个电路的梯度。您将使用一个参数转移的小示例。\n回想上文定义的电路,$|\\alpha⟩ = Y^{\\alpha}|0⟩$。和之前一样,可以定义一个函数作为该电路对 $X$ 可观测对象的期望值,$f(\\alpha) = ⟨\\alpha|X|\\alpha⟩$。对于该电路使用参数转移规则,您可以发现导数是 $$\\frac{\\partial}{\\partial \\alpha} f(\\alpha) = \\frac{\\pi}{2} f\\left(\\alpha + \\frac{1}{2}\\right) - \\frac{ \\pi}{2} f\\left(\\alpha - \\frac{1}{2}\\right)$$。get_gradient_circuits 函数返回该导数的分量。",
"class MyDifferentiator(tfq.differentiators.Differentiator):\n \"\"\"A Toy differentiator for <Y^alpha | X |Y^alpha>.\"\"\"\n\n def __init__(self):\n pass\n\n def get_gradient_circuits(self, programs, symbol_names, symbol_values):\n \"\"\"Return circuits to compute gradients for given forward pass circuits.\n \n Every gradient on a quantum computer can be computed via measurements\n of transformed quantum circuits. Here, you implement a custom gradient\n for a specific circuit. For a real differentiator, you will need to\n implement this function in a more general way. See the differentiator\n implementations in the TFQ library for examples.\n \"\"\"\n\n # The two terms in the derivative are the same circuit...\n batch_programs = tf.stack([programs, programs], axis=1)\n\n # ... with shifted parameter values.\n shift = tf.constant(1/2)\n forward = symbol_values + shift\n backward = symbol_values - shift\n batch_symbol_values = tf.stack([forward, backward], axis=1)\n \n # Weights are the coefficients of the terms in the derivative.\n num_program_copies = tf.shape(batch_programs)[0]\n batch_weights = tf.tile(tf.constant([[[np.pi/2, -np.pi/2]]]),\n [num_program_copies, 1, 1])\n\n # The index map simply says which weights go with which circuits.\n batch_mapper = tf.tile(\n tf.constant([[[0, 1]]]), [num_program_copies, 1, 1])\n\n return (batch_programs, symbol_names, batch_symbol_values,\n batch_weights, batch_mapper)",
"Differentiator 基类使用从 get_gradient_circuits 返回的分量来计算导数,如上面的参数转移公式所示。现在,这个新的微分器可以与现有 tfq.layer 对象一起使用:",
"custom_dif = MyDifferentiator()\ncustom_grad_expectation = tfq.layers.Expectation(differentiator=custom_dif)\n\n# Now let's get the gradients with finite diff.\nwith tf.GradientTape() as g:\n g.watch(values_tensor)\n exact_outputs = expectation_calculation(my_circuit,\n operators=[pauli_x],\n symbol_names=['alpha'],\n symbol_values=values_tensor)\n\nanalytic_finite_diff_gradients = g.gradient(exact_outputs, values_tensor)\n\n# Now let's get the gradients with custom diff.\nwith tf.GradientTape() as g:\n g.watch(values_tensor)\n my_outputs = custom_grad_expectation(my_circuit,\n operators=[pauli_x],\n symbol_names=['alpha'],\n symbol_values=values_tensor)\n\nmy_gradients = g.gradient(my_outputs, values_tensor)\n\nplt.subplot(1, 2, 1)\nplt.title('Exact Gradient')\nplt.plot(input_points, analytic_finite_diff_gradients.numpy())\nplt.xlabel('x')\nplt.ylabel('f(x)')\nplt.subplot(1, 2, 2)\nplt.title('My Gradient')\nplt.plot(input_points, my_gradients.numpy())\nplt.xlabel('x')",
"现在,可以使用这个新的微分器来生成可微运算。\n要点:如果微分器之前已附加到一个运算,那么在附加到新的运算之前,必须先进行刷新,因为一个微分器一次只能附加到一个运算。",
"# Create a noisy sample based expectation op.\nexpectation_sampled = tfq.get_sampled_expectation_op(\n cirq.DensityMatrixSimulator(noise=cirq.depolarize(0.01)))\n\n# Make it differentiable with your differentiator:\n# Remember to refresh the differentiator before attaching the new op\ncustom_dif.refresh()\ndifferentiable_op = custom_dif.generate_differentiable_op(\n sampled_op=expectation_sampled)\n\n# Prep op inputs.\ncircuit_tensor = tfq.convert_to_tensor([my_circuit])\nop_tensor = tfq.convert_to_tensor([[pauli_x]])\nsingle_value = tf.convert_to_tensor([[my_alpha]])\nnum_samples_tensor = tf.convert_to_tensor([[5000]])\n\nwith tf.GradientTape() as g:\n g.watch(single_value)\n forward_output = differentiable_op(circuit_tensor, ['alpha'], single_value,\n op_tensor, num_samples_tensor)\n\nmy_gradients = g.gradient(forward_output, single_value)\n\nprint('---TFQ---')\nprint('Foward: ', forward_output.numpy())\nprint('Gradient:', my_gradients.numpy())\nprint('---Original---')\nprint('Forward: ', my_expectation(pauli_x, my_alpha))\nprint('Gradient:', my_grad(pauli_x, my_alpha))",
"成功:现在,您可以使用 TensorFlow Quantum 提供的所有微分器,以及定义自己的微分器了。"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
anandha2017/udacity
|
nd101 Deep Learning Nanodegree Foundation/notebooks/1 - playing with jupyter/mandelbrot_numbapro.ipynb
|
mit
|
[
"A NumbaPro Mandelbrot Example\nThis notebook was written by Mark Harris based on code examples from Continuum Analytics that I modified somewhat. This is an example that demonstrates accelerating a Mandelbrot fractal computation using \"CUDA Python\" with NumbaPro.\nLet's start with a basic Python Mandelbrot set. We use a numpy array for the image and display it using pylab imshow.",
"%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n\nimport numpy as np\nfrom pylab import imshow, show\nfrom timeit import default_timer as timer",
"The mandel function performs the Mandelbrot set calculation for a given (x,y) position on the imaginary plane. It returns the number of iterations before the computation \"escapes\".",
"def mandel(x, y, max_iters):\n \"\"\"\n Given the real and imaginary parts of a complex number,\n determine if it is a candidate for membership in the Mandelbrot\n set given a fixed number of iterations.\n \"\"\"\n c = complex(x, y)\n z = 0.0j\n for i in range(max_iters):\n z = z*z + c\n if (z.real*z.real + z.imag*z.imag) >= 4:\n return i\n\n return max_iters",
"create_fractal iterates over all the pixels in the image, computing the complex coordinates from the pixel coordinates, and calls the mandel function at each pixel. The return value of mandel is used to color the pixel.",
"def create_fractal(min_x, max_x, min_y, max_y, image, iters):\n height = image.shape[0]\n width = image.shape[1]\n\n pixel_size_x = (max_x - min_x) / width\n pixel_size_y = (max_y - min_y) / height\n \n for x in range(width):\n real = min_x + x * pixel_size_x\n for y in range(height):\n imag = min_y + y * pixel_size_y\n color = mandel(real, imag, iters)\n image[y, x] = color",
"Next we create a 1024x1024 pixel image as a numpy array of bytes. We then call create_fractal with appropriate coordinates to fit the whole mandelbrot set.",
"image = np.zeros((1024, 1536), dtype = np.uint8)\nstart = timer()\ncreate_fractal(-2.0, 1.0, -1.0, 1.0, image, 20) \ndt = timer() - start\n\nprint (\"Mandelbrot created in %f s\" % dt)\nimshow(image)\nshow()",
"You can play with the coordinates to zoom in on different regions in the fractal.",
"create_fractal(-2.0, -1.7, -0.1, 0.1, image, 20) \nimshow(image)\nshow()",
"Faster Execution with Numba\nNumba is a Numpy-aware dynamic Python compiler based on the popular LLVM compiler infrastructure. \nNumba is an Open Source NumPy-aware optimizing compiler for Python sponsored by Continuum Analytics, Inc. It uses the remarkable compiler infrastructure to compile Python syntax to machine code. It is aware of NumPy arrays as typed memory regions and so can speed-up code using NumPy arrays, such as our Mandelbrot functions.\nThe simplest way to use Numba is to decorate the functions you want to compile with @autojit. Numba will compile them for the CPU (if it can resolve the types used).",
"from numba import autojit\n\n@autojit\ndef mandel(x, y, max_iters):\n \"\"\"\n Given the real and imaginary parts of a complex number,\n determine if it is a candidate for membership in the Mandelbrot\n set given a fixed number of iterations.\n \"\"\"\n c = complex(x, y)\n z = 0.0j\n for i in range(max_iters):\n z = z*z + c\n if (z.real*z.real + z.imag*z.imag) >= 4:\n return i\n\n return max_iters\n\n@autojit\ndef create_fractal(min_x, max_x, min_y, max_y, image, iters):\n height = image.shape[0]\n width = image.shape[1]\n\n pixel_size_x = (max_x - min_x) / width\n pixel_size_y = (max_y - min_y) / height\n \n for x in range(width):\n real = min_x + x * pixel_size_x\n for y in range(height):\n imag = min_y + y * pixel_size_y\n color = mandel(real, imag, iters)\n image[y, x] = color",
"Let's run the @autojit code and see if it is faster.",
"image = np.zeros((1024, 1536), dtype = np.uint8)\nstart = timer()\ncreate_fractal(-2.0, 1.0, -1.0, 1.0, image, 20) \ndt = timer() - start\n\nprint (\"Mandelbrot created in %f s\" % dt)\nimshow(image)\nshow()",
"On my desktop computer, the time to compute the 1024x1024 mandelbrot set dropped from 6.92s down to 0.06s. That's a speedup of 115x! The reason this is so much faster is that Numba uses Numpy type information to convert the dynamic Python code into statically compiled machine code, which is many times faster to execute than dynamically typed, interpreted Python code. \nEven Bigger Speedups with CUDA Python\nAnaconda, from Continuum Analytics, is a \"completely free enterprise-ready Python distribution for large-scale data processing, predictive analytics, and scientific computing.\" Anaconda Accelerate is an add-on for Anaconda that includes the NumbaPro Python compiler.\nNumbaPro is an enhanced Numba that targets multi-core CPUs and GPUs directly from simple Python syntax, providing the performance of compiled parallel code with the productivity of the Python language.\nCUDA Python\nIn addition to various types of automatic vectorization and generalized Numpy Ufuncs, NumbaPro also enables developers to access the CUDA parallel programming model using Python syntax. With CUDA Python, you use parallelism explicitly just as in other CUDA languages such as CUDA C and CUDA Fortran. \nLet's write a CUDA version of our Python Mandelbrot set. We need to import cuda from the numbapro module. Then, we need to create a version of the mandel function compiled for the GPU. We can do this without any code duplication by calling cuda.jit on the function, providing it with the return type and the argument types, and specifying device=True to indicate that this is a function that will run on the GPU device.",
"from numbapro import cuda\nfrom numba import *\n\nmandel_gpu = cuda.jit(restype=uint32, argtypes=[f8, f8, uint32], device=True)(mandel)",
"In CUDA, a kernel is a function that runs in parallel using many threads on the device. We can write a kernel version of our mandelbrot function by simply assuming that it will be run by a grid of threads. NumbaPro provides the familiar CUDA threadIdx, blockIdx, blockDim and gridDim intrinsics, as well as a grid() convenience function which evaluates to blockDim * blockIdx + threadIdx.\nOur example juse needs a minor modification to compute a grid-size stride for the x and y ranges, since we will have many threads running in parallel. We just add these three lines:\nstartX, startY = cuda.grid(2)\ngridX = cuda.gridDim.x * cuda.blockDim.x;\ngridY = cuda.gridDim.y * cuda.blockDim.y;\n\nAnd we modify the range in the x loop to use range(startX, width, gridX) (and likewise for the y loop).\nWe decorate the function with @cuda.jit, passing it the type signature of the function. Since kernels cannot have a return value, we do not need the restype argument.",
"@cuda.jit(argtypes=[f8, f8, f8, f8, uint8[:,:], uint32])\ndef mandel_kernel(min_x, max_x, min_y, max_y, image, iters):\n height = image.shape[0]\n width = image.shape[1]\n\n pixel_size_x = (max_x - min_x) / width\n pixel_size_y = (max_y - min_y) / height\n\n startX, startY = cuda.grid(2)\n gridX = cuda.gridDim.x * cuda.blockDim.x;\n gridY = cuda.gridDim.y * cuda.blockDim.y;\n\n for x in range(startX, width, gridX):\n real = min_x + x * pixel_size_x\n for y in range(startY, height, gridY):\n imag = min_y + y * pixel_size_y \n image[y, x] = mandel_gpu(real, imag, iters)",
"Device Memory\nCUDA kernels must operate on data allocated on the device. NumbaPro provides the cuda.to_device() function to copy a Numpy array to the GPU. \nd_image = cuda.to_device(image)\n\nThe return value (d_image) is of type DeviceNDArray, which is a subclass of numpy.ndarray, and provides the to_host() function to copy the array back from GPU to CPU memory\nd_image.to_host()\n\nLaunching Kernels\nTo launch a kernel on the GPU, we must configure it, specifying the size of the grid in blocks, and the size of each thread block. For a 2D image calculation like the Mandelbrot set, we use a 2D grid of 2D blocks. We'll use blocks of 32x8 threads, and launch 32x16 of them in a 2D grid so that we have plenty of blocks to occupy all of the multiprocessors on the GPU.\nPutting this all together, we launch the kernel like this.",
"gimage = np.zeros((1024, 1536), dtype = np.uint8)\nblockdim = (32, 8)\ngriddim = (32,16)\n\nstart = timer()\nd_image = cuda.to_device(gimage)\nmandel_kernel[griddim, blockdim](-2.0, 1.0, -1.0, 1.0, d_image, 20) \nd_image.to_host()\ndt = timer() - start\n\nprint (\"Mandelbrot created on GPU in %f s\", % dt)\n\nimshow(gimage)\nshow()",
"You may notice that when you ran the above code, the image was generated almost instantly. On the NVIDIA Tesla K20c GPU installed in my desktop, it ran in 311 milliseconds, which is an additional 19.3x speedup over the @autojit (compiled CPU) code, or a total of over 2000x faster than interpreted Python code."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
hfoffani/deep-learning
|
batch-norm/Batch_Normalization_Exercises.ipynb
|
mit
|
[
"Batch Normalization – Practice\nBatch normalization is most useful when building deep neural networks. To demonstrate this, we'll create a convolutional neural network with 20 convolutional layers, followed by a fully connected layer. We'll use it to classify handwritten digits in the MNIST dataset, which should be familiar to you by now.\nThis is not a good network for classfying MNIST digits. You could create a much simpler network and get better results. However, to give you hands-on experience with batch normalization, we had to make an example that was:\n1. Complicated enough that training would benefit from batch normalization.\n2. Simple enough that it would train quickly, since this is meant to be a short exercise just to give you some practice adding batch normalization.\n3. Simple enough that the architecture would be easy to understand without additional resources.\nThis notebook includes two versions of the network that you can edit. The first uses higher level functions from the tf.layers package. The second is the same network, but uses only lower level functions in the tf.nn package.\n\nBatch Normalization with tf.layers.batch_normalization\nBatch Normalization with tf.nn.batch_normalization\n\nThe following cell loads TensorFlow, downloads the MNIST dataset if necessary, and loads it into an object named mnist. You'll need to run this cell before running anything else in the notebook.",
"import tensorflow as tf\nfrom tensorflow.examples.tutorials.mnist import input_data\nmnist = input_data.read_data_sets(\"MNIST_data/\", one_hot=True, reshape=False)",
"Batch Normalization using tf.layers.batch_normalization<a id=\"example_1\"></a>\nThis version of the network uses tf.layers for almost everything, and expects you to implement batch normalization using tf.layers.batch_normalization \nWe'll use the following function to create fully connected layers in our network. We'll create them with the specified number of neurons and a ReLU activation function.\nThis version of the function does not include batch normalization.",
"\"\"\"\nDO NOT MODIFY THIS CELL\n\"\"\"\ndef fully_connected(prev_layer, num_units):\n \"\"\"\n Create a fully connectd layer with the given layer as input and the given number of neurons.\n \n :param prev_layer: Tensor\n The Tensor that acts as input into this layer\n :param num_units: int\n The size of the layer. That is, the number of units, nodes, or neurons.\n :returns Tensor\n A new fully connected layer\n \"\"\"\n layer = tf.layers.dense(prev_layer, num_units, activation=tf.nn.relu)\n return layer",
"We'll use the following function to create convolutional layers in our network. They are very basic: we're always using a 3x3 kernel, ReLU activation functions, strides of 1x1 on layers with odd depths, and strides of 2x2 on layers with even depths. We aren't bothering with pooling layers at all in this network.\nThis version of the function does not include batch normalization.",
"\"\"\"\nDO NOT MODIFY THIS CELL\n\"\"\"\ndef conv_layer(prev_layer, layer_depth):\n \"\"\"\n Create a convolutional layer with the given layer as input.\n \n :param prev_layer: Tensor\n The Tensor that acts as input into this layer\n :param layer_depth: int\n We'll set the strides and number of feature maps based on the layer's depth in the network.\n This is *not* a good way to make a CNN, but it helps us create this example with very little code.\n :returns Tensor\n A new convolutional layer\n \"\"\"\n strides = 2 if layer_depth % 3 == 0 else 1\n conv_layer = tf.layers.conv2d(prev_layer, layer_depth*4, 3, strides, 'same', activation=tf.nn.relu)\n return conv_layer",
"Run the following cell, along with the earlier cells (to load the dataset and define the necessary functions). \nThis cell builds the network without batch normalization, then trains it on the MNIST dataset. It displays loss and accuracy data periodically while training.",
"\"\"\"\nDO NOT MODIFY THIS CELL\n\"\"\"\ndef train(num_batches, batch_size, learning_rate):\n # Build placeholders for the input samples and labels \n inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])\n labels = tf.placeholder(tf.float32, [None, 10])\n \n # Feed the inputs into a series of 20 convolutional layers \n layer = inputs\n for layer_i in range(1, 20):\n layer = conv_layer(layer, layer_i)\n\n # Flatten the output from the convolutional layers \n orig_shape = layer.get_shape().as_list()\n layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])\n\n # Add one fully connected layer\n layer = fully_connected(layer, 100)\n\n # Create the output layer with 1 node for each \n logits = tf.layers.dense(layer, 10)\n \n # Define loss and training operations\n model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))\n train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)\n \n # Create operations to test accuracy\n correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))\n accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n \n # Train and test the network\n with tf.Session() as sess:\n sess.run(tf.global_variables_initializer())\n for batch_i in range(num_batches):\n batch_xs, batch_ys = mnist.train.next_batch(batch_size)\n\n # train this batch\n sess.run(train_opt, {inputs: batch_xs, labels: batch_ys})\n \n # Periodically check the validation or training loss and accuracy\n if batch_i % 100 == 0:\n loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,\n labels: mnist.validation.labels})\n print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))\n elif batch_i % 25 == 0:\n loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys})\n print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))\n\n # At the end, score the final accuracy for both the validation and test sets\n acc = sess.run(accuracy, {inputs: mnist.validation.images,\n labels: mnist.validation.labels})\n print('Final validation accuracy: {:>3.5f}'.format(acc))\n acc = sess.run(accuracy, {inputs: mnist.test.images,\n labels: mnist.test.labels})\n print('Final test accuracy: {:>3.5f}'.format(acc))\n \n # Score the first 100 test images individually. This won't work if batch normalization isn't implemented correctly.\n correct = 0\n for i in range(100):\n correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]],\n labels: [mnist.test.labels[i]]})\n\n print(\"Accuracy on 100 samples:\", correct/100)\n\n\nnum_batches = 800\nbatch_size = 64\nlearning_rate = 0.002\n\ntf.reset_default_graph()\nwith tf.Graph().as_default():\n train(num_batches, batch_size, learning_rate)",
"With this many layers, it's going to take a lot of iterations for this network to learn. By the time you're done training these 800 batches, your final test and validation accuracies probably won't be much better than 10%. (It will be different each time, but will most likely be less than 15%.)\nUsing batch normalization, you'll be able to train this same network to over 90% in that same number of batches.\nAdd batch normalization\nWe've copied the previous three cells to get you started. Edit these cells to add batch normalization to the network. For this exercise, you should use tf.layers.batch_normalization to handle most of the math, but you'll need to make a few other changes to your network to integrate batch normalization. You may want to refer back to the lesson notebook to remind yourself of important things, like how your graph operations need to know whether or not you are performing training or inference. \nIf you get stuck, you can check out the Batch_Normalization_Solutions notebook to see how we did things.\nTODO: Modify fully_connected to add batch normalization to the fully connected layers it creates. Feel free to change the function's parameters if it helps.",
"def fully_connected(prev_layer, num_units, is_training):\n \"\"\"\n Create a fully connectd layer with the given layer as input and the given number of neurons.\n \n :param prev_layer: Tensor\n The Tensor that acts as input into this layer\n :param num_units: int\n The size of the layer. That is, the number of units, nodes, or neurons.\n :returns Tensor\n A new fully connected layer\n \"\"\"\n layer = tf.layers.dense(prev_layer, num_units, activation=None, use_bias=False)\n layer = tf.layers.batch_normalization(layer, training=is_training)\n layer = tf.nn.relu(layer)\n return layer",
"TODO: Modify conv_layer to add batch normalization to the convolutional layers it creates. Feel free to change the function's parameters if it helps.",
"def conv_layer(prev_layer, layer_depth, is_training):\n \"\"\"\n Create a convolutional layer with the given layer as input.\n \n :param prev_layer: Tensor\n The Tensor that acts as input into this layer\n :param layer_depth: int\n We'll set the strides and number of feature maps based on the layer's depth in the network.\n This is *not* a good way to make a CNN, but it helps us create this example with very little code.\n :returns Tensor\n A new convolutional layer\n \"\"\"\n strides = 2 if layer_depth % 3 == 0 else 1\n conv_layer = tf.layers.conv2d(prev_layer, layer_depth*4, 3, strides, 'same', activation=None)\n conv_layer = tf.layers.batch_normalization(conv_layer, training=is_training)\n conv_layer = tf.nn.relu(conv_layer)\n return conv_layer",
"TODO: Edit the train function to support batch normalization. You'll need to make sure the network knows whether or not it is training, and you'll need to make sure it updates and uses its population statistics correctly.",
"def train(num_batches, batch_size, learning_rate):\n # Build placeholders for the input samples and labels \n inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])\n labels = tf.placeholder(tf.float32, [None, 10])\n \n is_training = tf.placeholder(tf.bool)\n \n # Feed the inputs into a series of 20 convolutional layers \n layer = inputs\n for layer_i in range(1, 20):\n layer = conv_layer(layer, layer_i, is_training)\n\n # Flatten the output from the convolutional layers \n orig_shape = layer.get_shape().as_list()\n layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])\n\n # Add one fully connected layer\n layer = fully_connected(layer, 100, is_training)\n\n # Create the output layer with 1 node for each \n logits = tf.layers.dense(layer, 10)\n \n # Define loss and training operations\n model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))\n with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):\n train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)\n \n # Create operations to test accuracy\n correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))\n accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n \n # Train and test the network\n with tf.Session() as sess:\n sess.run(tf.global_variables_initializer())\n for batch_i in range(num_batches):\n batch_xs, batch_ys = mnist.train.next_batch(batch_size)\n\n # train this batch\n sess.run(train_opt, {inputs: batch_xs, labels: batch_ys, is_training:True})\n \n # Periodically check the validation or training loss and accuracy\n if batch_i % 100 == 0:\n loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,\n labels: mnist.validation.labels,\n is_training: False})\n print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))\n elif batch_i % 25 == 0:\n loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys, is_training:False})\n print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))\n\n # At the end, score the final accuracy for both the validation and test sets\n acc = sess.run(accuracy, {inputs: mnist.validation.images,\n labels: mnist.validation.labels,\n is_training:False})\n print('Final validation accuracy: {:>3.5f}'.format(acc))\n acc = sess.run(accuracy, {inputs: mnist.test.images,\n labels: mnist.test.labels,\n is_training:False})\n print('Final test accuracy: {:>3.5f}'.format(acc))\n \n # Score the first 100 test images individually. This won't work if batch normalization isn't implemented correctly.\n correct = 0\n for i in range(100):\n correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]],\n labels: [mnist.test.labels[i]],\n is_training:False})\n\n print(\"Accuracy on 100 samples:\", correct/100)\n\n\nnum_batches = 800\nbatch_size = 64\nlearning_rate = 0.002\n\ntf.reset_default_graph()\nwith tf.Graph().as_default():\n train(num_batches, batch_size, learning_rate)",
"With batch normalization, you should now get an accuracy over 90%. Notice also the last line of the output: Accuracy on 100 samples. If this value is low while everything else looks good, that means you did not implement batch normalization correctly. Specifically, it means you either did not calculate the population mean and variance while training, or you are not using those values during inference.\nBatch Normalization using tf.nn.batch_normalization<a id=\"example_2\"></a>\nMost of the time you will be able to use higher level functions exclusively, but sometimes you may want to work at a lower level. For example, if you ever want to implement a new feature – something new enough that TensorFlow does not already include a high-level implementation of it, like batch normalization in an LSTM – then you may need to know these sorts of things.\nThis version of the network uses tf.nn for almost everything, and expects you to implement batch normalization using tf.nn.batch_normalization.\nOptional TODO: You can run the next three cells before you edit them just to see how the network performs without batch normalization. However, the results should be pretty much the same as you saw with the previous example before you added batch normalization. \nTODO: Modify fully_connected to add batch normalization to the fully connected layers it creates. Feel free to change the function's parameters if it helps.\nNote: For convenience, we continue to use tf.layers.dense for the fully_connected layer. By this point in the class, you should have no problem replacing that with matrix operations between the prev_layer and explicit weights and biases variables.",
"def fully_connected(prev_layer, num_units, is_training):\n \"\"\"\n Create a fully connectd layer with the given layer as input and the given number of neurons.\n \n :param prev_layer: Tensor\n The Tensor that acts as input into this layer\n :param num_units: int\n The size of the layer. That is, the number of units, nodes, or neurons.\n :returns Tensor\n A new fully connected layer\n \"\"\"\n \n layer = tf.layers.dense(prev_layer, num_units, activation=None, use_bias=False)\n \n gamma = tf.Variable(tf.ones([num_units]))\n beta = tf.Variable(tf.zeros([num_units]))\n pop_mean = tf.Variable(tf.zeros([num_units]), trainable=False)\n pop_variance = tf.Variable(tf.ones([num_units]), trainable=False)\n epsilon = 1e-3\n \n def b_training():\n batch_mean, batch_variance = tf.nn.moments(layer, [0])\n decay = 0.99\n train_mean = tf.assign(pop_mean, pop_mean * decay + batch_mean * (1 - decay))\n train_variance = tf.assign(pop_variance, pop_variance * decay + batch_variance * (1 - decay))\n with tf.control_dependencies([train_mean, train_variance]):\n return tf.nn.batch_normalization(layer, batch_mean, batch_variance, beta, gamma, epsilon)\n \n def b_infering():\n return tf.nn.batch_normalization(layer, pop_mean, pop_variance, beta, gamma, epsilon)\n\n bnorm_layer = tf.cond(is_training, b_training, b_infering)\n return tf.nn.relu(bnorm_layer)\n",
"TODO: Modify conv_layer to add batch normalization to the fully connected layers it creates. Feel free to change the function's parameters if it helps.\nNote: Unlike in the previous example that used tf.layers, adding batch normalization to these convolutional layers does require some slight differences to what you did in fully_connected.",
"def conv_layer(prev_layer, layer_depth, is_training):\n \"\"\"\n Create a convolutional layer with the given layer as input.\n \n :param prev_layer: Tensor\n The Tensor that acts as input into this layer\n :param layer_depth: int\n We'll set the strides and number of feature maps based on the layer's depth in the network.\n This is *not* a good way to make a CNN, but it helps us create this example with very little code.\n :returns Tensor\n A new convolutional layer\n \"\"\"\n strides = 2 if layer_depth % 3 == 0 else 1\n\n in_channels = prev_layer.get_shape().as_list()[3]\n out_channels = layer_depth*4\n \n weights = tf.Variable(\n tf.truncated_normal([3, 3, in_channels, out_channels], stddev=0.05))\n \n layer = tf.nn.conv2d(prev_layer, weights, strides=[1,strides, strides, 1], padding='SAME')\n\n gamma = tf.Variable(tf.ones([out_channels]))\n beta = tf.Variable(tf.zeros([out_channels]))\n pop_mean = tf.Variable(tf.zeros([out_channels]), trainable=False)\n pop_variance = tf.Variable(tf.ones([out_channels]), trainable=False)\n epsilon = 1e-3\n \n def b_training():\n batch_mean, batch_variance = tf.nn.moments(layer, [0,1,2], keep_dims=False)\n decay = 0.99\n train_mean = tf.assign(pop_mean, pop_mean * decay + batch_mean * (1 - decay))\n train_variance = tf.assign(pop_variance, pop_variance * decay + batch_variance * (1 - decay))\n with tf.control_dependencies([train_mean, train_variance]):\n return tf.nn.batch_normalization(layer, batch_mean, batch_variance, beta, gamma, epsilon)\n \n def b_infering():\n return tf.nn.batch_normalization(layer, pop_mean, pop_variance, beta, gamma, epsilon)\n\n bnorm_layer = tf.cond(is_training, b_training, b_infering)\n return tf.nn.relu(bnorm_layer)\n",
"TODO: Edit the train function to support batch normalization. You'll need to make sure the network knows whether or not it is training.",
"def train(num_batches, batch_size, learning_rate):\n # Build placeholders for the input samples and labels \n inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])\n labels = tf.placeholder(tf.float32, [None, 10])\n \n is_training = tf.placeholder(tf.bool)\n\n # Feed the inputs into a series of 20 convolutional layers \n layer = inputs\n for layer_i in range(1, 20):\n layer = conv_layer(layer, layer_i, is_training)\n\n # Flatten the output from the convolutional layers \n orig_shape = layer.get_shape().as_list()\n layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])\n\n # Add one fully connected layer\n layer = fully_connected(layer, 100, is_training)\n\n # Create the output layer with 1 node for each \n logits = tf.layers.dense(layer, 10)\n \n # Define loss and training operations\n model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))\n train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)\n \n # Create operations to test accuracy\n correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))\n accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n \n # Train and test the network\n with tf.Session() as sess:\n sess.run(tf.global_variables_initializer())\n for batch_i in range(num_batches):\n batch_xs, batch_ys = mnist.train.next_batch(batch_size)\n\n # train this batch\n sess.run(train_opt, {inputs: batch_xs, labels: batch_ys,\n is_training:True})\n \n # Periodically check the validation or training loss and accuracy\n if batch_i % 100 == 0:\n loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,\n labels: mnist.validation.labels,\n is_training:False})\n print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))\n elif batch_i % 25 == 0:\n loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys,\n is_training:False})\n print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))\n\n # At the end, score the final accuracy for both the validation and test sets\n acc = sess.run(accuracy, {inputs: mnist.validation.images,\n labels: mnist.validation.labels,\n is_training:False})\n print('Final validation accuracy: {:>3.5f}'.format(acc))\n acc = sess.run(accuracy, {inputs: mnist.test.images,\n labels: mnist.test.labels,\n is_training:False})\n print('Final test accuracy: {:>3.5f}'.format(acc))\n \n # Score the first 100 test images individually. This won't work if batch normalization isn't implemented correctly.\n correct = 0\n for i in range(100):\n correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]],\n labels: [mnist.test.labels[i]],\n is_training:False})\n\n print(\"Accuracy on 100 samples:\", correct/100)\n\n\nnum_batches = 800\nbatch_size = 64\nlearning_rate = 0.002\n\ntf.reset_default_graph()\nwith tf.Graph().as_default():\n train(num_batches, batch_size, learning_rate)",
"Once again, the model with batch normalization should reach an accuracy over 90%. There are plenty of details that can go wrong when implementing at this low level, so if you got it working - great job! If not, do not worry, just look at the Batch_Normalization_Solutions notebook to see what went wrong."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
synthicity/activitysim
|
activitysim/examples/example_estimation/notebooks/23_trip_mode_choice.ipynb
|
agpl-3.0
|
[
"Estimating Trip Mode Choice\nThis notebook illustrates how to re-estimate a single model component for ActivitySim. This process \nincludes running ActivitySim in estimation mode to read household travel survey files and write out\nthe estimation data bundles used in this notebook. To review how to do so, please visit the other\nnotebooks in this directory.\nLoad libraries",
"import os\nimport larch # !conda install larch -c conda-forge # for estimation\nimport pandas as pd",
"We'll work in our test directory, where ActivitySim has saved the estimation data bundles.",
"os.chdir('test')",
"Load data and prep model for estimation",
"modelname = \"trip_mode_choice\"\n\nfrom activitysim.estimation.larch import component_model\nmodel, data = component_model(modelname, return_data=True)",
"Review data loaded from the EDB\nThe next step is to read the EDB, including the coefficients, model settings, utilities specification, and chooser and alternative data.\nCoefficients",
"data.coefficients",
"Utility specification",
"data.spec",
"Chooser data",
"data.chooser_data",
"Estimate\nWith the model setup for estimation, the next step is to estimate the model coefficients. Make sure to use a sufficiently large enough household sample and set of zones to avoid an over-specified model, which does not have a numerically stable likelihood maximizing solution. Larch has a built-in estimation methods including BHHH, and also offers access to more advanced general purpose non-linear optimizers in the scipy package, including SLSQP, which allows for bounds and constraints on parameters. BHHH is the default and typically runs faster, but does not follow constraints on parameters.",
"model.load_data()\nmodel.doctor(repair_ch_av='-')\nmodel.loglike()\n\nmodel.maximize_loglike(method='SLSQP', options={\"maxiter\": 1000})",
"Estimated coefficients",
"model.parameter_summary()",
"Output Estimation Results",
"from activitysim.estimation.larch import update_coefficients\nresult_dir = data.edb_directory/\"estimated\"\nupdate_coefficients(\n model, data, result_dir,\n output_file=f\"{modelname}_coefficients_revised.csv\",\n);",
"Write the model estimation report, including coefficient t-statistic and log likelihood",
"for m in model:\n m.to_xlsx(\n result_dir/f\"{m.title}_{modelname}_model_estimation.xlsx\", \n data_statistics=False,\n )",
"Next Steps\nThe final step is to either manually or automatically copy the *_coefficients_revised.csv file to the configs folder, rename it to *_coefficients.csv, and run ActivitySim in simulation mode.",
"pd.read_csv(result_dir/f\"{modelname}_coefficients_revised.csv\")"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
miykael/nipype_tutorial
|
notebooks/basic_data_input.ipynb
|
bsd-3-clause
|
[
"Data Input\nTo do any computation, you need to have data. Getting the data in the framework of a workflow is therefore the first step of every analysis. Nipype provides many different modules to grab or select the data:\nDataFinder\nDataGrabber\nFreeSurferSource\nJSONFileGrabber\nS3DataGrabber\nSSHDataGrabber\nSelectFiles\nXNATSource\n\nThis tutorial will only cover some of them. For the rest, see the section interfaces.io on the official homepage.\nDataset structure\nTo be able to import data, you first need to be aware of the structure of your dataset. The structure of the dataset for this tutorial is according to BIDS, and looks as follows:\nds000114\n├── CHANGES\n├── dataset_description.json\n├── derivatives\n│ ├── fmriprep\n│ │ └── sub01...sub10\n│ │ └── ...\n│ ├── freesurfer\n│ ├── fsaverage\n│ ├── fsaverage5\n│ │ └── sub01...sub10\n│ │ └── ...\n├── dwi.bval\n├── dwi.bvec\n├── sub-01\n│ ├── ses-retest \n│ ├── anat\n│ │ └── sub-01_ses-retest_T1w.nii.gz\n│ ├──func\n│ ├── sub-01_ses-retest_task-covertverbgeneration_bold.nii.gz\n│ ├── sub-01_ses-retest_task-fingerfootlips_bold.nii.gz\n│ ├── sub-01_ses-retest_task-linebisection_bold.nii.gz\n│ ├── sub-01_ses-retest_task-linebisection_events.tsv\n│ ├── sub-01_ses-retest_task-overtverbgeneration_bold.nii.gz\n│ └── sub-01_ses-retest_task-overtwordrepetition_bold.nii.gz\n│ └── dwi\n│ └── sub-01_ses-retest_dwi.nii.gz\n│ ├── ses-test \n│ ├── anat\n│ │ └── sub-01_ses-test_T1w.nii.gz\n│ ├──func\n│ ├── sub-01_ses-test_task-covertverbgeneration_bold.nii.gz\n│ ├── sub-01_ses-test_task-fingerfootlips_bold.nii.gz\n│ ├── sub-01_ses-test_task-linebisection_bold.nii.gz\n│ ├── sub-01_ses-test_task-linebisection_events.tsv\n│ ├── sub-01_ses-test_task-overtverbgeneration_bold.nii.gz\n│ └── sub-01_ses-test_task-overtwordrepetition_bold.nii.gz\n│ └── dwi\n│ └── sub-01_ses-retest_dwi.nii.gz\n├── sub-02..sub-10\n│ └── ...\n├── task-covertverbgeneration_bold.json\n├── task-covertverbgeneration_events.tsv\n├── task-fingerfootlips_bold.json\n├── task-fingerfootlips_events.tsv\n├── task-linebisection_bold.json\n├── task-overtverbgeneration_bold.json\n├── task-overtverbgeneration_events.tsv\n├── task-overtwordrepetition_bold.json\n└── task-overtwordrepetition_events.tsv\n\nDataGrabber\nDataGrabber is an interface for collecting files from hard drive. It is very flexible and supports almost any file organization of your data you can imagine.\nYou can use it as a trivial use case of getting a fixed file. By default, DataGrabber stores its outputs in a field called outfiles.",
"import nipype.interfaces.io as nio\ndatasource1 = nio.DataGrabber()\ndatasource1.inputs.base_directory = '/data/ds000114'\ndatasource1.inputs.template = 'sub-01/ses-test/func/sub-01_ses-test_task-fingerfootlips_bold.nii.gz'\ndatasource1.inputs.sort_filelist = True\nresults = datasource1.run()\nresults.outputs",
"Or you can get at all NIfTI files containing the word 'fingerfootlips' in all directories starting with the letter 's'.",
"import nipype.interfaces.io as nio\ndatasource2 = nio.DataGrabber()\ndatasource2.inputs.base_directory = '/data/ds000114'\ndatasource2.inputs.template = 's*/ses-test/func/*fingerfootlips*.nii.gz'\ndatasource2.inputs.sort_filelist = True\nresults = datasource2.run()\nresults.outputs",
"Two special inputs were used in these previous cases. The input base_directory\nindicates in which directory to search, while the input template indicates the\nstring template to match. So in the previous case DataGrabber is looking for\npath matches of the form /data/ds000114/s*/ses-test/func/*fingerfootlips*.nii.gz.\n<div class=\"alert alert-info\">\n**Note**: When used with wildcards (e.g., `s*` and `*fingerfootlips*` above) `DataGrabber` does not return data in sorted order. In order to force it to return data in a sorted order, one needs to set the input `sorted = True`. However, when explicitly specifying an order as we will see below, `sorted` should be set to `False`.\n</div>\n\nMore use cases arise when the template can be filled by other inputs. In the\nexample below, we define an input field for DataGrabber called subject_id. This is\nthen used to set the template (see %d in the template).",
"datasource3 = nio.DataGrabber(infields=['subject_id'])\ndatasource3.inputs.base_directory = '/data/ds000114'\ndatasource3.inputs.template = 'sub-%02d/ses-test/func/*fingerfootlips*.nii.gz'\ndatasource3.inputs.sort_filelist = True\ndatasource3.inputs.subject_id = [1, 7]\nresults = datasource3.run()\nresults.outputs",
"This will return the functional images from subject 1 and 7 for the task fingerfootlips. We can take this a step further and pair subjects with task.",
"datasource4 = nio.DataGrabber(infields=['subject_id', 'run'])\ndatasource4.inputs.base_directory = '/data/ds000114'\ndatasource4.inputs.template = 'sub-%02d/ses-test/func/*%s*.nii.gz'\ndatasource4.inputs.sort_filelist = True\ndatasource4.inputs.run = ['fingerfootlips', 'linebisection']\ndatasource4.inputs.subject_id = [1, 7]\nresults = datasource4.run()\nresults.outputs",
"This will return the functional image of subject 1, task 'fingerfootlips' and the functional image of subject 7 for the 'linebisection' task.\nA more realistic use-case\nDataGrabber is a generic data grabber module that wraps around glob to select your neuroimaging data in an intelligent way. As an example, let's assume we want to grab the anatomical and functional images of a certain subject.\nFirst, we need to create the DataGrabber node. This node needs to have some input fields for all dynamic parameters (e.g. subject identifier, task identifier), as well as the two desired output fields anat and func.",
"from nipype import DataGrabber, Node\n\n# Create DataGrabber node\ndg = Node(DataGrabber(infields=['subject_id', 'ses_name', 'task_name'],\n outfields=['anat', 'func']),\n name='datagrabber')\n\n# Location of the dataset folder\ndg.inputs.base_directory = '/data/ds000114'\n\n# Necessary default parameters\ndg.inputs.template = '*'\ndg.inputs.sort_filelist = True",
"Second, we know that the two files we desire are the the following location:\nanat = /data/ds000114/sub-01/ses-test/anat/sub-01_ses-test_T1w.nii.gz\nfunc = /data/ds000114/sub-01/ses-test/func/sub-01_ses-test_task-fingerfootlips_bold.nii.gz\n\nWe see that the two files only have three dynamic parameters between subjects and task names:\nsubject_id: in this case 'sub-01'\ntask_name: in this case fingerfootlips\nses_name: test\n\nThis means that we can rewrite the paths as follows:\nanat = /data/ds102/[subject_id]/ses-[ses_name]/anat/sub-[subject_id]_ses-[ses_name]_T1w.nii.gz\nfunc = /data/ds102/[subject_id]/ses-[ses_name]/func/sub-[subject_id]_ses-[ses_name]_task-[task_name]_bold.nii.gz\n\nTherefore, we need the parameters subject_id and ses_name for the anatomical image and the parameters subject_id, ses_name and task_name for the functional image. In the context of DataGabber, this is specified as follows:",
"dg.inputs.template_args = {'anat': [['subject_id', 'ses_name']],\n 'func': [['subject_id', 'ses_name', 'task_name']]}",
"Now, comes the most important part of DataGrabber. We need to specify the template structure to find the specific data. This can be done as follows.",
"dg.inputs.field_template = {'anat': 'sub-%02d/ses-%s/anat/*_T1w.nii.gz',\n 'func': 'sub-%02d/ses-%s/func/*task-%s_bold.nii.gz'}",
"You'll notice that we use %s, %02d and * for placeholders in the data paths. %s is a placeholder for a string and is filled out by task_name or ses_name. %02d is a placeholder for a integer number and is filled out by subject_id. * is used as a wild card, e.g. a placeholder for any possible string combination. This is all to set up the DataGrabber node.\nAbove, two more fields are introduced: field_template and template_args. These fields are both dictionaries whose keys correspond to the outfields keyword. The field_template reflects the search path for each output field, while the template_args reflect the inputs that satisfy the template. The inputs can either be one of the named inputs specified by the infields keyword arg or it can be raw strings or integers corresponding to the template. For the func output, the %s in the field_template is satisfied by subject_id and the %d is filled in by the list of numbers.\nNow it is up to you how you want to feed the dynamic parameters into the node. You can either do this by using another node (e.g. IdentityInterface) and feed subject_id, ses_name and task_name as connections to the DataGrabber node or specify them directly as node inputs.",
"# Using the IdentityInterface\nfrom nipype import IdentityInterface\ninfosource = Node(IdentityInterface(fields=['subject_id', 'task_name']),\n name=\"infosource\")\ninfosource.inputs.task_name = \"fingerfootlips\"\ninfosource.inputs.ses_name = \"test\"\nsubject_id_list = [1, 2]\ninfosource.iterables = [('subject_id', subject_id_list)]",
"Now you only have to connect infosource with your DataGrabber and run the workflow to iterate over subjects 1 and 2.\nYou can also provide the inputs to the DataGrabber node directly, for one subject you can do this as follows:",
"# Specifying the input fields of DataGrabber directly\ndg.inputs.subject_id = 1\ndg.inputs.ses_name = \"test\"\ndg.inputs.task_name = \"fingerfootlips\"",
"Now let's run the DataGrabber node and let's look at the output:",
"dg.run().outputs",
"Exercise 1\nGrab T1w images from both sessions - ses-test and ses-retest for sub-01.",
"# write your solution here\n\nfrom nipype import DataGrabber, Node\n\n# Create DataGrabber node\nex1_dg = Node(DataGrabber(infields=['subject_id', 'ses_name'],\n outfields=['anat']),\n name='datagrabber')\n\n# Location of the dataset folder\nex1_dg.inputs.base_directory = '/data/ds000114'\n\n# Necessary default parameters\nex1_dg.inputs.template = '*'\nex1_dg.inputs.sort_filelist = True\n\n# specify the template\nex1_dg.inputs.template_args = {'anat': [['subject_id', 'ses_name']]}\nex1_dg.inputs.field_template = {'anat': 'sub-%02d/ses-%s/anat/*_T1w.nii.gz'}\n\n# specify subject_id and ses_name you're interested in\nex1_dg.inputs.subject_id = 1\nex1_dg.inputs.ses_name = [\"test\", \"retest\"]\n\n# and run the node\nex1_res = ex1_dg.run()\n\n# you can now check the output\nex1_res.outputs",
"SelectFiles\nSelectFiles is a more flexible alternative to DataGrabber. It is built on Python format strings, which are similar to the Python string interpolation feature you are likely already familiar with, but advantageous in several respects. Format strings allow you to replace named sections of template strings set off by curly braces ({}), possibly filtered through a set of functions that control how the values are rendered into the string. As a very basic example, we could write",
"msg = \"This workflow uses {package}.\"",
"and then format it with keyword arguments:",
"print(msg.format(package=\"FSL\"))",
"SelectFiles uses the {}-based string formatting syntax to plug values into string templates and collect the data. These templates can also be combined with glob wild cards. The field names in the formatting template (i.e. the terms in braces) will become inputs fields on the interface, and the keys in the templates dictionary will form the output fields.\nLet's focus again on the data we want to import:\nanat = /data/ds000114/sub-01/ses-test/anat/sub-01_ses-test_T1w.nii.gz\nfunc = /data/ds000114/sub-01/ses-test/func/sub-01_ses-test_task-fingerfootlips_bold.nii.gz\n\nNow, we can replace those paths with the according {}-based strings.\nanat = /data/ds000114/sub-{subject_id}/ses-{ses_name}/anat/sub-{subject_id}_ses-{ses_name}_T1w.nii.gz\nfunc = /data/ds000114/sub-{subject_id}/ses-{ses_name}/func/ \\\n sub-{subject_id}_ses-{ses_name}_task-{task_name}_bold.nii.gz\n\nHow would this look like as a SelectFiles node?",
"from nipype import SelectFiles, Node\n\n# String template with {}-based strings\ntemplates = {'anat': 'sub-{subject_id}/ses-{ses_name}/anat/sub-{subject_id}_ses-{ses_name}_T1w.nii.gz',\n 'func': 'sub-{subject_id}/ses-{ses_name}/func/sub-{subject_id}_ses-{ses_name}_task-{task_name}_bold.nii.gz'}\n\n# Create SelectFiles node\nsf = Node(SelectFiles(templates),\n name='selectfiles')\n\n# Location of the dataset folder\nsf.inputs.base_directory = '/data/ds000114'\n\n# Feed {}-based placeholder strings with values\nsf.inputs.subject_id = '01'\nsf.inputs.ses_name = \"test\"\nsf.inputs.task_name = 'fingerfootlips'",
"Let's check if we get what we wanted.",
"sf.run().outputs",
"Perfect! But why is SelectFiles more flexible than DataGrabber? First, you perhaps noticed that with the {}-based string, we can reuse the same input (e.g. subject_id) multiple time in the same string, without feeding it multiple times into the template.\nAdditionally, you can also select multiple files without the need of an iterable node. For example, let's assume we want to select anatomical images for all subjects at once. We can do this by using the eildcard * in a template:\n'sub-*/anat/sub-*_T1w.nii.gz'\n\nLet's see how this works:",
"from nipype import SelectFiles, Node\n\n# String template with {}-based strings\ntemplates = {'anat': 'sub-*/ses-{ses_name}/anat/sub-*_ses-{ses_name}_T1w.nii.gz'}\n\n\n# Create SelectFiles node\nsf = Node(SelectFiles(templates),\n name='selectfiles')\n\n# Location of the dataset folder\nsf.inputs.base_directory = '/data/ds000114'\n\n# Feed {}-based placeholder strings with values\nsf.inputs.ses_name = 'test'\n\n# Print SelectFiles output\nsf.run().outputs",
"As you can see, now anat contains ten file paths, T1w images for all ten subject. \nAs a side note, you could also use [] string formatting for some simple cases, e.g. for loading only subject 1 and 2: \n'sub-0[1,2]/ses-test/anat/sub-0[1,2]_ses-test_T1w.nii.gz'\n\nforce_lists\nThere's an additional parameter, force_lists, which controls how SelectFiles behaves in cases where only a single file matches the template. The default behavior is that when a template matches multiple files they are returned as a list, while a single file is returned as a string. There may be situations where you want to force the outputs to always be returned as a list (for example, you are writing a workflow that expects to operate on several runs of data, but some of your subjects only have a single run). In this case, force_lists can be used to tune the outputs of the interface. You can either use a boolean value, which will be applied to every output the interface has, or you can provide a list of the output fields that should be coerced to a list.\nReturning to our previous example, you may want to ensure that the anat files are returned as a list, but you only ever will have a single T1 file. In this case, you would do",
"sf = SelectFiles(templates, force_lists=[\"anat\"])",
"Exercise 2\nUse SelectFile to select again T1w images from both sessions - ses-test and ses-retest for sub-01.",
"# write your solution here\n\nfrom nipype import SelectFiles, Node\n\n# String template with {}-based strings\ntemplates = {'anat': 'sub-01/ses-*/anat/sub-01_ses-*_T1w.nii.gz'}\n \n\n# Create SelectFiles node\nsf = Node(SelectFiles(templates),\n name='selectfiles')\n\n# Location of the dataset folder\nsf.inputs.base_directory = '/data/ds000114'\n\n#sf.inputs.ses_name = \n\nsf.run().outputs",
"FreeSurferSource\nFreeSurferSource is a specific case of a file grabber that facilitates the data import of outputs from the FreeSurfer recon-all algorithm. This, of course, requires that you've already run recon-all on your subject.\nFor the tutorial dataset ds000114, recon-all was already run. So, let's make sure that you have the anatomy output of one subject on your system:",
"!datalad get -r -J 4 -d /data/ds000114 /data/ds000114/derivatives/freesurfer/sub-01",
"Now, before you can run FreeSurferSource, you first have to specify the path to the FreeSurfer output folder, i.e. you have to specify the SUBJECTS_DIR variable. This can be done as follows:",
"from nipype.interfaces.freesurfer import FSCommand\nfrom os.path import abspath as opap\n\n# Path to your freesurfer output folder\nfs_dir = opap('/data/ds000114/derivatives/freesurfer/')\n\n# Set SUBJECTS_DIR\nFSCommand.set_default_subjects_dir(fs_dir)",
"To create the FreeSurferSource node, do as follows:",
"from nipype import Node\nfrom nipype.interfaces.io import FreeSurferSource\n\n# Create FreeSurferSource node\nfssource = Node(FreeSurferSource(subjects_dir=fs_dir),\n name='fssource')",
"Let's now run it for a specific subject.",
"fssource.inputs.subject_id = 'sub-01'\nresult = fssource.run() ",
"Did it work? Let's try to access multiple FreeSurfer outputs:",
"print('aparc_aseg: %s\\n' % result.outputs.aparc_aseg)\nprint('inflated: %s\\n' % result.outputs.inflated)",
"It seems to be working as it should. But as you can see, the inflated output actually contains the file location for both hemispheres. With FreeSurferSource we can also restrict the file selection to a single hemisphere. To do this, we use the hemi input filed:",
"fssource.inputs.hemi = 'lh'\nresult = fssource.run()",
"Let's take a look again at the inflated output.",
"result.outputs.inflated",
"Perfect!"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
Boussau/Notebooks
|
Notebooks/Metropolis MCMC for coin fairness inference.ipynb
|
gpl-2.0
|
[
"import random\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom scipy.stats import bernoulli\nfrom scipy.stats import beta\nimport seaborn as sns\nsns.set()\nsns.set_context(\"talk\")\n%matplotlib inline",
"We have some data providing the results of 30 coin tosses. We would like to estimate how fair the coin is, i.e. what is the probability of getting heads (1).",
"data = [1,0,1,0,0,1,1,1,0,0,1,1,1,0,1,1,1,0,0,1,1,0,1,1,0,1,1,0,1,1]\nprint(len(data))",
"We build a probabilistic model of coin tossing.\nAll coin tosses are supposed to be independent tosses of the same coin, which always have the same probability of returning a head.\nWe want to perform Bayesian inference, therefore we need a prior.\nFor inference, we will be using Metropolis MCMC. \nDefining a prior\nWe need to put some prior probability on the fairness of the coin. For this, a beta distribution seems appropriate, as it is a continuous distribution between 0 and 1.\nLet's display a beta distribution with various parameter values.",
"fig_size=[]\nfig_size.append(15)\nfig_size.append(9)\nplt.rcParams[\"figure.figsize\"] = fig_size\n\nbeginning = 0.0001\nend = 0.9999\n\na = 1\nb = 1\nx = np.linspace(beta.ppf(beginning, a, b), beta.ppf(end, a, b), 100)\nplt.plot(x, beta.pdf(x, a, b),'r-', lw=5, alpha=0.6, label='Beta pdf, a=1, b=1')\na = 2\nb = 2\nx2 = np.linspace(beta.ppf(beginning, a, b), beta.ppf(end, a, b), 100)\nplt.plot(x2, beta.pdf(x2, a, b),'b-', lw=5, alpha=0.6, label='Beta pdf, a=2, b=2')\na = 0.8\nb = 0.8\nx3 = np.linspace(beta.ppf(beginning, a, b), beta.ppf(end, a, b), 100)\nplt.plot(x3, beta.pdf(x3, a, b),'g-', lw=5, alpha=0.6, label='Beta pdf, a=0.8, b=0.8')\na = 2\nb = 0.8\nx4 = np.linspace(beta.ppf(beginning, a, b), beta.ppf(end, a, b), 100)\nplt.plot(x4, beta.pdf(x4, a, b),'p-', lw=5, alpha=0.6, label='Beta pdf, a=2, b=0.8')\nplt.legend(loc='best', frameon=False)\nplt.xlabel(\"Parameter value\")\nplt.ylabel(\"Density/Frequency\")\n",
"We choose to use a=2, b=2. This is a weakly informative prior that the coin should be fair, given our past experience with coins.\nInference of the fairness of the coin using MCMC\nWe build a MCMC chain to estimate the probability of heads for this coin.\nFirst we define the model, with the prior, the likelihood and the posterior probability, then we implement a Metropolis MCMC inference mechanism.\nBuilding of the model",
"# Function to compute the likelihood P(D|M)\ndef likelihood (data, parameter):\n p = 1.0\n for d in data:\n if d == 0:\n p *= 1-parameter\n else:\n p *= parameter\n return p\n\n# Function to compute the prior P(M)\ndef prior (parameter):\n return beta.pdf(parameter, a=2, b=2)\n\n# Function to compute the un-normalized posterior P(D|M) * P(M)\ndef unnormalized_posterior (data, parameter):\n return likelihood(data, parameter) * prior(parameter)\n\n",
"Implementing the MCMC algorithm",
"# Function to propose a new parameter value, randomly drawn between 0 and 1\ndef propose_new_parameter_value():\n return random.random()\n\n\n# Function to run Metropolis MCMC inference\ndef MetropolisMCMC(data, number_iterations):\n current_parameter_value = propose_new_parameter_value()\n record_parameter = []\n record_parameter.append(current_parameter_value)\n print(\"Initial parameter value for the MCMC: \"+str(current_parameter_value))\n current_posterior = unnormalized_posterior(data, current_parameter_value)\n print(\"Initial probability of the model: \" + str(current_posterior))\n record_posterior = []\n record_posterior.append(current_posterior)\n for i in range (number_iterations):\n acceptance_threshold = random.random()\n proposed_parameter_value = random.random()\n proposed_posterior = unnormalized_posterior(data, proposed_parameter_value)\n if (proposed_posterior / current_posterior > acceptance_threshold):\n current_parameter_value = proposed_parameter_value\n current_posterior = proposed_posterior\n record_parameter.append(current_parameter_value)\n record_posterior.append(current_posterior)\n return record_parameter, record_posterior\n\n\nparams, posteriors = MetropolisMCMC(data, 10000)\n\nplt.plot(posteriors)\nplt.xlabel(\"Iteration\")\nplt.ylabel(\"Posterior probability\")\n\n\nplt.plot(params)\nplt.xlabel(\"Iteration\")\nplt.ylabel(\"Parameter value\")\n",
"Let's compare the posterior inference to the prior: have we learned anything about our coin?",
"plt.rcParams[\"figure.figsize\"] = fig_size\n\na = 2\nb = 2\nx2 = np.linspace(beta.ppf(beginning, a, b), beta.ppf(end, a, b), 100)\nplt.plot(x2, beta.pdf(x2, a, b),'b-', lw=5, alpha=0.6, label='PRIOR: Beta pdf, a=2, b=2')\nplt.hist(params, label='POSTERIOR', density=True, bins=200, color=\"lightgreen\")\nplt.legend(loc='best', frameon=False)\nplt.xlabel(\"Parameter value\")\nplt.ylabel(\"Density/Frequency\")\nsns.kdeplot(np.array(params), bw=0.03, lw=5, color=\"green\", shade=True)\n",
"What is the probability that the coin favours heads over tails?\nLet's compute P(parameter > 0.5).",
"num = 0\nfor i in range(len(params)):\n if params[i]>0.5:\n num += 1\nprint(\"The probability that the coin favours heads is \"+str(num / len(params)) + \" vs \"+str(1-num / len(params)) + \" that it favours tails.\")",
"Our median estimate for the parameter is:",
"median_param=np.median(params)\nprint(str(median_param))",
"Compared to the Maximum Likelihood estimate, the frequency of heads:",
"ratio_1=sum(data)/len(data)\nprint(str(ratio_1))",
"Conclusion\nThe coin seems to be unfair because it is much more probable that the parameter value is above 0.5 than below. Our estimate of the fairness of the coin has been a little bit affected by our prior, which was weakly informative, as can be seen from the comparison between the median parameter value inferred by MCMC and the Maximum Likelihood inference of the parameter value, the frequency of heads."
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
statsmodels/statsmodels.github.io
|
v0.12.2/examples/notebooks/generated/ols.ipynb
|
bsd-3-clause
|
[
"Ordinary Least Squares",
"%matplotlib inline\n\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\nimport statsmodels.api as sm\nfrom statsmodels.sandbox.regression.predstd import wls_prediction_std\n\nnp.random.seed(9876789)",
"OLS estimation\nArtificial data:",
"nsample = 100\nx = np.linspace(0, 10, 100)\nX = np.column_stack((x, x**2))\nbeta = np.array([1, 0.1, 10])\ne = np.random.normal(size=nsample)",
"Our model needs an intercept so we add a column of 1s:",
"X = sm.add_constant(X)\ny = np.dot(X, beta) + e",
"Fit and summary:",
"model = sm.OLS(y, X)\nresults = model.fit()\nprint(results.summary())",
"Quantities of interest can be extracted directly from the fitted model. Type dir(results) for a full list. Here are some examples:",
"print('Parameters: ', results.params)\nprint('R2: ', results.rsquared)",
"OLS non-linear curve but linear in parameters\nWe simulate artificial data with a non-linear relationship between x and y:",
"nsample = 50\nsig = 0.5\nx = np.linspace(0, 20, nsample)\nX = np.column_stack((x, np.sin(x), (x-5)**2, np.ones(nsample)))\nbeta = [0.5, 0.5, -0.02, 5.]\n\ny_true = np.dot(X, beta)\ny = y_true + sig * np.random.normal(size=nsample)",
"Fit and summary:",
"res = sm.OLS(y, X).fit()\nprint(res.summary())",
"Extract other quantities of interest:",
"print('Parameters: ', res.params)\nprint('Standard errors: ', res.bse)\nprint('Predicted values: ', res.predict())",
"Draw a plot to compare the true relationship to OLS predictions. Confidence intervals around the predictions are built using the wls_prediction_std command.",
"prstd, iv_l, iv_u = wls_prediction_std(res)\n\nfig, ax = plt.subplots(figsize=(8,6))\n\nax.plot(x, y, 'o', label=\"data\")\nax.plot(x, y_true, 'b-', label=\"True\")\nax.plot(x, res.fittedvalues, 'r--.', label=\"OLS\")\nax.plot(x, iv_u, 'r--')\nax.plot(x, iv_l, 'r--')\nax.legend(loc='best');",
"OLS with dummy variables\nWe generate some artificial data. There are 3 groups which will be modelled using dummy variables. Group 0 is the omitted/benchmark category.",
"nsample = 50\ngroups = np.zeros(nsample, int)\ngroups[20:40] = 1\ngroups[40:] = 2\n#dummy = (groups[:,None] == np.unique(groups)).astype(float)\n\ndummy = pd.get_dummies(groups).values\nx = np.linspace(0, 20, nsample)\n# drop reference category\nX = np.column_stack((x, dummy[:,1:]))\nX = sm.add_constant(X, prepend=False)\n\nbeta = [1., 3, -3, 10]\ny_true = np.dot(X, beta)\ne = np.random.normal(size=nsample)\ny = y_true + e",
"Inspect the data:",
"print(X[:5,:])\nprint(y[:5])\nprint(groups)\nprint(dummy[:5,:])",
"Fit and summary:",
"res2 = sm.OLS(y, X).fit()\nprint(res2.summary())",
"Draw a plot to compare the true relationship to OLS predictions:",
"prstd, iv_l, iv_u = wls_prediction_std(res2)\n\nfig, ax = plt.subplots(figsize=(8,6))\n\nax.plot(x, y, 'o', label=\"Data\")\nax.plot(x, y_true, 'b-', label=\"True\")\nax.plot(x, res2.fittedvalues, 'r--.', label=\"Predicted\")\nax.plot(x, iv_u, 'r--')\nax.plot(x, iv_l, 'r--')\nlegend = ax.legend(loc=\"best\")",
"Joint hypothesis test\nF test\nWe want to test the hypothesis that both coefficients on the dummy variables are equal to zero, that is, $R \\times \\beta = 0$. An F test leads us to strongly reject the null hypothesis of identical constant in the 3 groups:",
"R = [[0, 1, 0, 0], [0, 0, 1, 0]]\nprint(np.array(R))\nprint(res2.f_test(R))",
"You can also use formula-like syntax to test hypotheses",
"print(res2.f_test(\"x2 = x3 = 0\"))",
"Small group effects\nIf we generate artificial data with smaller group effects, the T test can no longer reject the Null hypothesis:",
"beta = [1., 0.3, -0.0, 10]\ny_true = np.dot(X, beta)\ny = y_true + np.random.normal(size=nsample)\n\nres3 = sm.OLS(y, X).fit()\n\nprint(res3.f_test(R))\n\nprint(res3.f_test(\"x2 = x3 = 0\"))",
"Multicollinearity\nThe Longley dataset is well known to have high multicollinearity. That is, the exogenous predictors are highly correlated. This is problematic because it can affect the stability of our coefficient estimates as we make minor changes to model specification.",
"from statsmodels.datasets.longley import load_pandas\ny = load_pandas().endog\nX = load_pandas().exog\nX = sm.add_constant(X)",
"Fit and summary:",
"ols_model = sm.OLS(y, X)\nols_results = ols_model.fit()\nprint(ols_results.summary())",
"Condition number\nOne way to assess multicollinearity is to compute the condition number. Values over 20 are worrisome (see Greene 4.9). The first step is to normalize the independent variables to have unit length:",
"norm_x = X.values\nfor i, name in enumerate(X):\n if name == \"const\":\n continue\n norm_x[:,i] = X[name]/np.linalg.norm(X[name])\nnorm_xtx = np.dot(norm_x.T,norm_x)",
"Then, we take the square root of the ratio of the biggest to the smallest eigen values.",
"eigs = np.linalg.eigvals(norm_xtx)\ncondition_number = np.sqrt(eigs.max() / eigs.min())\nprint(condition_number)",
"Dropping an observation\nGreene also points out that dropping a single observation can have a dramatic effect on the coefficient estimates:",
"ols_results2 = sm.OLS(y.iloc[:14], X.iloc[:14]).fit()\nprint(\"Percentage change %4.2f%%\\n\"*7 % tuple([i for i in (ols_results2.params - ols_results.params)/ols_results.params*100]))",
"We can also look at formal statistics for this such as the DFBETAS -- a standardized measure of how much each coefficient changes when that observation is left out.",
"infl = ols_results.get_influence()",
"In general we may consider DBETAS in absolute value greater than $2/\\sqrt{N}$ to be influential observations",
"2./len(X)**.5\n\nprint(infl.summary_frame().filter(regex=\"dfb\"))"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ldiary/marigoso
|
notebooks/handling_select2_controls_in_selenium_webdriver.ipynb
|
mit
|
[
"Handling Select2 Controls in Selenium WebDriver\nSelect2 is a jQuery based replacement for select boxes. This article will demonstrate how Selenium webdriver can handle Select2 by manipulating the first such selection box in the Examples page of Select2.\nCreating an instance of Selenium webdriver equipped with Firefox Extensions\nFirebug and FirePath are very helpful Firefox extensions that I want to use in this demonstration, so I will make Selenium launch a Firefox browser equipped with these extensions.",
"import os\nfrom marigoso import Test\nrequest = {\n 'firefox': {\n 'capabilities': {\n 'marionette': False,\n },\n }\n}",
"Note that in order for the extensions to be installed in the browser, you need to either specify an extension enabled Firefox profile to Selenium or you specify the location and name of Firefox extensions you want to install. In the above example, I have Firebug and FirePath files stored in 'tools\\firefox' folder so I can just specify the location and filenames of the extensions.\nNavigate to Select2 Examples page",
"browser.get_url('https://select2.github.io/')\nbrowser.press(\"Examples\")",
"Identify the locator for the Selection Box\nRight click on the first Select2 box and select 'Inspect Element with Firebug'\n\nFirebug will then display and highlight the HTML source of the Selection Box as well as highlight the control itself if you hover your mouse to the HTML source.\n\nWe now have the task of figuring out what locator we can use to locate this Selection Box. The Selection Box is a 'span' element with an id=\"select2-jnw9-container\", we can surely make use of this id attribute. However, it appears that this id is randomly generated so I made a slight modification to make sure my locator will still work even if the page is refreshed.\nVerify the adopted locator works\nIn the Firebug window, click on 'FirePath' tab. Click on the dropdown before the input box and select 'CSS:'. Then enter \"[id^='select2']\" in the input box and press Enter key.\n\nFirebug will now display the same thing as before, but notice now that at the lower left part of Firebug window it says '17 matching nodes'. This means we have 17 such Selection Box that can be located using my chosen selector. However, this time we are only interested on the first Selection Box, so I think my chosen selector is still useful.\nThe ultimate way to verify that the locator works is to feed it to Selenium and run it. So we execute the following command.",
"browser.press(\"css=[id^='select2']\" )",
"If the Selection Dropdown appears upon executing the above command, then we are on the right track. You can run the above command several times to confirm the closing and opening of the selection dropdown.\nIdentify the locator for the Selection Dropdown\nWe now need to identify the locator for the Selection Dropdown. We do this by clicking back on the 'HTML' tab in the Firebug window and observing that when you manually click on the Selection Box another 'span' element is dynamically being added at the buttom of the HTML source.\n\nWe can use previous technique of locating the Selection Box above to arrive to a conclusion that the locator for Selection Dropdown could be 'css=span.select2-dropdown > span > ul'. Note that in this case we specifically located until the 'ul' tag element. This is because the options for Select2 are not 'option' tag elements, instead they are 'li' elements of a 'ul' tag.\n\nVerify that both Selection Box and Dropdown works\nAfter all this hardwork of figuring out the best locators for Selection Box and Selection Dropdown, we then test it to see if we can now properly handle Select2. Marigoso offers two syntax for performing the same action.\nselect_text\nWe can use the usual select_text function by just appending the Select Dropdown locator at the end.",
"browser.select_text(\"css=*[id^='select2']\", \"Nevada\", 'css=span.select2-dropdown > span > ul')",
"select2\nWe can also use the select2 function of Marigoso by swapping the order of the Selection Dropdown locator and the value of the text you want to select.",
"browser.select2(\"css=*[id^='select2']\", 'css=span.select2-dropdown > span > ul', \"Hawaii\")",
"Final Solution\nFinally, here again is the summary of the necessary commands used in this demonstration.",
"import os\nfrom marigoso import Test\nrequest = {\n 'firefox': {\n 'extensions_path': os.path.join(os.getcwd(), 'tools', 'firefox'),\n 'extensions': ['firebug@software.joehewitt.com.xpi', 'FireXPath@pierre.tholence.com.xpi'],\n }\n}\nbrowser = Test(request).launch_browser('Firefox')\nbrowser.get_url('https://select2.github.io/')\nbrowser.press(\"Examples\")\nbrowser.select_text(\"css=*[id^='select2']\", \"Nevada\", 'css=span.select2-dropdown > span > ul')\nbrowser.quit()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ioggstream/python-course
|
connexion-101/notebooks/03-connexion.ipynb
|
agpl-3.0
|
[
"Connexion\nConnexion is a python framework based on Flask.\nIt streamlines the creation of contract-first REST APIs.\nOnce you have your OAS3 spec, connexion uses it to:\n\ndispatch requests\nserve mock responses on unimplemented methods\nvalidate input and output of the called methods\napply authentication policies\nprovide an API Documentation UI (Swagger UI) where we can browse our API.",
"# At first ensure connexion is installed \n# together with the swagger module used to render the OAS3 spec\n# in the web-ui\n!pip install connexion[swagger-ui] connexion",
"Now run the spec in a terminal using\nconnexion run /code/notebooks/oas3/ex-01-info-ok.yaml\nRemember:\n\ndefault port is :5000\nthe Swagger GUI is at the /ui path.",
"# A request on a generic PATH on the server returns a \n# nicely formatted and explicative error.\n# Remember that we haven't already defined an operation.\n!curl http://0.0.0.0:5000 -kv\n\nrender_markdown(f'''\nOpen the [documentation URL]({api_server_url('ui')}) and check the outcome!\n\nPlay a bit with Swagger UI.''')\n",
"Defining endpoints in OAS3\nNow that we have added our metadata, we can provide informations about the endpoints.\nOAS3 allows multiple endpoints because good APIs have many.\nEvery endpoint can start with a prefix path (eg. /datetime/v1).\n```\nOne or more server\nYou can add production, staging and test environments.\nWe\nsandbox instances\nservers:\n - description: |\n An interoperable API has many endpoints.\n One for development...\n url: https://localhost:8443/datetime/v1\n\n\ndescription: \n One for testing in a sandboxed environment. This\n is especially important to avoid clients to \n test in production.\n We are using the custom x-sandbox to identify \n url: https://api.example.com/datetime/v1\n x-sandbox: true\n\n\ndescription: |\n Then we have our production endpoint.\n The custom x-healthCheck parameter\n can be used to declare how to check the API.\n url: https://api.example.com/datetime/v1/status \n x-healthCheck:\n url: https://api.example.com/datetime/v1/status\n interval: 300\n timeout: 15\n\n\n```\nExercise: the servers parameter\nEdit the servers attribute so that it points to your actual endpoint URL (eg. your IP/port).\nNow check the outcome.\nconnexion run /code/notebooks/oas3/ex-02-servers-ok.yaml\nDefining paths\nNow we can define our first path that is the /status one.\nAn interoperable API should declare an URL for checking its status.\nThis allows implementers to plan a suitable method for testing it (eg. it could be\na simple OK/KO method or can execute basic checks like. databases are reachable, smoke testing other components, ..)\nCaveats on /status\nNB: the /status path is not a replacement for proper monitoring your APIs, but a way to communicate to your peers that you're online.\nPaths anatomy\nAn OAS3 path references:\n\nthe associated METHOD (eg. get|post|..)\na summary and a description of the operation\n\n/status:\n get:\n summary: Returns the application status.\n description: |\n This path can randomly return an error\n for testing purposes. The returned object\n is always a problem+json.\n\na reference to the python object to call when the \n\noperationId: get_status\n\nthe http statuses of the possible responses, each with its description,\n content-type and examples\n\n```\n responses:\n '200':\n description: |\n The application is working properly.\n content:\n application/problem+json:\n example:\n status: 200\n title: OK\n detail: API is working properly.\n default:\n description: |\n If none of the above statuses is returned, then this applies\n content:\n application/problem+json:\n example:\n status: 500\n title: Internal Server Error\n detail: API is not responding correctly\n```\nExercise\n\nopen the ex-03-02-path.yaml\n eventually copy/paste the code from/to the swagger editor.\ncomplete the get /status path\n\nWe haven't already implemented the function get_status() referenced by operationId,\nso to run the spec in a terminal we tell the server\nto ignore this with --stub \nconnexion run /code/notebooks/oas3/ex-03-02-path.yaml --stub\nExercise\n1- What happens if I get the /status resource of my API now?\n2- And if I invoke another path which is not mentioned in the spec?\n3- Restart the server via\nconnexion run /code/notebooks/oas3/ex-03-02-path.yaml --mock notimplemented",
"# Exercise: what's the expected output of the following command?\n\n!curl http://0.0.0.0:5000/datetime/v1/status\n \n# Exercise: what happens if you GET an unexisting path? \n\n!curl http://0.0.0.0:5000/datetime/v1/MISSING\n ",
"Solution on the unimplemented method\n$ curl http://0.0.0.0:8889/datetime/v1/status\n{\n \"detail\": \"Empty module name\",\n \"status\": 501,\n \"title\": \"Not Implemented\",\n \"type\": \"about:blank\"\n}\nSolution on other paths\n$ curl http://0.0.0.0:8889/datetime/v1/missing\n{\n \"detail\": \"The requested URL was not found on the server. If you entered the URL manually please check your spelling and try again.\",\n \"status\": 404,\n \"title\": \"Not Found\",\n \"type\": \"about:blank\"\n}\nSchemas\nOAS3 allows defining, using and reusing schemas. \nThey can be defined inline, in the component section or referenced from another file, like below.\nThe URL fragment part can be used to navigate inside the yaml (eg. #/schemas/Problem).\n```\ncomponents:\n schemas:\n Problem:\n $ref: 'https://teamdigitale.github.io/openapi/0.0.5/definitions.yaml#/schemas/Problem'\n```",
"print(show_component('https://teamdigitale.github.io/openapi/0.0.5/definitions.yaml#/schemas/Problem'))\n\n# Exercise: use the yaml and requests libraries \n# to download the Problem schema\nfrom requests import get\nret = get('https://teamdigitale.github.io/openapi/0.0.5/definitions.yaml')\n\n# Yaml parse the definitions\ndefinitions = yaml.safe_load(ret.content)\n\n# Nicely print the Problem schema\nprint(yaml.dump(definitions['schemas']['Problem']))\n\n### Exercise\n# Read the definitions above\n# - https://teamdigitale.github.io/openapi/0.0.5/definitions.yaml\n#\n# Then use this cell to list all the structures present in definitions\n\nfor sections, v in definitions.items():\n for items, vv in v.items():\n print(f'{sections}.{items}')",
"Exercise\nEdit ex-03-02-path.yaml so that every /status response uses\nthe Problem schema.\nLook at simple.yaml to\nsee a complete implementation.",
"## Exercise\n\n#Test the new setup\n\n"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
google-research/scenic
|
scenic/projects/owl_vit/notebooks/OWL_ViT_minimal_example.ipynb
|
apache-2.0
|
[
"OWL-ViT minimal example\nThis Colab shows how to load a pre-trained OWL-ViT checkpoint and use it to\nget object detection predictions for an image.\nDownload and install OWL-ViT\nOWL-ViT is implemented in Scenic. The cell below installs the Scenic codebase from GitHub and imports it.",
"!rm -rf *\n!rm -rf .config\n!rm -rf .git\n!git clone https://github.com/google-research/scenic.git .\n!python -m pip install -q .\n!python -m pip install -r scenic/projects/baselines/clip/requirements.txt\n!echo \"Done.\"\n\nimport os\n\nimport jax\nfrom matplotlib import pyplot as plt\nimport numpy as np\nfrom scenic.projects.owl_vit import models\nfrom scenic.projects.owl_vit.configs import clip_b32\nfrom scipy.special import expit as sigmoid\nimport skimage\nfrom skimage import io as skimage_io\nfrom skimage import transform as skimage_transform",
"Choose config",
"config = clip_b32.get_config()",
"Load the model and variables",
"module = models.TextZeroShotDetectionModule(\n body_configs=config.model.body,\n normalize=config.model.normalize,\n box_bias=config.model.box_bias)\n\nvariables = module.load_variables(config.init_from.checkpoint_path)",
"Prepare image",
"# Load example image:\nfilename = os.path.join(skimage.data_dir, 'astronaut.png')\nimage_uint8 = skimage_io.imread(filename)\nimage = image_uint8.astype(np.float32) / 255.0\n\n# Pad to square with gray pixels on bottom and right:\nh, w, _ = image.shape\nsize = max(h, w)\nimage_padded = np.pad(\n image, ((0, size - h), (0, size - w), (0, 0)), constant_values=0.5)\n\n# Resize to model input size:\ninput_image = skimage.transform.resize(\n image_padded,\n (config.dataset_configs.input_size, config.dataset_configs.input_size),\n anti_aliasing=True)",
"Prepare text queries",
"text_queries = ['human face', 'rocket', 'nasa badge', 'star-spangled banner']\ntokenized_queries = np.array([\n module.tokenize(q, config.dataset_configs.max_query_length)\n for q in text_queries\n])\n\n# Pad tokenized queries to avoid recompilation if number of queries changes:\ntokenized_queries = np.pad(\n tokenized_queries,\n pad_width=((0, 100 - len(text_queries)), (0, 0)),\n constant_values=0)",
"Get predictions\nThis will take a minute on the first execution due to model compilation. Subsequent executions will be faster.",
"# Note: The model expects a batch dimension.\npredictions = module.apply(\n variables,\n input_image[None, ...],\n tokenized_queries[None, ...],\n train=False)\n\n# Remove batch dimension and convert to numpy:\npredictions = jax.tree_map(lambda x: np.array(x[0]), predictions )",
"Plot predictions",
"%matplotlib inline\n\nscore_threshold = 0.1\n\nlogits = predictions['pred_logits'][..., :len(text_queries)] # Remove padding.\nscores = sigmoid(np.max(logits, axis=-1))\nlabels = np.argmax(predictions['pred_logits'], axis=-1)\nboxes = predictions['pred_boxes']\n\nfig, ax = plt.subplots(1, 1, figsize=(8, 8))\nax.imshow(input_image, extent=(0, 1, 1, 0))\nax.set_axis_off()\n\nfor score, box, label in zip(scores, boxes, labels):\n if score < score_threshold:\n continue\n cx, cy, w, h = box\n ax.plot([cx - w / 2, cx + w / 2, cx + w / 2, cx - w / 2, cx - w / 2],\n [cy - h / 2, cy - h / 2, cy + h / 2, cy + h / 2, cy - h / 2], 'r')\n ax.text(\n cx - w / 2,\n cy + h / 2 + 0.015,\n f'{text_queries[label]}: {score:1.2f}',\n ha='left',\n va='top',\n color='red',\n bbox={\n 'facecolor': 'white',\n 'edgecolor': 'red',\n 'boxstyle': 'square,pad=.3'\n })"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
amitkaps/machine-learning
|
time_series/3-Refine.ipynb
|
mit
|
[
"2. Refine the Data\n\n\"Data is messy\"\n\nWe will be performing the following operation on our Onion price to refine it\n- Remove e.g. remove redundant data from the data frame\n- Derive e.g. State and City from the market field\n- Parse e.g. extract date from year and month column\nOther stuff you may need to do to refine are...\n- Missing e.g. Check for missing or incomplete data\n- Quality e.g. Check for duplicates, accuracy, unusual data\n- Convert e.g. free text to coded value\n- Calculate e.g. percentages, proportion\n- Merge e.g. first and surname for full name\n- Aggregate e.g. rollup by year, cluster by area\n- Filter e.g. exclude based on location\n- Sample e.g. extract a representative data\n- Summary e.g. show summary stats like mean",
"# Import the two library we need, which is Pandas and Numpy\nimport pandas as pd\nimport numpy as np\n\n# Read the csv file of Month Wise Market Arrival data that has been scraped.\ndf = pd.read_csv('MonthWiseMarketArrivals.csv')\n\ndf.head()\n\ndf.tail()",
"Remove the redundant data",
"df.dtypes\n\n# Delete the last row from the dataframe\ndf.tail(1)\n\n# Delete a row from the dataframe\ndf.drop(df.tail(1).index, inplace = True)\n\ndf.head()\n\ndf.tail()\n\ndf.dtypes\n\ndf.iloc[:,4:7].head()\n\ndf.iloc[:,2:7] = df.iloc[:,2:7].astype(int)\n\ndf.dtypes\n\ndf.head()\n\ndf.describe()",
"Extracting the states from market names",
"df.market.value_counts().head()\n\ndf['state'] = df.market.str.split('(').str[-1]\n\ndf.head()\n\ndf['city'] = df.market.str.split('(').str[0]\n\ndf.head()\n\ndf.state.unique()\n\ndf['state'] = df.state.str.split(')').str[0]\n\ndf.state.unique()\n\ndfState = df.groupby(['state', 'market'], as_index=False).count()\n\ndfState.market.unique()\n\nstate_now = ['PB', 'UP', 'GUJ', 'MS', 'RAJ', 'BANGALORE', 'KNT', 'BHOPAL', 'OR',\n 'BHR', 'WB', 'CHANDIGARH', 'CHENNAI', 'bellary', 'podisu', 'UTT',\n 'DELHI', 'MP', 'TN', 'Podis', 'GUWAHATI', 'HYDERABAD', 'JAIPUR',\n 'WHITE', 'JAMMU', 'HR', 'KOLKATA', 'AP', 'LUCKNOW', 'MUMBAI',\n 'NAGPUR', 'KER', 'PATNA', 'CHGARH', 'JH', 'SHIMLA', 'SRINAGAR',\n 'TRIVENDRUM']\n\nstate_new =['PB', 'UP', 'GUJ', 'MS', 'RAJ', 'KNT', 'KNT', 'MP', 'OR',\n 'BHR', 'WB', 'CH', 'TN', 'KNT', 'TN', 'UP',\n 'DEL', 'MP', 'TN', 'TN', 'ASM', 'AP', 'RAJ',\n 'MS', 'JK', 'HR', 'WB', 'AP', 'UP', 'MS',\n 'MS', 'KER', 'BHR', 'HR', 'JH', 'HP', 'JK',\n 'KEL']\n\ndf.state = df.state.replace(state_now, state_new)\n\ndf.state.unique()",
"Getting the Dates",
"df.head()\n\ndf.index\n\npd.to_datetime('January 2012')\n\ndf['date'] = df['month'] + '-' + df['year'].map(str)\n\n??map\n\ndf.head()\n\nindex = pd.to_datetime(df.date)\n\ndf.index = pd.PeriodIndex(df.date, freq='M')\n\ndf.columns\n\ndf.index\n\ndf.head()\n\ndf.to_csv('MonthWiseMarketArrivals_Clean.csv', index = False)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
h-mayorquin/camp_india_2016
|
tutorials/EI networks/presentation/ExcInhNet.ipynb
|
mit
|
[
"Networks of Exc and Inh Spiking Neurons (Leaky Integrate & Fire)\nAditya Gilra\nCAMP 2016 @ Bangalore\n<img src='files/redneuronal-1024x768_mattcognitivescienceblog.jpg'>\nImage from http://mattscognitivescienceblog.wordpress.com/2013/07/18/connectionism-part-3/ \n\n- http://briansimulator.org/\n- \"a simulator should not only save the time of processors, but also the time of scientists.\"\n- python based\n- easy to prototype arbitrary neurons/synapse models\n- version 2rc3\n- http://brian2.readthedocs.io/en/2.0rc3/\n- http://brian2.readthedocs.io/en/2.0rc3/examples/index.html\n- https://brian2tools.readthedocs.io/en/stable/",
"%matplotlib inline\n#%matplotlib qt\nfrom brian2 import *\n\ndefaultclock.dt = 0.1*ms",
"Units, Equations, NeuronGroup\n\nBrian imposes units (1,second,ms,volt,mV,amp,nA,...SI names)\nEquations: first-order differential equations, parameters, sub-expressions\nSimulate 1000 LIF neurons without synaptic connections",
"eqs_neurons='''\ninp : volt\ndv/dt = (-v + inp)/(20*ms) : volt\n'''\nP=NeuronGroup(N=1000,model=eqs_neurons,\\\n threshold='v>=20*mV',reset='v=10*mV',\\\n refractory=10*ms,method='euler')\nP.v = 0.*mV\nP.inp = uniform(size=(1000)) * 40*mV",
"Monitors, run, plot",
"sm = SpikeMonitor(P)\nsr = PopulationRateMonitor(P)\nsm_vm = StateMonitor(P,'v',record=range(5))\n\nrun(100*ms, report='text')\n\nfigure()\nplot(sm.t/ms,sm.i,',');\nfigure()\nplot(sr.t/ms,sr.smooth_rate(width=5*ms)/Hz,',');\nfigure()\nplot(sm_vm.t/ms,transpose(sm_vm.v/mV));",
"Synapses\n\nSet up connections between these neurons\nstate evolution equations\nevent-based updates, on_pre & on_post\nSimulate for varying strengths of these connections",
"# delta-function synapses\ncon = Synapses(P,P,'w:volt (constant)',on_pre='v_post+=w',method='euler')\ncon.connect(condition='i!=j',p=0.1)\ncon.delay = 1*ms\ncon.w['i<800'] = 0.1*mV\ncon.w['i>=800'] = -5*0.1*mV\n\n\n%matplotlib inline\n#%matplotlib qt\nfrom brian2 import *\n\ndefaultclock.dt = 0.1*ms\n\neqs_neurons='''\ninp : volt\ndv/dt = (-v + inp)/(20*ms) : volt\n'''\nP=NeuronGroup(N=1000,model=eqs_neurons,\\\n threshold='v>=20*mV',reset='v=10*mV',\\\n refractory=2*ms,method='euler')\nP.v = 0.*mV\nP.inp = uniform(size=(1000)) * 40*mV\n\n# delta-function synapses\ncon = Synapses(P,P,'w:volt (constant)',on_pre='v_post+=w',method='euler')\ncon.connect(condition='i!=j',p=0.1)\ncon.delay = 1*ms\ncon.w['i<800'] = 0.1*mV\ncon.w['i>=800'] = -5*0.1*mV\n\nsm = SpikeMonitor(P)\nsr = PopulationRateMonitor(P)\nsm_vm = StateMonitor(P,'v',record=range(5))\n\nrun(100*ms, report='text')\n\nfigure()\nplot(sm.t/ms,sm.i,',');\nfigure()\nplot(sr.t/ms,sr.smooth_rate(width=5*ms)/Hz,',');\nfigure()\nplot(sm_vm.t/ms,transpose(sm_vm.v/mV));",
"Brunel (2000) EI network\n\nNE=10000 exc and NI=2500 inh LIF neurons\nCE=p*NE exc and CI=p*NI inh connections per neuron (p = 0.1)\ndelta-function synapses\n0.1mV exc and -g*0.1mV inh weights (g=5)\nmean external input = inpfactor * threshold voltage (via 12500 spike trains)\n\nSpiking input",
"inpfactor = 2\nnu_thr = vth/(p*NE*J*tau)\nPinp = PoissonGroup(N=N,rates=inpfactor*nu_thr)\n\n# connect these Pinp neurons to the P neurons",
"Asynchronous Irregular (AI) activity\n\nrun \"STEP2_ExcInhNet_Brunel2000_brian2_prob.py\"\n\nIs the activity AI?\n\n\nPlot only 50 neurons for 200ms. Compare with below:\n<img src='files/Brunel2000_fig8C.png'> \n\n\nRemedying the irregularity\n\nHave a fixed CE number of exc and CI of inh connections, not a probability\nProbability gives rise to a binomial distribution of input synapses per neuron (with mean CE and CI)\n<img src='files/binomial.png'> \nDisrupts the EI balance in some neurons\n\ncon.connect(i=conn_i,j=conn_j) # conn_i and conn_j are vectors of corresponding pre i and post j neuronal indices\n\n\nSTEP3_ExcInhNet_Brunel2000_brian2.py\n\nIdeally, fixed number of synapses for input connections also\n\nPhase diagram for Brunel (2000) network\n<img src='files/Brunel2000_fig2A.png'> \n\nVerify phase diagram in Brunel (2000) for different g and nu_ext\nA. $g=3, \\nu_{ext}/\\nu_{thr}=2$\nB. $g=6, \\nu_{ext}/\\nu_{thr}=4$\nC. $g=5, \\nu_{ext}/\\nu_{thr}=2$\nD. $g=4.5, \\nu_{ext}/\\nu_{thr}=0.9$\n\n<img src='files/Brunel_fig8_params.png'> \nPlot a histogram of the co-eff of variation of inter-spike intervals\nHow can you increase it?\nOstojic (2014)\nVary the connections strength J and see the behaviour of instantaneous firing rates a la Ostojic (2014).\n\nPlot rate vs time for say 10 individual neurons.\nSet J=0.2mV or J=0.8mV to reproduce result from Ostojic, 2014 \nNote the CV distribution also\n\n<img src='files/Ostojic2014_result.png'>\nSparse Strong Weak Dense network (Fukai lab 2012, 2013)\n\nEI network of LIF neurons, but conductance-based synapses\n0.1 & 0.5 probability of E & I connections\nconstant E-to-I, I-to-E, and I-to-I weights\nE-to-E weights are distributed lognormally (Song et al 2005)\ncauses persistent activity without external input\nlog-normal distribution of firing rates\n<img src='files/Songetal2005_fig5b.png'>\n\nTry Gaussian instead of log-normal distribution of E-to-E synapses.\n\ncan build on this for associative memory model\nplay with delay, jitter in delay, probabilistic release..."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
tpin3694/tpin3694.github.io
|
machine-learning/blurring_images.ipynb
|
mit
|
[
"Title: Blurring Images\nSlug: blurring_images\nSummary: How to blurring images using OpenCV in Python. \nDate: 2017-09-11 12:00\nCategory: Machine Learning\nTags: Preprocessing Images \nAuthors: Chris Albon\nPreliminaries",
"# Load image\nimport cv2\nimport numpy as np\nfrom matplotlib import pyplot as plt",
"Load Image As Greyscale",
"# Load image as grayscale\nimage = cv2.imread('images/plane_256x256.jpg', cv2.IMREAD_GRAYSCALE)",
"Blur Image",
"# Blur image\nimage_blurry = cv2.blur(image, (5,5))",
"View Image",
"# Show image\nplt.imshow(image_blurry, cmap='gray'), plt.xticks([]), plt.yticks([])\nplt.show()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
adrn/globber
|
notebooks/Proper Motions of MW stuffs.ipynb
|
mit
|
[
"See also:\nhttps://ui.adsabs.harvard.edu/#abs/2015MNRAS.454.1453R/abstract\nDownloaded PM GC catalogs from:\nhttp://www.stsci.edu/~marel/hstpromo.html#Projects\nFor dSph PMs:\nPiatek et al. 2003, 2005, 2006, 2007; Lepine et al. 2011; Pryor et al. 2014",
"import os\n\n# Third-party\nfrom astropy.io import ascii\nimport astropy.coordinates as coord\nimport astropy.units as u\nimport matplotlib as mpl\nimport matplotlib.pyplot as pl\nimport numpy as np\n%matplotlib inline\n\nimport gary.coordinates as gc\nimport gary.dynamics as gd\nimport gary.potential as gp\n\nfrom ophiuchus import galactocentric_frame, vcirc, vlsr\nimport ophiuchus.potential as op\n\nfrom astropy.utils.data import download_file\n\ndata_path = \"/Users/adrian/projects/globmfer/data\"",
"Data munging to read in this stupid proper motion catalogs",
"import astropy.table as at\n\npm_gc_main = np.genfromtxt(os.path.join(data_path,\"gl_2012_J2000.cat1.txt\"), dtype=None, \n skip_header=2, \n usecols=[0,2,3,6,7,8,9,10,11,12,13],\n names=['ngc_num','ra','dec','dist','dist_err','mu_ra','mu_ra_err',\n 'mu_dec','mu_dec_err', 'vr', 'vr_err'])\n\npm_gc_main = at.Table(pm_gc_main)\n\nlen(pm_gc_main)\n\npm_gc_bulge = np.genfromtxt(os.path.join(data_path,\"bulge_J2000.cat1.txt\"), dtype=None, \n skip_header=2, \n usecols=[0,2,3,6,7,8,9,10,11,12,13],\n names=['ngc_num','ra','dec','dist','dist_err','mu_ra','mu_ra_err',\n 'mu_dec','mu_dec_err', 'vr', 'vr_err'])\npm_gc_bulge = at.Table(pm_gc_bulge)\n\ngo = ascii.read(os.path.join(data_path,\"go97_table1.txt\"))\n\nall_gc = at.vstack((pm_gc_main, pm_gc_bulge))\nall_gc['name'] = np.array([\"NGC {}\".format(x) for x in all_gc['ngc_num']])\nall_gc = at.join(all_gc, go, keys='name')",
"Convert coordinates, proper motions into galactocentric cartesian",
"c = coord.ICRS(ra=all_gc['ra']*u.degree,\n dec=all_gc['dec']*u.degree,\n distance=all_gc['dist']*u.kpc)\n\nxyz = c.transform_to(galactocentric_frame).cartesian.xyz\nvxyz = gc.vhel_to_gal(c, pm=(all_gc['mu_ra']*u.mas/u.yr,\n all_gc['mu_dec']*u.mas/u.yr),\n rv=all_gc['vr']*u.km/u.s, \n galactocentric_frame=galactocentric_frame,\n vcirc=vcirc, vlsr=vlsr)",
"Try integrating some orbits",
"pot = op.load_potential('static_mw')\n# pot = op.load_potential('barred_mw_8')\n\nw0 = gd.CartesianPhaseSpacePosition(pos=xyz, vel=vxyz)\norbit = pot.integrate_orbit(w0, dt=-0.5, nsteps=12000)\n\npers = u.Quantity([orbit[:,i].pericenter() for i in range(orbit.norbits)])\napos = u.Quantity([orbit[:,i].apocenter() for i in range(orbit.norbits)])\n\nfig,axes = pl.subplots(1,2,figsize=(10,5))\n\nbins = np.logspace(-1,2,16)\naxes[0].hist(pers.value, bins=bins)\naxes[0].set_xscale('log')\n\nbins = np.logspace(0,3.,16)\naxes[1].hist(apos.value, bins=bins)\naxes[1].set_xscale('log');\n\naxes[0].set_xlabel(\"Pericenter [kpc]\")\naxes[1].set_xlabel(\"Apocenter [kpc]\")\naxes[0].set_ylabel(\"N\")",
"Phases of the clusters",
"periods = u.Quantity([np.abs(orbit[:,i].estimate_period()) for i in range(orbit.norbits)])\n\nfrom scipy.signal import argrelmin\n\nphases = np.zeros(periods.size)\nfor i in range(orbit.norbits):\n sph,_ = orbit[:,i].represent_as(coord.SphericalRepresentation)\n r = sph.distance\n idx, = argrelmin(r)\n phase = (orbit.t - orbit.t[idx[0]]) / periods[i]\n phases[i] = phase[0]\n\npl.hist(phases, bins=np.linspace(0,1,16));\npl.xlabel(\"Orbital phase today\")",
"Estimate tidal radius at pericenter",
"pers_xyz = np.zeros((3,len(pers)))\npers_xyz[0] = pers.value\npers_xyz = pers_xyz*u.kpc\nmx = pot.mass_enclosed(pers_xyz)\n\npers_xyz = np.zeros((3,len(pers)))\npers_xyz[2] = pers.value\npers_xyz = pers_xyz*u.kpc\nmz = pot.mass_enclosed(pers_xyz)\n\nrtide_x = pers.to(u.pc) * (all_gc['M'] / (3*mx))**(1/3.)\nrtide_z = pers.to(u.pc) * (all_gc['M'] / (3*mz))**(1/3.)\n\nrtide = np.mean(np.vstack((rtide_x, rtide_z)).value*rtide_z.unit, axis=0)\nerr_rtide = np.std(np.vstack((rtide_x, rtide_z)).value*rtide_z.unit, axis=0)\n\ncore_radius = all_gc['Rc']*u.pc\nratio = rtide / core_radius\n\nall_gc['peri_rtide_to_rcore'] = ratio\nsort_idx = np.argsort(ratio)[:][:10]\n# ratio[sort_idx]\n\nimport astropy.coordinates as coord\nimport astropy.units as u\n\ncoord.Galactic(l=255*u.degree, b=48*u.degree).transform_to(coord.ICRS)\n\nall_gc['name','peri_rtide_to_rcore','ra','dec'][sort_idx]\n\nall_gc[sort_idx[0:10]]\n\nngc5897_orbit = orbit[:,sort_idx[0]]\n\nfig = ngc5897_orbit.plot()\n\nngc5897_c_back,ngc5897_v_back = ngc5897_orbit[:100].to_frame(coord.ICRS, vcirc=vcirc, vlsr=vlsr, # orbit -50 Myr\n galactocentric_frame=galactocentric_frame) \n\nngc5897_orbit_forw = pot.integrate_orbit(ngc5897_orbit[0], dt=0.5, nsteps=100)\nngc5897_c_forw,ngc5897_v_forw = ngc5897_orbit_forw.to_frame(coord.ICRS, vcirc=vcirc, vlsr=vlsr, # orbit +50 Myr\n galactocentric_frame=galactocentric_frame) ",
"There are 293039 SDSS stars with good photometry in the grey selection box",
"sdss_select = mpl.patches.Rectangle((210,-25), width=20, height=10, alpha=0.1)\n\nfig,ax = pl.subplots(1,1,figsize=(8,6))\nax.plot(ngc5897_c_back.ra.degree, ngc5897_c_back.dec.degree, ls='none')\nax.plot(ngc5897_c_forw.ra.degree, ngc5897_c_forw.dec.degree, ls='none')\nax.set_xlim(325,200)\nax.set_ylim(-45,0)\nax.xaxis.set_ticks(np.arange(200,330,10))\nax.add_patch(sdss_select)\npl.minorticks_on()\npl.grid()",
"Or, in RA - distance",
"fig,ax = pl.subplots(1,1,figsize=(8,6))\nax.plot(ngc5897_c_back.ra.degree, ngc5897_c_back.distance, ls='none')\nax.plot(ngc5897_c_forw.ra.degree, ngc5897_c_forw.distance, ls='none')\nax.set_ylabel(\"Helio. distance [kpc]\")\nax.set_xlim(325,200)\n# ax.set_ylim(-45,0)\nax.xaxis.set_ticks(np.arange(200,330,10))\npl.minorticks_on()\npl.grid()\n\nfig,ax = pl.subplots(1,1,figsize=(8,6))\nax.plot(ngc5897_c_back.ra.degree, ngc5897_v_back[2].to(u.km/u.s).value, ls='none')\nax.plot(ngc5897_c_forw.ra.degree, ngc5897_v_forw[2].to(u.km/u.s).value, ls='none')\nax.set_ylabel(r\"$v_{\\rm los}$ [km/s]\")\nax.set_xlim(325,200)\n# ax.set_ylim(-45,0)\nax.xaxis.set_ticks(np.arange(200,330,10))\npl.minorticks_on()\npl.grid()",
"Here I plot the positions of all stars selected from SDSS in the grey box:",
"ngc5897_sdss = ascii.read(\"/Users/adrian/Downloads/NGC5897_adrn.csv\")\n\nsdss_select = mpl.patches.Rectangle((210,-25), width=20, height=20, alpha=0.1)\n\nfig,ax = pl.subplots(1,1,figsize=(8,6))\nax.plot(ngc5897_c_back.ra.degree, ngc5897_c_back.dec.degree, ls='none')\nax.plot(ngc5897_c_forw.ra.degree, ngc5897_c_forw.dec.degree, ls='none')\n\nax.plot(ngc5897_sdss['ra'], ngc5897_sdss['dec'], marker=',', ls='none', alpha=0.25)\n\nax.set_xlim(325,200)\nax.set_ylim(-45,0)\nax.xaxis.set_ticks(np.arange(200,330,10))\nax.add_patch(sdss_select)\npl.minorticks_on()\npl.grid()\n\ncontrol_idx = (ngc5897_sdss['ra'] < 220.) & (ngc5897_sdss['dec'] < -10.) & (ngc5897_sdss['dec'] > -15.)\ntargets_idx = (ngc5897_sdss['dec'] < -15.) & (ngc5897_sdss['dec'] > -20.)\n\ncontrol = ngc5897_sdss[control_idx]\ntargets = ngc5897_sdss[targets_idx]\n\n# isochrone\nngc5897_iso = ascii.read(\"/Users/adrian/Downloads/ngc5897_iso.txt\", header_start=8)\nngc5897_iso.colnames",
"Distance modulus = 15.55 mag (from http://arxiv.org/pdf/1403.1262v1.pdf)",
"dm = 15.55\n\n_u = control['dered_u']\n_g = control['dered_g']\n_r = control['dered_r']\n_i = control['dered_i']\n\nalpha = 0.02\n\nfig,axes = pl.subplots(1,3,figsize=(10,5),sharey=True)\n\naxes[1].set_title(\"Control field\", fontsize=20)\n\naxes[0].plot(_u-_g, _g, ls='none', marker='.', alpha=alpha)\naxes[1].plot(_g-_r, _g, ls='none', marker='.', alpha=alpha)\naxes[2].plot(_g-_i, _g, ls='none', marker='.', alpha=alpha)\n\naxes[0].plot(ngc5897_iso['sdss_u'] - ngc5897_iso['sdss_g'], ngc5897_iso['sdss_g'] + dm, marker=None)\naxes[1].plot(ngc5897_iso['sdss_g'] - ngc5897_iso['sdss_r'], ngc5897_iso['sdss_g'] + dm, marker=None)\naxes[2].plot(ngc5897_iso['sdss_g'] - ngc5897_iso['sdss_i'], ngc5897_iso['sdss_g'] + dm, marker=None)\n\naxes[0].set_ylim(22,14)\naxes[0].set_xlim(0.5,2.5)\naxes[1].set_xlim(-0.25,1.1)\naxes[2].set_xlim(-0.5,2)\n\naxes[0].set_ylabel(\"$g_0$\")\naxes[0].set_xlabel(\"$(u-g)_0$\")\naxes[1].set_xlabel(\"$(g-r)_0$\")\naxes[2].set_xlabel(\"$(g-i)_0$\")\n\nfig.tight_layout()\n\n_u = targets['dered_u']\n_g = targets['dered_g']\n_r = targets['dered_r']\n_i = targets['dered_i']\n\nalpha = 0.02\n\nfig,axes = pl.subplots(1,3,figsize=(10,5),sharey=True)\n\naxes[1].set_title(\"NGC 5897 field\", fontsize=20)\n\naxes[0].plot(_u-_g, _g, ls='none', marker='.', alpha=alpha)\naxes[1].plot(_g-_r, _g, ls='none', marker='.', alpha=alpha)\naxes[2].plot(_g-_i, _g, ls='none', marker='.', alpha=alpha)\n\naxes[0].plot(ngc5897_iso['sdss_u'] - ngc5897_iso['sdss_g'], ngc5897_iso['sdss_g'] + dm, marker=None)\naxes[1].plot(ngc5897_iso['sdss_g'] - ngc5897_iso['sdss_r'], ngc5897_iso['sdss_g'] + dm, marker=None)\naxes[2].plot(ngc5897_iso['sdss_g'] - ngc5897_iso['sdss_i'], ngc5897_iso['sdss_g'] + dm, marker=None)\n\naxes[0].set_ylim(22,14)\naxes[0].set_xlim(0.5,2.5)\naxes[1].set_xlim(-0.25,1.1)\naxes[2].set_xlim(-0.5,2)\n\naxes[0].set_ylabel(\"$g_0$\")\naxes[0].set_xlabel(\"$(u-g)_0$\")\naxes[1].set_xlabel(\"$(g-r)_0$\")\naxes[2].set_xlabel(\"$(g-i)_0$\")\n\nfig.tight_layout()\n\nfrom scipy.misc import logsumexp\n\ndef matched_filter(data, isochrone, dm, smooth=0.02, threshold=2):\n data = data[(data['dered_g'] > 14) & (data['dered_g'] < 21.5)]\n isochrone = isochrone[((isochrone['sdss_g'] + dm) > 14) & ((isochrone['sdss_g'] + dm) < 21.5)]\n \n i_g = isochrone['sdss_g'] + dm\n i_ug = isochrone['sdss_u'] - isochrone['sdss_g']\n i_gr = isochrone['sdss_g'] - isochrone['sdss_r']\n i_gi = isochrone['sdss_g'] - isochrone['sdss_i']\n \n d_g = data['dered_g']\n d_ug = data['dered_u'] - data['dered_g']\n d_gr = data['dered_g'] - data['dered_r']\n d_gi = data['dered_g'] - data['dered_i']\n \n d_g_var = data['err_g']**2\n d_ug_var = data['err_u']**2 + data['err_g']**2 \n d_gr_var = data['err_g']**2 + data['err_r']**2 \n d_gi_var = data['err_g']**2 + data['err_i']**2 \n \n s_var = smooth**2\n const = 0.5*np.log(2*np.pi*s_var)\n dist = -0.5 * (d_g[None] - i_g[:,None])**2 / (s_var + d_g_var[None]) - const\n dist += -0.5 * (d_ug[None] - i_ug[:,None])**2 / (s_var + d_ug_var[None]) - const\n dist += -0.5 * (d_gr[None] - i_gr[:,None])**2 / (s_var + d_gr_var[None]) - const\n dist += -0.5 * (d_gi[None] - i_gi[:,None])**2 / (s_var + d_gi_var[None]) - const\n \n log_prob = logsumexp(dist, axis=0) - np.log(len(isochrone))\n pl.hist(log_prob,bins=np.linspace(-1,12,16))\n ix = log_prob > threshold\n \n return data[ix]\n\nfilter_dm = 15.2\nfilter_smooth = 0.01 # mag\nfilter_threshold = 9.5\n\nfiltered_control = matched_filter(control, ngc5897_iso, \n dm=filter_dm, smooth=filter_smooth,\n 
threshold=filter_threshold)\n\nfiltered_targets = matched_filter(targets, ngc5897_iso, \n dm=filter_dm, smooth=filter_smooth,\n threshold=filter_threshold)\n\nprint(len(filtered_control), len(filtered_targets))\n\nalpha = 0.02\n\nfig,axes = pl.subplots(1,3,figsize=(10,5),sharey=True)\n\naxes[1].set_title(\"NGC 5897 field\", fontsize=20)\n\n_u = targets['dered_u']\n_g = targets['dered_g']\n_r = targets['dered_r']\n_i = targets['dered_i']\naxes[0].plot(_u-_g, _g, ls='none', marker='.', alpha=alpha)\naxes[1].plot(_g-_r, _g, ls='none', marker='.', alpha=alpha)\naxes[2].plot(_g-_i, _g, ls='none', marker='.', alpha=alpha)\n\n_u = filtered_targets['dered_u']\n_g = filtered_targets['dered_g']\n_r = filtered_targets['dered_r']\n_i = filtered_targets['dered_i']\naxes[0].plot(_u-_g, _g, ls='none', marker='.')\naxes[1].plot(_g-_r, _g, ls='none', marker='.')\naxes[2].plot(_g-_i, _g, ls='none', marker='.')\n\naxes[0].plot(ngc5897_iso['sdss_u'] - ngc5897_iso['sdss_g'], ngc5897_iso['sdss_g'] + dm, marker=None)\naxes[1].plot(ngc5897_iso['sdss_g'] - ngc5897_iso['sdss_r'], ngc5897_iso['sdss_g'] + dm, marker=None)\naxes[2].plot(ngc5897_iso['sdss_g'] - ngc5897_iso['sdss_i'], ngc5897_iso['sdss_g'] + dm, marker=None)\n\naxes[0].set_ylim(22,14)\naxes[0].set_xlim(0.5,2.5)\naxes[1].set_xlim(-0.25,1.1)\naxes[2].set_xlim(-0.5,2)\n\naxes[0].set_ylabel(\"$g_0$\")\naxes[0].set_xlabel(\"$(u-g)_0$\")\naxes[1].set_xlabel(\"$(g-r)_0$\")\naxes[2].set_xlabel(\"$(g-i)_0$\")\n\nfig.tight_layout()\n\nfig,ax = pl.subplots(1,1,figsize=(8,6))\n# ax.plot(ngc5897_c_back.ra.degree, ngc5897_c_back.dec.degree, ls='none')\n# ax.plot(ngc5897_c_forw.ra.degree, ngc5897_c_forw.dec.degree, ls='none')\n\nax.plot(filtered_control['ra'], filtered_control['dec'], marker='.', ls='none', alpha=0.5)\nax.plot(filtered_targets['ra'], filtered_targets['dec'], marker='.', ls='none', alpha=0.5)\n\nax.set_xlim(220,210)\nax.set_ylim(-20,-10)\npl.minorticks_on()\npl.grid()",
"Hm, well, the above isn't very conclusive...maybe I'll try looking in teh catalina data too?",
"css = ascii.read(\"/Users/adrian/Downloads/catalina_south.csv\")\nprint(css.colnames)\n\nfig,ax = pl.subplots(1,1,figsize=(8,6))\n\ngalcen = coord.Galactic(l=0*u.deg, b=0*u.deg).transform_to(coord.ICRS)\nax.scatter(galcen.ra.degree, galcen.dec.degree)\n\nax.plot(css['RAJ2000'], css['DEJ2000'], marker='.', ls='none')\n\nax.plot(ngc5897_c_back.ra.degree, ngc5897_c_back.dec.degree, ls='none')\nax.plot(ngc5897_c_forw.ra.degree, ngc5897_c_forw.dec.degree, ls='none')\n\nax.set_xlim(325,200)\nax.set_ylim(-45,0)\nax.xaxis.set_ticks(np.arange(200,330,10))\npl.minorticks_on()\npl.grid()\n\nbox_ix = ((css['RAJ2000'] > 210) & (css['RAJ2000'] < 230) & \n (css['DEJ2000'] > -22) & (css['DEJ2000'] < -15) &\n (css['dH'] > 9.) & (css['dH'] < 13.))\nbox_ix.sum()\n\npl.plot(css['RAJ2000'][box_ix], css['DEJ2000'][box_ix], ls='none', marker='o')"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
PrincetonACM/princetonacm.github.io
|
events/code-at-night/archive/python_talk/.ipynb_checkpoints/intro_to_python_soln-checkpoint.ipynb
|
mit
|
[
"Python 3 Tutorial Notebook\nWe'll be using this notebook to follow the slides from the workshop. You can also use it to experiment with Python yourself! Simply add a cell wherever you want, type in some Python code, and see what happens!\nTopic 1: Printing\n1.1 Printing Basics",
"# When a line begins with a '#' character, it designates a comment. This means that it's not actually a line of code\n\n# This is how you say hello world\nprint('hello world')\n\n# Can you make Python print the staircase below:\n#\n# ========\n# | |\n# =============== \n# | | |\n# ======================\n\nprint(' ========')\nprint(' | |')\nprint(' ===============')\nprint(' | | |')\nprint(' ======================')",
"1.2 Some other useful printing tips/tricks",
"# The print(...) function can accept as many arguments as you'd like. It prints the arguments\n# in order, separated by a space. For example...\n\nprint('hello', 'world', 'i', 'go', 'to', 'princeton')\n\n# You can change the delimiter that separates the comma-separated arguments by changing the 'sep' parameter:\n\nprint('hello', 'world', 'i', 'go', 'to', 'princeton', sep=' --> ')\n\n# By default, python adds a newline to the end of any statement you print. However, you can change this\n# by changing the 'end' parameter. For example...\n\n# Here we've told Python to print 'hello world' and then an empty string. This effectively \n# removes the newline that Python adds by default.\nprint('hello prince', end='')\n\n# The next line that we print then begins immediately after the previous thing we printed.\nprint('ton')\n\n# We can also end our lines with something more exotic\nprint('ACM is so cool', end=' :)')",
"Topic 2: Variables, Basic Operators, and Data Types\n2.1 Playing with Numbers\nFor a description of all the operators that exist in Python, you can visit https://www.tutorialspoint.com/python/python_basic_operators.htm.",
"# What does the following snippet of code do?\n\nday = 24\nmonth = 'September'\nyear = '2021'\ndotw = 'Friday'\n\nprint(month, day - 7, year, 'was a', dotw)\n\n# Explanation: Tells us that the day 7 days ago was the same day of the week as today. The more you know...?\n\nage_in_weeks = 1057\n\n# What's the difference between the two statements below? Comment one of them out to check yourself!\nage_in_years = 1057 / 52\nage_in_years = 1057 // 52\n\nprint(age_in_years)\n\n# Explanation: The '/' operator does float division in Python by default, while the '//' operator does\n# integer division (i.e. it returns the decimal answer, ALWAYS rounded down to an integer)\n\n# Try to calculate the following in your head and see if your answer matches what Python says\n\nmystery = 2 ** 4 * 3 ** 2 % 7 * (2 + 7)\nprint(mystery)\n\n# Explanation: First evaluate anything in parentheses, then do the exponents, then do multiplication/modulo \n# FROM LEFT TO RIGHT. So\n# 2 ** 4 * 3 ** 2 % 7 * (2 + 7) = 2 ** 4 * 3 ** 2 % 7 * 9 = 16 * 9 % 7 * 9 = 144 % 7 * 9 = 4 * 9 = 36\n\n# Write a function that converts a given temperature in Farenheit to Celsius and Kelvin\n# The relevant formulas are (degrees Celsius) = (degrees Farenheit - 32) * 5 / 9\n# and (Kelvin) = (degrees Celsius + 273)\n\nfarenheit = 86 # Change the value here to test your solution\ncelsius = (farenheit - 32) * 5 / 9\nkelvin = celsius + 273\n\nprint(farenheit, 'degrees farenheit =', celsius, 'degrees celsius =', kelvin, 'kelvin')",
"2.2 Playing with Strings",
"# You are given the following string:\na = 'Thomas Cruise'\n\n# Your job is to put the phrase 'Tom Cruise is 9 outta 10' into variable b using ONLY operations on string a.\n# You may not concatenate letters or strings of your own. HINT: You can use the str(...) function to convert\n# numerical values into strings so that you can concatenate it with another string\nb = a[0] + a[2:4] + a[6:] + a[6] + a[10:12] + a[6] + str(a.find('u')) + a[6] + a[2] + a[9] + \\\n (a[0].lower() * 2) + a[4] + a[6] + str(a.find('i'))\n\nprint(b)\n\n# Practice with string formatting with mad libs! For this, you'll need to know \n# how to receive input. It's really easy in Python:\n\nword_1 = input('Input first word:\\n') # This prompts the user with the phrase 'Input first word'\n # and stores the result in the variable word_1\nword_2 = input('Input second word:\\n')\nword_3 = input('Input third word:\\n')\nword_4 = input('Input fourth word:\\n')\n\n# You want to print the following mad libs:\n#\n# Hi, my name is [first phrase]. \n# One thing that I love about Princeton is [second phrase].\n# One pet peeve I have about Princeton is [third phrase], but I can get over it because I have [fourth phrase].\n# \n# For the last sentence, use one print statement to print it!\n\nprint('\\nYour mad libs is: ')\nprint('Hi, my name is {}.'.format(word_1))\nprint('One thing that I love about Princeton is {}.'.format(word_2))\nprint('One pet peeve I have about Princeton is {}, but it\\'s OK because I have {}.'.format(word_3, word_4))",
"2.3 Playing with Booleans",
"# Your objective is to write a boolean formula in Python that takes three boolean variables (a, b, c)\n# and returns True if and only if exactly one of them is True. This is called the xor of the variables\n\n# Toggle these to test your formula\na = False\nb = True\nc = False\n\n# Write your formula here\nxor = (a and not b and not c) or (not a and b and not c) or (not a and not b and c)\nprint(xor)",
"2.4 Mixing Types",
"# In Python, data types are divided into two categories: truthy and falsy. Falsy values include anything\n# (strings, lists, etc.) that is empty, the special value None, any zero number, and the boolean False. \n# You can use the bool(...) function to check whether a value is truthy or falsy:\n\nprint('bool(3) =', bool(3))\nprint('bool(0) =', bool(0))\nprint('bool(\"\") =', bool(''))\nprint('bool(\" \") =', bool(' '))\nprint('bool(False) =', bool(False))\nprint('bool(True) =', bool(True))",
"Topic 3: If Statements, Ranges, and Loops\n3.1 Practice with the Basics",
"x = 5\n\n# What is the difference between this snippet of code:\n\nif x % 2 == 0:\n print(x, 'is even')\nif x % 5 == 0:\n print(x, 'is divisible by 5')\nif x > 0:\n print(x, 'is positive')\n \nprint()\n \n# And this one:\nif x % 2 == 0:\n print(x, 'is even')\nelif x % 5 == 0:\n print(x, 'is divisible by 5')\nelif x > 0:\n print(x, 'is positive')\n \n# Explanation: the elif will only execute if all statements in the block above evaluated to false.\n# In the second block, the first if statement evaluates to false, so the first elif's condition is\n# tested. It is true, so then the second elif will not execute.\n\n# Follow-up: An if statement starts its own new block. So the first snippet of code is actually\n# three if statement blocks.\n\n# FizzBuzz is a very well-known programming challenge. It's quite easy, but it can trip up people\n# who are trying to look for shortcuts to solving the problem. The problem is as follows:\n# \n# For every number k in order from 1 to 50, print\n# - 'FizzBuzz' if the number is divisible by 3 and 5\n# - 'Fizz' if the number is only divisible by 3\n# - 'Buzz' if the number is only divisble by 5\n# - the value of k if none of the above options hold\n#\n# Your task is to write a snippet of code that solves FizzBuzz.\n\nfor i in range(1, 51):\n div3 = ((i % 3) == 0)\n div5 = ((i % 5) == 0)\n \n if div3:\n if div5: print('FizzBuzz')\n else: print('Fizz')\n elif div5:\n print('Buzz')\n else: print(i)",
"3.2 Ternary Statements in Python",
"# The following if statement construct is so common it has a name ('ternary statement'):\n#\n# if (condition): \n# x = something1\n# elif (condition2):\n# x = something2\n# else:\n# x = something3\n#\n# In python, this can be shortened into a one-liner:\n#\n# x = something else something2 if (condition2) else something3\n#\n# And this works for an arbitrary number of elif statements in between the initial if and final else.\n\n# Can you convert the following block into a one-liner?\nbudget = 3\nif budget > 50:\n restaurant = 'Agricola'\nelif budget > 30:\n restaurant = 'Mediterra'\nelif budget > 15:\n restaurant = 'Thai Village'\nelse:\n retaurant = 'Wawa'\n \n# Write your solution below:\nrestaurant = 'Agricola' if budget > 50 else 'Mediterra' if budget > 30 \\\n else 'Thai Village' if budget > 15 else 'Wawa'\n\nprint(restaurant)",
"3.3 Practice with While Loops",
"# Your job is to create a 'guessing game' where the program thinks of an integer from 1 to 50\n# and will keep prompting you for a guess. It'll tell you each time whether your guess is\n# too high or too low until you find the number.\n\n# Don't touch these two lines of code; they choose a random number between 1 and 50\n# and store it in mystery_num\nfrom random import randint\nmystery_num = randint(1, 100)\n\n# Write your guessing game below:\nguess = int(input('Guess a number:\\n')) # First guess; don't forget to convert it to an int!\nwhile mystery_num != guess:\n if guess > mystery_num: guess = int(input('Nope. Guess was too high!\\n'))\n elif guess < mystery_num: guess = int(input('Nope. Guess was too low!\\n'))\n\nprint('You got it!')\n\n# Follow-up: Using the best strategy, what's the worst-case number of guesses you should need?",
"Topic 4: Data Structures in Python\n4.1 Sequences\nStrings, tuples, and lists are all considered sequences in Python, which is why there are many operations that work on all three of them.\n4.1.1 Iterating",
"# When at the top of a loop, the 'in' keyword in Python will iterate through all of the sequence's \n# members in order. For strings, members are individual characters; for lists and tuples, they're \n# the items contained.\n\n# Task: Given a list of lowercase words, print whether the word has a vowel. Example: if the input is\n# ['rhythm', 'owl', 'hymn', 'aardvark'], you should output the following:\n# rhythm has no vowels\n# owl has a vowel\n# hymn has no vowels\n# aardvark has a vowel\n\n# HINT: The 'in' keyword can also test whether something is a member of another object.\n# Also, don't forget about break and continue!\n\nvowels = ['a', 'e', 'i', 'o', 'u']\nwords = ['rhythm', 'owl', 'hymn', 'aardvark']\n\nfor word in words:\n has_vowel = False\n for letter in word:\n if letter in vowels: \n has_vowel = True\n break # Not necessary, but is more efficient\n \n if has_vowel:\n print(word, 'has a vowel')\n else:\n print(word, 'has no vowels')\n\n# Given a tuple, write a program to check if the value at index i is equal to the square of i.\n# Example: If the input is nums = (0, 2, 4, 6, 8), then the desired output is\n#\n# True\n# False\n# True\n# False\n# False\n#\n# Because nums[0] = 0^2 and nums[2] = 4 = 2^2. HINT: Use enumerate!\n\nnums = (0, 2, 4, 6, 8)\nfor i, num in enumerate(nums): print(num == i * i)",
"4.1.2 Slicing",
"# Slicing is one of the operations that work on all of them.\n\n# Task 1: Given a string s whose length is odd and at least 5, can you print \n# the middle three characters of it? Try to do it in one line.\n# Example: if the input is 'PrInCeToN', the the output should be 'nCe'\ns = 'PrInCeToN'\nprint(s[len(s) // 2 - 1 : len(s) // 2 + 2])\n\n# Task 2: Given a tuple, return a tuple that includes only every other element, starting\n# from the first. Example: if the input is (4, 5, 'cow', True, 9.4), then the output should\n# be (4, 'cow', 9.4). Again, try to do it in one line — there's an easy way to do it with slicing.\n\nt = (4, 5, 'cow', True, 9.4)\nprint(t[::2])\nprint(t[0::2]) # also acceptable\nprint(t[0:len(t):2]) # also acceptable, but less ideal\n\n# Task 3: Do the same as task 2, except start from the last element and alternate backwards.\n# Example: if the input is (3, 9, 1, 0, True, 'Tiger'), output should be ('Tiger', 0, 9)\n\nt = (3, 9, 1, 0, True, 'Tiger')\nprint(t[::-2])",
"4.2 List Comprehension and Other Useful List Functions",
"# Task 1: Given a list of names, return a new list where all the names which are more than 15\n# characters long are removed.\n\nnames = ['Nalin Ranjan', 'Howard Yen', 'Sacheth Sathyanarayanan', 'Henry Tang', \\\n 'Austen Mazenko', 'Michael Tang', 'Dangely Canabal', 'Vicky Feng']\n\n# Write your solution below\nprint([name for name in names if len(name) <= 15])\n\n# Task 2: Given a list of strings, return a list which is the reverse of the original, with\n# all the strings reversed. Example: if the input is ['Its', 'nine', 'o-clock', 'on a', 'Saturday'],\n# then the output should be ['yadrutaS', 'a no', 'kcolc-o', 'enin', 'stI']. Try to do it in one line!\n\n# HINT: Use list comprehension and negative indices!\n\nl = ['Its', 'nine', 'o-clock', 'on a', 'Saturday']\nprint([word[::-1] for word in l[::-1]]) \nprint([l[-i][::-1] for i in range(1, len(l) + 1)]) # Also acceptable, but a little less ideal\n\nl1 = [5, 2, 6, 1, 8, 2, 4]\nl2 = [6, 1, 2, 4]\n\n# Python has a bunch of useful built-in list functions. Some of them are\n\nl1.append(3) # adds the element 3 to the end of the the list\nprint(l1)\n\nl1.insert(1, 7) # adds the element 7 as the second element of the list\nprint(l1)\n\nl1.remove(2) # Removes the first occurrence of 7 in the list (DOES NOT REMOVE ALL)\nprint(l1)\n\nl1.pop(4) # Remove the fifth item of the list (since everything is zero-indexed)\nprint(l1)\n\nl1.sort() # Sorts the list in increasing order\nprint(l1)\n\nl1.sort(reverse=True) # Sorts the list in decreasing order\nprint(l1)\n\nprint(l1.count(2)) # Counts the number of occurrences of the number 2 in the list\n\nl1.extend(l2) # Appends all elements in l2 to the end of l1\nprint(l1)\n\n# If the list is numeric, we can find the min, max, and sum easily:\nprint('Sum:', sum(l1))\nprint('Minimum:', min(l1))\nprint('Maximum:', max(l1))\n\n# You can see all the list methods at https://www.w3schools.com/python/python_ref_list.asp",
"4.3 Sets and Dictionaries",
"# Task 1: In a dictionary, keys must be unique, but values need not be. Given a dictionary, write a script\n# that prints the set of all unique values in a dictionary. Example: if the dictionary is\n# {'Cap': 'bicker', 'Quad': 'sign-in', 'Colonial': 'sign-in', 'Tower': 'bicker', 'Charter': '???'}\n# The program should print {'sign-in', 'bicker', '???'}\n\nd = {'Cap': 'bicker', 'Quad': 'sign-in', 'Colonial': 'sign-in', 'Tower': 'bicker', 'Charter': '???'}\nunique_vals = set()\nfor key in d: \n unique_vals.add(d[key])\n\nprint(unique_vals)\n\n# In fact, there's a more Pythonic way to do this in one line:\nprint(set([val for val in d.values()]))\n\n# We used list comprehension to put every value into a list (which may have contained duplicates)\n# and then converting it to a set removed the duplicates (since every element in a set must be unique).\n\n# Task 2: Given a passage of text (a string), analyze the frequency of each individual letter. Sort\n# the letters by their frequency in the passage. Does your distribution look reasonable for English?\n\npassage = \"\"\"it was the best of times, it was the worst of times, it was the age of wisdom, it was \n the age of foolishness, it was the epoch of belief, it was the epoch of incredulity, it was \n the season of Light, it was the season of Darkness, it was the spring of hope, it was the \n winter of despair, we had everything before us, we had nothing before us, we were all going \n direct to Heaven, we were all going direct the other way -- in short, the period was so far \n like the present period that some of its noisiest authorities insisted on its being received, \n for good or for evil, in the superlative degree of comparison only\"\"\"\n\n# Here's the alphabet to help you out; it'll help you ignore other characters\nalphabet = \"abcdefghijklmnopqrstuvwxyz\"\n\n# This adds a key in the dictionary for each letter of the alphabet\nd = dict.fromkeys(alphabet, 0)\n\nfor char in passage:\n if char in alphabet: d[char] += 1\n\n# Don't change the code below: it'll take your dictionary of frequencies and sort it from most frequent to least\nfreqs = [(letter, d[letter]) for letter in d]\nfreqs.sort(key = lambda x: x[1], reverse=True)\n\nprint(freqs)",
"Topic 5: Functions in Python\n5.1 Practice with Basic Functions",
"# Task 1: Write a function that returns the minimum of three numbers. Don't use the built-in min function\ndef my_min(a, b, c):\n if a <= b:\n return a if a <= c else c\n else:\n return b if b <= c else c\n \nprint('Minimum of 6, 3, 7 is', my_min(6, 3, 7))\nprint('Minimum of 0, 3.333, -52 is', my_min(0, -52, 3.333))\nprint('Minimum of -3, -1, 3.14159 is', my_min(-3, -1, 3.14159))\n\n# Task 2: Write a function that checks if a given tuple of numbers is increasing (that is, each number\n# is at least the number before it)\n\ndef my_increasing(t):\n if len(t) == 0: return True\n \n prev = t[0]\n for i in t:\n if i < prev: return False\n prev = i\n \n return True\n\nprint('(1, 2, 3, 4, 5, 7, 8) is increasing:', my_increasing((1, 2, 3, 4, 5, 7, 8)))\nprint('(1, 2, 3, 2, 5, 7, 8) is increasing:', my_increasing((1, 2, 3, 2, 5, 7, 8)))\nprint('(-1, 2, 3, 2.99, 5, 7, 8) is increasing:', my_increasing((1, 2, 3, 2, 5, 7, 8)))\n\n# Task 3: Given a list of numbers that is guaranteed to contain all but one of the consecutive integers\n# 1 to N (for some N), find the one that is missing. For example, if the input is [2, 1, 5, 4], your function\n# should return 3, because that's the number missing from 1-5.\n\ndef my_missing(l):\n s = sum(l)\n n = len(l) + 1\n return n * (n + 1) // 2 - s # Why does this work?\n\nprint(my_missing([2, 1, 5, 4]))\nprint(my_missing([3, 4, 6, 2, 5, 7, 9, 8]))",
"5.2 Recursion",
"# Task: The sequence of Fibonacci numbers starts with the numbers 1 and 1 and every subsequent term\n# is the sum of the previous two terms. So the sequence is 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 87, 144, ...\n# Can you write a simple recursive function that calculates the nth Fibonacci number?\n\n# WARNING: Don't call your function for anything more than 35 or pass a non-integer parameter. \n# Your notebook might crash if you do.\n\ndef fib(n):\n if n == 1 or n == 2: return 1\n \n return fib(n - 1) + fib(n - 2)\n\nprint(fib(35))",
"5.3 Memoization",
"# Part of the reason that we told you not to run your answer for 5.2 for large n is because the number of\n# function calls generated is exponentially large: for n = 35, the number of function calls you have is on\n# the order of 34 billion, which is a lot, even for a computer! If you did n = 75, the number of calls you\n# would make is approximately 37 sextillion, which is more than the number of seconds until the heat death\n# of the sun. \n\n# You can avert this issue, however, if you **memoize** your function, which is just a fancy way of saying\n# that you can remember values of your function instead of having to re-evaluate your function again. Python\n# has a handy memoization tool:\n\nfrom functools import lru_cache\n\n@lru_cache\ndef fib(n):\n if n == 1 or n == 2: return 1\n \n return fib(n - 1) + fib(n - 2)\n\nprint(fib(100)) # Works no problem!\n\n# All we had to do was add the import statement and 'decorate' the function we wanted to remember\n# values from with the line @lru_cache",
"Topic 6: Classes in Python\n6.1 Practice Writing Basic Classes",
"# Write a PrincetonStudent class, where a PrincetonStudent has a name, major, year,\n# set of clubs, and a preference ordering of dining halls. We want to have\n#\n# - a default constructor that initializes the PrincetonStudent with a name, major, PUID, year, no clubs,\n# and a random preference ordering of dining halls\n# - a special constructor (class method) called detailed_student that initializes a PrincetonStudent \n# with a name, major, year,\n# a specific set of clubs, and a particular preference ordering of dining halls\n# - a __str__() method that prints all the data of the student\n# - a move_dhall_to_top() function that takes a dhall and moves it to the top\n# of one's dining hall preference list\n# - a __lt__() method that returns true if and only if this student has a name that comes before\n# the other's alphabetically\n# - an __eq__() method that returns true if and only if the PUIDs of students are equal \n\n# HINT: To generate a random dining hall preference order, you can take a particular preference order\n# and shuffle it using the random.shuffle(list) function\n\nfrom random import shuffle\n\nclass PrincetonStudent():\n def __init__(self, name, major, puid, year):\n self.name = name\n self.major = major\n self.year = year\n self.puid = puid\n self.clubs = []\n \n # From least preferred to most\n self.dhall_pref = ['WuCox', 'Whitman', 'Forbes', 'CJL', 'RoMa']\n shuffle(self.dhall_pref)\n \n @classmethod\n # Initialize a student with name, major, year, specific set of clubs, and a dhall preference ordering\n def detailed_student(cls, name, major, puid, year, clubs, dhall_pref):\n new_student = PrincetonStudent(name, major, puid, year)\n new_student.clubs = clubs\n new_student.dhall_pref = dhall_pref\n \n return new_student\n \n def move_dhall_to_top(self, dhall):\n if dhall in self.dhall_pref:\n move_to_top = self.dhall_pref.remove(dhall)\n self.dhall_pref.append(dhall)\n \n # Returns a string description of the student. This allows us to call print(...) on a\n # PrincetonStudent and get an intelligible result\n def __str__(self):\n str_version = \"Name: \" + self.name + \"\\n\"\n str_version += \"Year: \" + str(self.year) + \"\\n\"\n str_version += \"Concentration: \" + self.major + \"\\n\"\n str_version += \"Clubs: \" + str(self.clubs) + \"\\n\"\n str_version += \"Dining Halls from Most Favorite to Least: \" + str(self.dhall_pref[::-1]) + \"\\n\"\n \n return str_version\n \n def __lt__(self, other):\n return (self.name.lower() < other.name.lower()) # This works because string comparison \n # is automatically alphabetical\n \n \n# Test your PrincetonStudent class using this test suite. Feel free to write your own too!\nnalin = PrincetonStudent('Nalin Ranjan', 'COS', '123456789', 2022)\nprint(nalin, end=\"\\n\\n\")\n\nnalin.clubs.extend(['ACM', 'Taekwondo', 'Princeton Legal Journal', 'Badminton'])\nprint(nalin, end=\"\\n\\n\")\n\nsacheth_clubs = ['ACM', 'Table Tennis']\nsacheth_prefs = ['WuCox', 'Whitman', 'Forbes', 'RoMa', 'CJL']\nsacheth = PrincetonStudent.detailed_student('Sacheth Sathyanarayanan', 'COS', \\\n '24681012', 2022, sacheth_clubs, sacheth_prefs)\nprint(sacheth)\n\nprint('Sacheth had a great meal at Whitman! It is now his favorite.\\n')\nsacheth.move_dhall_to_top('Whitman')\nprint(sacheth)\n\nprint('Sacheth is the same student as Nalin:', sacheth == nalin)\nprint('Sacheth\\'s name comes before Nalin\\'s:', sacheth < nalin)",
"6.2 Inheritance",
"# Write an ACMOfficer class that inherits the PrincetonStudent class. An ACMOfficer has every attribute\n# a PrincetonStudent has, and also a position and term expiration date. You'll only need to overwrite\n# the constructors to accommodate these two additions. Remember that you can still call the parent's \n# functions as subroutines.\n\nclass ACMOfficer(PrincetonStudent):\n def __init__(self, name, major, puid, year, acm_pos, acm_term_exp):\n PrincetonStudent.__init__(self, name, major, puid, year)\n self.acm_pos = acm_pos\n self.acm_term_exp = acm_term_exp\n \n @classmethod\n def detailed_officer(cls, name, major, puid, year, clubs, dhall_pref, acm_pos, acm_term_exp):\n new_officer = ACMOfficer(name, major, puid, year, acm_pos, acm_term_exp)\n new_officer.clubs = clubs\n new_officer.dhall_pref = dhall_pref\n \n return new_officer\n \n def __str__(self):\n str_version = PrincetonStudent.__str__(self)\n str_version += \"Position on the ACM Board: \" + self.acm_pos + \"\\n\"\n str_version += \"Term expires in: \" + str(self.acm_term_exp) + \"\\n\"\n \n return str_version\n \n# Test your PrincetonStudent class using this test suite. Feel free to write your own too!\nnalin = ACMOfficer('Nalin Ranjan', 'COS', '123456789', 2022, 'Chair', 2022)\nprint(nalin, end=\"\\n\\n\")\n\nnalin.clubs.extend(['ACM', 'Taekwondo', 'Princeton Legal Journal', 'Badminton'])\nprint(nalin, end=\"\\n\\n\")\n\nsacheth_clubs = ['ACM', 'Table Tennis']\nsacheth_prefs = ['WuCox', 'Whitman', 'Forbes', 'RoMa', 'CJL']\nsacheth = ACMOfficer.detailed_officer('Sacheth Sathyanarayanan', 'COS', '24681012', \n 2022, sacheth_clubs, sacheth_prefs, 'Treasurer', 2022)\nprint(sacheth)\n\nprint('Sacheth had a great meal at Whitman! It is now his favorite.\\n')\nsacheth.move_dhall_to_top('Whitman')\nprint(sacheth)\n\nprint('Sacheth is the same student as Nalin:', sacheth == nalin)\nprint('Sacheth\\'s name comes before Nalin\\'s:', sacheth < nalin)",
"Topic 7: Using Existing Python Libraries\nSigmoid activation functions are ubiquitous in machine learning. They all look somewhat like an S shape, starting\nout flat, and then somewhere in the middle jumping pretty quickly before leveling off. One example is the Gudermannian Function, which takes the form\n$$f(x, \\gamma) = \\gamma \\arctan \\left(\\tanh \\left( \\frac x \\gamma \\right) \\right)$$\nfor some value $\\gamma$. You can think of $\\gamma$ as a parameter that specifies \"which\" Gudermannian function we're talking about. Can you plot the Gudermannian Function in the range $[-5, 5]$ with $\\gamma = {2, 4, 6}$? You will need access to numpy to find implementations of the arctan and tanh functions, and you will need matplotlib to create the actual plot.\nHINT: Since we have three different values of $\\gamma$, we'll have three different curves on the same graph.",
"# Numpy contains many mathematical functions/data analysis tools you might want to use\nimport numpy as np\n\n# First: Write a function that returns the Gudermannian function evaluated at x.\ndef gudermannian(x, gamma):\n return gamma * np.arctan(np.tanh(x / gamma))\n\n# Next: use matplotlib to plot the function. HINT: Use matplotlib.pyplot\nfrom matplotlib import pyplot as plt # You'll refer to pyplot as plt from now on\n\n# HINT: pyplot requires that you have a set of x-values and a corresponding set of y-values.\n# To make your plot look like a continuous curve, just make your x-values close enough (say in\n# increments of 0.01). You'll have to use numpy's arange function (Google it!)\nx_vals = np.arange(-5, 5, 0.01)\n\n# Then, you'll have to make a set of y values for each gamma. HINT: If f(x) is a function\n# defined on a single number, then running it on x_vals evaluates the function at every x value\n# in x_vals. \n\nplt.plot(x_vals, gudermannian(x_vals, 2), label='gamma = 2')\nplt.plot(x_vals, gudermannian(x_vals, 4), label='gamma = 4')\nplt.plot(x_vals, gudermannian(x_vals, 6), label='gamma = 6')\nplt.legend()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
poppy-project/community-notebooks
|
tutorials-education/poppy_ergo_jr__decouverte_du_robot/TP2_mouvement_et_cartes-eleve_vf_.ipynb
|
lgpl-3.0
|
[
"from poppy.creatures import PoppyErgoJr\n\npoppy = PoppyErgoJr()\n",
"Encore une instruction pour bouger\nQUESTIONS \n\nLorsque la liste pos contient 6 angles en degrés, que permet de faire le jeu d'instructions suivant ? \nQuelle différence avec m.goal_position = 30 par exemple ?",
"i = 0\nfor m in poppy.motors:\n m.compliant = False\n m.goto_position(pos[i], 0.5, wait = True)\n i = i + 1\n",
"Lecture de marqueurs sous forme de QR-codes\nExécuter ces instructions avec les cartes prévues à cet effet. \nQue constatez-vous ?",
"# importation des outils nécessaires \nimport cv2\n%matplotlib inline\nimport matplotlib.pyplot as plt\nfrom hampy import detect_markers\n\n# affichage de l'image capturée\nimg = poppy.camera.frame\nplt.imshow(img)\n#récupère dans une liste les marqueurs trouvés dans l'image\nmarkers = detect_markers(img)\n\nvaleur = 0 \nfor m in markers:\n print('Found marker {} at {}'.format(m.id, m.center))\n m.draw_contour(img)\n valeur = m.id\n print(valeur)",
"Défi mouvement à l'aide de cartes\n\nEnigme : Les cartes ont été créées pour être lues. Pouvez-vous identifier comment à partir des valeurs, on peut reconstruire les noms des variables ? \nMettre toutes les leds des moteurs roses.\n\nDétecter l'un des 4 marqueurs et lui faire effectuer l'action correspondant à son nom : \n\nNext doit permettre de passer au moteur suivant de la liste des moteurs \nPrev de revenir au précédent\nRigh de faire augmenter la position courante de 5 degrés \nLeft de faire diminuer la position courante de 5 degrés\n\n\n\nPour identifier le moteur sélectionné, sa led sera rouge durant la selection.\n\nDurant un mouvement, la led du moteur qui bouge sera verte.\nOn commence par le moteur m1 et lorsque l'on a atteint le moteur m6, si la carte next est lue, le code se termine. \n\nremarque\nEn Python une boucle tant que s'écrit",
"while (condition): \n #corps de la boucle\n\nimport time\n# Aide : la commande time.sleep(2.0) permet de temporiser 2 secondes \nRIGH = 82737172\n\nLEFT = 76697084\n\nNEXT = 78698884\n\nPREV = 80826986\n\n# l'instruction ci-dessous permet de créer une liste \nliste_moteur = [m for m in poppy.motors] \n# toutefois, poppy.motors est déjà une liste. Pour vous en assurer, \n# type(poppy.motors) vous retourne le type du conteneur poppy.motors\n\n \n\n",
"Auteur : Georges Saliba, Lycée Victor Louis, Talence, sous licence CC BY SA"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
mne-tools/mne-tools.github.io
|
0.16/_downloads/plot_decoding_csp_space.ipynb
|
bsd-3-clause
|
[
"%matplotlib inline",
"====================================================================\nDecoding in sensor space data using the Common Spatial Pattern (CSP)\n====================================================================\nDecoding applied to MEG data in sensor space decomposed using CSP.\nHere the classifier is applied to features extracted on CSP filtered signals.\nSee http://en.wikipedia.org/wiki/Common_spatial_pattern and [1]_.\nReferences\n.. [1] Zoltan J. Koles. The quantitative extraction and topographic mapping\n of the abnormal components in the clinical EEG. Electroencephalography\n and Clinical Neurophysiology, 79(6):440--447, December 1991.",
"# Authors: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>\n# Romain Trachel <romain.trachel@inria.fr>\n#\n# License: BSD (3-clause)\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nimport mne\nfrom mne import io\nfrom mne.datasets import sample\n\nprint(__doc__)\n\ndata_path = sample.data_path()",
"Set parameters and read data",
"raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'\nevent_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'\ntmin, tmax = -0.2, 0.5\nevent_id = dict(aud_l=1, vis_l=3)\n\n# Setup for reading the raw data\nraw = io.read_raw_fif(raw_fname, preload=True)\nraw.filter(2, None, fir_design='firwin') # replace baselining with high-pass\nevents = mne.read_events(event_fname)\n\nraw.info['bads'] = ['MEG 2443'] # set bad channels\npicks = mne.pick_types(raw.info, meg='grad', eeg=False, stim=False, eog=False,\n exclude='bads')\n\n# Read epochs\nepochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True,\n picks=picks, baseline=None, preload=True)\n\nlabels = epochs.events[:, -1]\nevoked = epochs.average()",
"Decoding in sensor space using a linear SVM",
"from sklearn.svm import SVC # noqa\nfrom sklearn.model_selection import ShuffleSplit # noqa\nfrom mne.decoding import CSP # noqa\n\nn_components = 3 # pick some components\nsvc = SVC(C=1, kernel='linear')\ncsp = CSP(n_components=n_components, norm_trace=False)\n\n# Define a monte-carlo cross-validation generator (reduce variance):\ncv = ShuffleSplit(n_splits=10, test_size=0.2, random_state=42)\nscores = []\nepochs_data = epochs.get_data()\n\nfor train_idx, test_idx in cv.split(labels):\n y_train, y_test = labels[train_idx], labels[test_idx]\n\n X_train = csp.fit_transform(epochs_data[train_idx], y_train)\n X_test = csp.transform(epochs_data[test_idx])\n\n # fit classifier\n svc.fit(X_train, y_train)\n\n scores.append(svc.score(X_test, y_test))\n\n# Printing the results\nclass_balance = np.mean(labels == labels[0])\nclass_balance = max(class_balance, 1. - class_balance)\nprint(\"Classification accuracy: %f / Chance level: %f\" % (np.mean(scores),\n class_balance))\n\n# Or use much more convenient scikit-learn cross_val_score function using\n# a Pipeline\nfrom sklearn.pipeline import Pipeline # noqa\nfrom sklearn.model_selection import cross_val_score # noqa\ncv = ShuffleSplit(n_splits=10, test_size=0.2, random_state=42)\nclf = Pipeline([('CSP', csp), ('SVC', svc)])\nscores = cross_val_score(clf, epochs_data, labels, cv=cv, n_jobs=1)\nprint(scores.mean()) # should match results above\n\n# And using reuglarized csp with Ledoit-Wolf estimator\ncsp = CSP(n_components=n_components, reg='ledoit_wolf', norm_trace=False)\nclf = Pipeline([('CSP', csp), ('SVC', svc)])\nscores = cross_val_score(clf, epochs_data, labels, cv=cv, n_jobs=1)\nprint(scores.mean()) # should get better results than above\n\n# plot CSP patterns estimated on full data for visualization\ncsp.fit_transform(epochs_data, labels)\ndata = csp.patterns_\nfig, axes = plt.subplots(1, 4)\nfor idx in range(4):\n mne.viz.plot_topomap(data[idx], evoked.info, axes=axes[idx], show=False)\nfig.suptitle('CSP patterns')\nfig.tight_layout()\nmne.viz.utils.plt_show()"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
dtamayo/MachineLearning
|
Day4/Transit/MachineLearningWorkShop-TESSSimulatedData.ipynb
|
gpl-3.0
|
[
"MachineLearningWorkShop at UCSC\nAug 18th - Learning with TESS Simulated data\nLast month we explored all type of learning algorithms with simulated light curves including:\n\ndifferent planet sizes\nvarious period\nvarious white/red noise level\ndifferent baseline\n\nWhile the result looked promising, we need to extend our experiments to more realistic data. \nThe data set we are using today is created from SPyFFI, an image simulator created by Zack Berta and his undergrad student Jacobi Kosiarok. \nThe ingrediants included by SPyFFI are:\n- catalogs of real stars\n- somewhat realistic Camera and CCD effects, such as PRF variation, readout smear, resembling the TESS telescope\n- spacecraft effects such as jitter/focusing change\n- somewhat realistic noise buget\n- transits and stellar variability (sine curves) draw from Kepler\nSPyFFI out puts image time series like this:\n\nWe process the images from 10 days of TESS observations with standard photometry pipeline, and create light curves for all the stars with TESS magnituded brighter than 14. \nFor the region of sky we simulated (6 by 6 square degree), this results in 16279 stars. To make our tasks today simpler, we are going to work with only ~4000 stars.",
"import sklearn\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.cross_validation import train_test_split\nfrom sklearn.utils import shuffle\nfrom sklearn import metrics\nfrom sklearn.metrics import roc_curve\nfrom sklearn.metrics import classification_report\nfrom sklearn.decomposition import PCA\nfrom sklearn.svm import SVC\nfrom sklearn.cross_validation import cross_val_score\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.ensemble import GradientBoostingClassifier\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.grid_search import GridSearchCV\nimport matplotlib\nfrom matplotlib import pyplot as plt\nimport pandas as pd\nimport numpy as np\n%matplotlib inline\n\ndef make_ROC_curve(testY, predY, name):\n fig2 = plt.figure()\n ax= fig2.add_subplot(1,1,1)\n fpr, tpr, _ = roc_curve(testY, predY)\n ax.plot(fpr, tpr, label = name)\n ax.set_title(('ROC Curve for %s') % name)\n ax.set_ylabel('True Positive Rate')\n ax.set_xlabel('False Positive Rate')\ndef collect_lc_feature(idlist):\n LCfeature=np.zeros([len(idlist),481])\n count=0\n for i in idlist:\n #print i\n infile=\"LTFsmall/\"+str(i)+\".ltf\"\n lc=np.loadtxt(infile)[:,1]\n LCfeature[count,0]=i\n LCfeature[count,1:]=lc\n count+=1\n return LCfeature\ndef plot_confusion_matrix(cm, title='Confusion matrix', cmap=plt.cm.Blues):\n plt.imshow(cm, interpolation='nearest', cmap=cmap)\n plt.title(title)\n plt.colorbar()\n tick_marks = np.arange(2)\n plt.xticks(tick_marks, ['false positives', 'transits'], rotation=45)\n plt.yticks(tick_marks, ['false positives', 'transits'])\n plt.tight_layout()\n plt.ylabel('True label')\n plt.xlabel('Predicted label')\ndef fit(model,name,data,cv=True):\n trainX,trainY,testX,testY,X,Y=data\n model.fit(trainX, trainY)\n predY = model.predict(testX)\n f1score = metrics.f1_score(testY, predY)\n cm = metrics.confusion_matrix(testY, predY)\n plot_confusion_matrix(cm)\n predY=model.predict_proba(testX)[:,1]\n rocscore = metrics.roc_auc_score(testY, predY)\n precision, recall, thresholds = metrics.precision_recall_curve(testY, predY)\n aucscore=metrics.auc(precision,recall,reorder=True)\n \n print \"#####################################\"\n print \"Result using\",model\n print \"f1 score from train test split %f\" % f1score\n print \"roc score from train test split %f\" % rocscore\n print \"auc score from train test split %f\" % aucscore\n if cv:\n #cvscore= cross_val_score(model, X, Y, cv = 5, scoring = 'f1')\n cvscore= cross_val_score(model, X, Y, cv = 5, scoring = 'roc_auc')\n print \"f1 score from CV5 %f\" % np.mean(cvscore)\n \n \n print cm\n make_ROC_curve(testY,predY,name)\n return",
"Let's first look at what the TESS light curves: \nTESS_simulated_10day_small.csv is the combined feature file.\nTESS_simulated_lc_small.csv is the light curve file.",
"#df=pd.read_csv(\"TESS_simulated_10day_small.csv\",index_col=0)\n#df=pd.read_csv(\"TESS_simulateddata_combinedfeatures.csv\",index_col=0)\ndf=pd.read_csv(\"TESSfield_19h_44d_combinedfeatures.csv\")\n\n#LCfeature=pd.DataFrame(collect_lc_feature(df['Ids']),columns=['Ids']+list(np.arange(480)))\n#LCfeature.to_csv\n#LCfeature=pd.read_csv(\"TESS_simulated_lc_small.csv\",index_col=0)\n\n#plt.plot(LCfeature.iloc[-1,1:],'.')\n\n#plt.plot(LCfeature.iloc[0,:],'.')",
"The combined feature files contain features from Box Least Squal measurements, and 20 PCA components from the light curve. Later on hopefully we can explore how to create new features from the light curves.\nLet's first examine the columns in the combined feature files:",
"df.columns",
"Columns Ids, Catalog_Period, Depth, Catalog_Epoch records the information we have regarding the injected transits. Anything with period smaller than 0 is not a transit. \nSNR is the signal to noise calculated for the transits using the catalog value. \nSNR=\\sqrt{Ntransit}*Depth/200mmag\nThere are three type of Y values included in this feature file:\n-CatalogY marks True for all the transit planets with SNR larger than 8.5 and BLS_SignaltoPinkNoise_1_0 larger than 7. \n-ManuleY marks True all the transits identified by eye. \n-CombinedY marks True if either CatalogY and ManuleY is True. \nThe signals identified by manule effort but not by Catalog is due to blending. The signal missed by manule effort is either because of low signal to noise or due to cuts in Ntransit and Q value (standard practice before manuel inspection). \nLet's drop the irrelevent columns before training:",
"X=df.drop(['Ids','CatalogY','ManuleY','CombinedY','Catalog_Period','Depth','Catalog_Epoch','SNR'],axis=1)\n#print X.isnull().any()\n\nY=df['CombinedY']\n\ntrainX, testX, trainY, testY= train_test_split(X, Y,test_size = 0.2)\ndata=[trainX,trainY,testX,testY,X,Y]\nprint X.shape, Y[Y==1].shape",
"We show the results from some standard algorithms here:",
"model=RandomForestClassifier(n_estimators=1000,class_weight={0:10,1:1},n_jobs=-1)\nname=\"RFC\"\nfit(model,name,data,cv=False)\n\nmodel=GradientBoostingClassifier(n_estimators=1000)\nname=\"GBC\"\nfit(model,name,data,cv=False)\n\nfrom xgboost import XGBClassifier\nmodel = XGBClassifier(n_estimators=1000)\n#model=XGBClassifier(learning_rate=0.1,\n# n_estimators=1000,\n# max_depth=5,\n# min_child_weight=1,\n# gamma=0,\n# subsample=0.8,\n# colsample_bytree=0.8,\n# objective='binary:logistic')\nmodel.fit(trainX,trainY)\n#model.plot_importance(bst)\n#name=\"XGBoost\"\nfit(model,name,data,cv=False)",
"We can compare the prediction with the Manuel selection and the Catalog selection as the following:",
"from sklearn.cross_validation import StratifiedKFold\n\nmodel=RandomForestClassifier(n_estimators=3000,n_jobs=-1,class_weight='balanced_subsample',oob_score=True)\n\nskf=StratifiedKFold(Y,n_folds=4)\ni=1\nfor train_index,test_index in skf:\n trainX=X.iloc[train_index];testX=X.iloc[test_index]\n trainY=np.array(Y)[train_index];testY=np.array(Y)[test_index]\n #print train_index\n traincatY=np.array(df['CatalogY'])[train_index];testcatY=np.array(df['CatalogY'])[test_index]\n trainmanY=np.array(df['ManuleY'])[train_index];testmanY=np.array(df['ManuleY'])[test_index]\n model.fit(trainX,trainY)\n \n predY=model.predict_proba(testX)[:,1]\n rocscore = metrics.roc_auc_score(testY, predY)\n precision, recall, thresholds = metrics.precision_recall_curve(testY, predY)\n aucscore=metrics.auc(precision,recall,reorder=True)\n predY=model.predict(testX)\n f1score = metrics.f1_score(testY, predY)\n print \"#####################################\"\n print \"fold %d:\" % i\n print \"f1 score from train test split %f\" % f1score\n print \"roc score from train test split %f\" % rocscore\n print \"auc score from train test split %f\" % aucscore\n print \"oob score from RF %f\" % model.oob_score_\n \n flag1=(predY==1)*(predY==np.array(testY))\n\n flag2=(predY==1)*(predY==np.array(testmanY))\n \n print \"predict Transit %d\" % len(predY[predY==1])\n print \"real Transit %d\" % len(testY[testY==1])\n print \"real Transit selected by eye %d\" % len(testmanY[testmanY==1])\n print \"predicted Transit that's real %d\" % len(predY[flag1])\n print \"predicted Transits selected by eye %d\" % len(predY[flag2])\n i+=1\n\nmodel=GradientBoostingClassifier(n_estimators=3000)\n\nskf=StratifiedKFold(Y,n_folds=4)\ni=1\nfor train_index,test_index in skf:\n trainX=X.iloc[train_index];testX=X.iloc[test_index]\n trainY=np.array(Y)[train_index];testY=np.array(Y)[test_index]\n #print train_index\n traincatY=np.array(df['CatalogY'])[train_index];testcatY=np.array(df['CatalogY'])[test_index]\n trainmanY=np.array(df['ManuleY'])[train_index];testmanY=np.array(df['ManuleY'])[test_index]\n model.fit(trainX,trainY)\n predY=model.predict(testX)\n f1score = metrics.f1_score(testY, predY)\n predY=model.predict_proba(testX)[:,1]\n rocscore = metrics.roc_auc_score(testY, predY)\n \n print \"#####################################\"\n print \"fold %d:\" % i\n print \"f1 score from train test split %f\" % f1score\n print \"roc score from train test split %f\" % rocscore\n \n flag1=(predY==1)*(predY==np.array(testY))\n\n flag2=(predY==1)*(predY==np.array(testmanY))\n \n print \"predict Transit %d\" % len(predY[predY==1])\n print \"real Transit %d\" % len(testY[testY==1])\n print \"real Transit selected by eye %d\" % len(testmanY[testmanY==1])\n print \"predicted Transit that's real %d\" % len(predY[flag1])\n print \"predicted Transits selected by eye %d\" % len(predY[flag2])\n i+=1\n\nmodel=XGBClassifier(n_estimators=3000)\n#GradientBoostingClassifier(n_estimators=3000)\n\nskf=StratifiedKFold(Y,n_folds=4)\ni=1\nfor train_index,test_index in skf:\n trainX=X.iloc[train_index];testX=X.iloc[test_index]\n trainY=np.array(Y)[train_index];testY=np.array(Y)[test_index]\n #print train_index\n traincatY=np.array(df['CatalogY'])[train_index];testcatY=np.array(df['CatalogY'])[test_index]\n trainmanY=np.array(df['ManuleY'])[train_index];testmanY=np.array(df['ManuleY'])[test_index]\n model.fit(trainX,trainY)\n predY=model.predict(testX)\n f1score = metrics.f1_score(testY, predY)\n predY=model.predict_proba(testX)[:,1]\n rocscore = metrics.roc_auc_score(testY, 
predY)\n \n print \"#####################################\"\n print \"fold %d:\" % i\n print \"f1 score from train test split %f\" % f1score\n print \"roc score from train test split %f\" % rocscore\n \n flag1=(predY==1)*(predY==np.array(testY))\n\n flag2=(predY==1)*(predY==np.array(testmanY))\n \n print \"predict Transit %d\" % len(predY[predY==1])\n print \"real Transit %d\" % len(testY[testY==1])\n print \"real Transit selected by eye %d\" % len(testmanY[testmanY==1])\n print \"predicted Transit that's real %d\" % len(predY[flag1])\n print \"predicted Transits selected by eye %d\" % len(predY[flag2])\n i+=1",
"Feature Selection",
"featurelist=X.columns\nrfc= RandomForestClassifier(n_estimators=1000)\nrfc.fit(trainX, trainY)\n\nimportances = rfc.feature_importances_\nstd = np.std([tree.feature_importances_ for tree in rfc.estimators_],\n axis=0)\nindices = np.argsort(importances)[::-1]\n\n# Print the feature ranking\nprint(\"Feature ranking:\")\nthreshold=0.02\ndroplist=[]\nfor f in range(X.shape[1]):\n if importances[indices[f]]<threshold:\n droplist.append(featurelist[indices[f]])\n print(\"%d. feature %d (%s %f)\" % (f + 1, indices[f], featurelist[indices[f]],importances[indices[f]]))\n\nX_selected=X.drop(droplist,axis=1)\nX_selected.head()\n\nmodel=RandomForestClassifier(n_estimators=1000,n_jobs=-1,class_weight='balanced_subsample')\nname=\"RFC\"\nfit(model,name,data)",
"Tasks for Today:\n\noptimize the models.\nlower the threshold of what is True in combinedY, and determine a limit of 0.7 f1 score.\nfeature engerneering with unsupervised learning using the LC files. \nMore feature selection. \nTry the larger data set if interested."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
Hyperparticle/deep-learning-foundation
|
lessons/intro-to-tflearn/intro_to_tensorflow_solution.ipynb
|
mit
|
[
"Solutions\nProblem 1\nImplement the Min-Max scaling function ($X'=a+{\\frac {\\left(X-X_{\\min }\\right)\\left(b-a\\right)}{X_{\\max }-X_{\\min }}}$) with the parameters:\n$X_{\\min }=0$\n$X_{\\max }=255$\n$a=0.1$\n$b=0.9$",
"# Problem 1 - Implement Min-Max scaling for grayscale image data\ndef normalize_grayscale(image_data):\n \"\"\"\n Normalize the image data with Min-Max scaling to a range of [0.1, 0.9]\n :param image_data: The image data to be normalized\n :return: Normalized image data\n \"\"\"\n a = 0.1\n b = 0.9\n grayscale_min = 0\n grayscale_max = 255\n return a + ( ( (image_data - grayscale_min)*(b - a) )/( grayscale_max - grayscale_min ) )",
"Problem 2\n\nUse tf.placeholder() for features and labels since they are the inputs to the model.\nAny math operations must have the same type on both sides of the operator. The weights are float32, so the features and labels must also be float32.\nUse tf.Variable() to allow weights and biases to be modified.\nThe weights must be the dimensions of features by labels. The number of features is the size of the image, 28*28=784. The size of labels is 10.\nThe biases must be the dimensions of the labels, which is 10.",
"features_count = 784\nlabels_count = 10\n\n# Problem 2 - Set the features and labels tensors\nfeatures = tf.placeholder(tf.float32)\nlabels = tf.placeholder(tf.float32)\n\n# Problem 2 - Set the weights and biases tensors\nweights = tf.Variable(tf.truncated_normal((features_count, labels_count)))\nbiases = tf.Variable(tf.zeros(labels_count))",
"Problem 3\nConfiguration 1\n* Epochs: 1\n* Learning Rate: 0.1\nConfiguration 2\n* Epochs: 4 or 5\n* Learning Rate: 0.2"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
maxis42/ML-DA-Coursera-Yandex-MIPT
|
5 Data analysis applications/Homework/5 project recommender systems/Recommender systems.ipynb
|
mit
|
[
"Recommender systems\nОписание задачи\nНебольшой интернет-магазин попросил вас добавить ранжирование товаров в блок \"Смотрели ранее\" - в нём теперь надо показывать не последние просмотренные пользователем товары, а те товары из просмотренных, которые он наиболее вероятно купит. Качество вашего решения будет оцениваться по количеству покупок в сравнении с прошлым решением в ходе А/В теста, т.к. по доходу от продаж статзначимость будет достигаться дольше из-за разброса цен. Таким образом, ничего заранее не зная про корреляцию оффлайновых и онлайновых метрик качества, в начале проекта вы можете лишь постараться оптимизировать recall@k и precision@k.\nЭто задание посвящено построению простых бейзлайнов для этой задачи: ранжирование просмотренных товаров по частоте просмотров и по частоте покупок. Эти бейзлайны, с одной стороны, могут помочь вам грубо оценить возможный эффект от ранжирования товаров в блоке - например, чтобы вписать какие-то числа в коммерческое предложение заказчику, а с другой стороны, могут оказаться самым хорошим вариантом, если данных очень мало (недостаточно для обучения даже простых моделей).\nВходные данные\nВам дается две выборки с пользовательскими сессиями - id-шниками просмотренных и id-шниками купленных товаров. Одна выборка будет использоваться для обучения (оценки популярностей товаров), а другая - для теста.\nВ файлах записаны сессии по одной в каждой строке. Формат сессии: id просмотренных товаров через , затем идёт ; после чего следуют id купленных товаров (если такие имеются), разделённые запятой. Например, 1,2,3,4; или 1,2,3,4;5,6.\nГарантируется, что среди id купленных товаров все различные.\nВажно:\nСессии, в которых пользователь ничего не купил, исключаем из оценки качества.\nЕсли товар не встречался в обучающей выборке, его популярность равна 0.\nРекомендуем разные товары. И их число должно быть не больше, чем количество различных просмотренных пользователем товаров.\nРекомендаций всегда не больше, чем минимум из двух чисел: количество просмотренных пользователем товаров и k в recall@k / precision@k.\nЗадание\n\nНа обучении постройте частоты появления id в просмотренных и в купленных (id может несколько раз появляться в просмотренных, все появления надо учитывать)\nРеализуйте два алгоритма рекомендаций:\nсортировка просмотренных id по популярности (частота появления в просмотренных),\nсортировка просмотренных id по покупаемости (частота появления в покупках).\n\n\nДля данных алгоритмов выпишите через пробел AverageRecall@1, AveragePrecision@1, AverageRecall@5, AveragePrecision@5 на обучающей и тестовых выборках, округляя до 2 знака после запятой. Это будут ваши ответы в этом задании. Посмотрите, как они соотносятся друг с другом. Где качество получилось выше? Значимо ли это различие? Обратите внимание на различие качества на обучающей и тестовой выборке в случае рекомендаций по частотам покупки.\n\nЕсли частота одинаковая, то сортировать нужно по возрастанию момента просмотра (чем раньше появился в просмотренных, тем больше приоритет)",
"from __future__ import division, print_function\n\nimport numpy as np\nimport pandas as pd\n\nfrom IPython.core.interactiveshell import InteractiveShell\nInteractiveShell.ast_node_interactivity = \"all\"",
"1. Reading sessions train and test datasets.",
"# Reading train and test data\nwith open('coursera_sessions_train.txt', 'r') as f:\n sess_train = f.read().splitlines()\nwith open('coursera_sessions_test.txt', 'r') as f:\n sess_test = f.read().splitlines()",
"2. Split datasets by looks and purchases.",
"# Create train array splitted by looks (look_items) and purchases (pur_items)\nsess_train_lp = []\nfor sess in sess_train:\n look_items, pur_items = sess.split(';')\n look_items = map(int, look_items.split(','))\n if len(pur_items) > 0:\n pur_items = map(int, pur_items.split(','))\n else:\n pur_items = []\n sess_train_lp.append([look_items, pur_items])\n \n# Create test array splitted by looks (look_items) and purchases (pur_items)\nsess_test_lp = []\nfor sess in sess_test:\n look_items, pur_items = sess.split(';')\n look_items = map(int, look_items.split(','))\n if len(pur_items) > 0:\n pur_items = map(int, pur_items.split(','))\n else:\n pur_items = []\n sess_test_lp.append([look_items, pur_items])",
"3. Create and sort arrays of unique ids counters for looks and purchases for train dataset.",
"# Array of looks\nsess_train_l = [row[0] for row in sess_train_lp]\nsess_train_l_np = np.array( [id_n for sess in sess_train_l for id_n in sess] )\n\n# Array of unique ids and looks in train data\nsess_train_l_cnt = np.transpose(np.unique(sess_train_l_np, return_counts=True))\n\nsess_train_l_cnt\n\n# Array of purchases\nsess_train_p = [row[1] for row in sess_train_lp]\nsess_train_p_np = np.array( [id_n for sess in sess_train_p for id_n in sess] )\n\n# Array of unique ids and purchases in train dataset\nsess_train_p_cnt = np.transpose(np.unique(sess_train_p_np, return_counts=True))\n\nsess_train_p_cnt\n\n# Sorting arrays of looks and purchases by counts\nsess_train_l_cnt = sess_train_l_cnt[sess_train_l_cnt[:,1].argsort()][::-1]\nsess_train_p_cnt = sess_train_p_cnt[sess_train_p_cnt[:,1].argsort()][::-1]",
"4. Calculating metrics for train dataset with suggestions based on looks.",
"def prec_rec_metrics(session, reccomendations, k):\n purchase = 0\n for ind in reccomendations:\n if ind in session:\n purchase += 1 \n precision = purchase / k\n recall = purchase / len(session)\n return(precision, recall)\n\n# Calculate metrics for train dataset, suggestions based on looks\nprec_at_1_tr_l, rec_at_1_tr_l = [], []\nprec_at_5_tr_l, rec_at_5_tr_l = [], []\nk1, k5 = 1, 5\nfor i, sess_p in enumerate(sess_train_p):\n # skip sessions without purchases\n if sess_p == []: continue\n \n # looks ids\n sess_l = sess_train_l[i]\n\n # sorted looks ids indices in sess_train_l_cnt array\n # sort in accordance with looks counts\n l_ind_sess = []\n for j in range(len(sess_l)):\n l_ind_sess.append(np.where(sess_train_l_cnt[:,0] == sess_l[j])[0][0])\n l_ind_sess_sorted = np.unique(l_ind_sess)\n \n # k1 recommendations\n num_of_recs_k1 = min(k1, len(sess_l))\n if num_of_recs_k1 == 0: continue\n recs_k1 = sess_train_l_cnt[l_ind_sess_sorted[:num_of_recs_k1],0]\n \n # k1 metrics\n prec_1, rec_1 = prec_rec_metrics(sess_p, recs_k1, k1)\n prec_at_1_tr_l.append(prec_1)\n rec_at_1_tr_l.append(rec_1)\n \n # k5 recommendations\n num_of_recs_k5 = min(k5, len(sess_l))\n if num_of_recs_k5 == 0: continue\n recs_k5 = sess_train_l_cnt[l_ind_sess_sorted[:num_of_recs_k5],0]\n \n # k5 metrics\n prec_5, rec_5 = prec_rec_metrics(sess_p, recs_k5, k5)\n prec_at_5_tr_l.append(prec_5)\n rec_at_5_tr_l.append(rec_5)\n\navg_prec_at_1_tr_l = np.mean(prec_at_1_tr_l)\navg_rec_at_1_tr_l = np.mean(rec_at_1_tr_l)\navg_prec_at_5_tr_l = np.mean(prec_at_5_tr_l)\navg_rec_at_5_tr_l = np.mean(rec_at_5_tr_l)\n\nwith open('ans1.txt', 'w') as f:\n r1 = '%.2f' % round(avg_rec_at_1_tr_l, 2)\n p1 = '%.2f' % round(avg_prec_at_1_tr_l, 2)\n r5 = '%.2f' % round(avg_rec_at_5_tr_l, 2)\n p5 = '%.2f' % round(avg_prec_at_5_tr_l, 2)\n ans1 = ' '.join([r1, p1, r5, p5])\n print('Answer 1:', ans1)\n f.write(ans1)",
"5. Calculating metrics for train dataset with suggestions based on purchases.",
"# Calculate metrics for train dataset, suggestions based on purchases\nprec_at_1_tr_p, rec_at_1_tr_p = [], []\nprec_at_5_tr_p, rec_at_5_tr_p = [], []\nk1, k5 = 1, 5\n\nfor i, sess_p in enumerate(sess_train_p):\n # skip sessions without purchases\n if sess_p == []: continue\n \n # looks ids\n sess_l = sess_train_l[i]\n\n # sorted looks ids indices in sess_train_p_cnt array\n # sort in accordance with purchases counts\n l_ind_sess = []\n for j in range(len(sess_l)):\n if sess_l[j] not in sess_train_p_cnt[:,0]: continue\n l_ind_sess.append(np.where(sess_train_p_cnt[:,0] == sess_l[j])[0][0])\n l_ind_sess_sorted = np.unique(l_ind_sess)\n \n # k1 recommendations\n num_of_recs_k1 = min(k1, len(sess_l), len(l_ind_sess_sorted))\n if num_of_recs_k1 == 0: continue\n recs_k1 = sess_train_p_cnt[l_ind_sess_sorted[:num_of_recs_k1],0]\n \n # k1 metrics\n prec_1, rec_1 = prec_rec_metrics(sess_p, recs_k1, k1)\n prec_at_1_tr_p.append(prec_1)\n rec_at_1_tr_p.append(rec_1)\n \n # k5 recommendations\n num_of_recs_k5 = min(k5, len(sess_l), len(l_ind_sess_sorted))\n if num_of_recs_k5 == 0: continue\n recs_k5 = sess_train_p_cnt[l_ind_sess_sorted[:num_of_recs_k5],0]\n \n # k5 metrics\n prec_5, rec_5 = prec_rec_metrics(sess_p, recs_k5, k5)\n prec_at_5_tr_p.append(prec_5)\n rec_at_5_tr_p.append(rec_5)\n\navg_prec_at_1_tr_p = np.mean(prec_at_1_tr_p)\navg_rec_at_1_tr_p = np.mean(rec_at_1_tr_p)\navg_prec_at_5_tr_p = np.mean(prec_at_5_tr_p)\navg_rec_at_5_tr_p = np.mean(rec_at_5_tr_p)\n\nwith open('ans2.txt', 'w') as f:\n r1 = '%.2f' % round(avg_rec_at_1_tr_p, 2)\n p1 = '%.2f' % round(avg_prec_at_1_tr_p, 2)\n r5 = '%.2f' % round(avg_rec_at_5_tr_p, 2)\n p5 = '%.2f' % round(avg_prec_at_5_tr_p, 2)\n ans2 = ' '.join([r1, p1, r5, p5])\n print('Answer 2:', ans2)\n f.write(ans2)",
"6. Create and sort arrays of unique ids counters for looks and purchases for test dataset.",
"# Array of looks\nsess_test_l = [row[0] for row in sess_test_lp]\nsess_test_l_np = np.array( [id_n for sess in sess_test_l for id_n in sess] )\n\n# Array of unique ids and looks in train data\n#sess_test_l_cnt = np.transpose(np.unique(sess_test_l_np, return_counts=True))\n\nsess_test_l_np\n#sess_test_l_cnt\n\n# Array of purchases\nsess_test_p = [row[1] for row in sess_test_lp]\nsess_test_p_np = np.array( [id_n for sess in sess_test_p for id_n in sess] )\n\n# Array of unique ids and purchases in train dataset\n#sess_test_p_cnt = np.transpose(np.unique(sess_test_p_np, return_counts=True))\n\nsess_test_p_np\n#sess_test_p_cnt\n\n# Sorting arrays of looks and purchases by counts\n#sess_train_l_cnt = sess_train_l_cnt[sess_train_l_cnt[:,1].argsort()][::-1]\n#sess_train_p_cnt = sess_train_p_cnt[sess_train_p_cnt[:,1].argsort()][::-1]",
"7. Calculating metrics for test dataset with suggestions based on looks.",
"# Calculate metrics for test dataset, suggestions based on looks\nprec_at_1_tst_l, rec_at_1_tst_l = [], []\nprec_at_5_tst_l, rec_at_5_tst_l = [], []\nk1, k5 = 1, 5\n\nfor i, sess_p in enumerate(sess_test_p):\n # skip sessions without purchases\n if sess_p == []: continue\n \n # looks ids\n sess_l = sess_test_l[i]\n\n # sorted looks ids indices in sess_train_l_cnt array\n # sort in accordance with looks counts\n l_ind_sess = []\n new_ids = []\n for j in range(len(sess_l)):\n if sess_l[j] not in sess_train_l_cnt[:,0]:\n new_ids.append(sess_l[j])\n continue\n l_ind_sess.append(np.where(sess_train_l_cnt[:,0] == sess_l[j])[0][0])\n l_ind_sess_sorted = np.unique(l_ind_sess)\n \n # k1 recommendations\n num_of_recs_k1 = min(k1, len(sess_l))\n if num_of_recs_k1 == 0: continue\n if l_ind_sess != []:\n recs_k1 = sess_train_l_cnt[l_ind_sess_sorted[:num_of_recs_k1],0]\n else:\n recs_k1 = []\n recs_k1 = np.concatenate((np.array(recs_k1, dtype='int64'), np.unique(np.array(new_ids, dtype='int64'))))[:num_of_recs_k1]\n #recs_k1\n \n # k1 metrics\n prec_1, rec_1 = prec_rec_metrics(sess_p, recs_k1, k1)\n prec_at_1_tst_l.append(prec_1)\n rec_at_1_tst_l.append(rec_1)\n \n # k5 recommendations\n num_of_recs_k5 = min(k5, len(sess_l))\n if num_of_recs_k5 == 0: continue\n if l_ind_sess != []:\n recs_k5 = sess_train_l_cnt[l_ind_sess_sorted[:num_of_recs_k5],0]\n else:\n recs_k5 = []\n recs_k5 = np.concatenate((np.array(recs_k5, dtype='int64'), np.unique(np.array(new_ids, dtype='int64'))))[:num_of_recs_k5]\n #recs_k5\n \n # k5 metrics\n prec_5, rec_5 = prec_rec_metrics(sess_p, recs_k5, k5)\n prec_at_5_tst_l.append(prec_5)\n rec_at_5_tst_l.append(rec_5)\n\navg_prec_at_1_tst_l = np.mean(prec_at_1_tst_l)\navg_rec_at_1_tst_l = np.mean(rec_at_1_tst_l)\navg_prec_at_5_tst_l = np.mean(prec_at_5_tst_l)\navg_rec_at_5_tst_l = np.mean(rec_at_5_tst_l)\n\nwith open('ans3.txt', 'w') as f:\n r1 = '%.2f' % round(avg_rec_at_1_tst_l, 2)\n p1 = '%.2f' % round(avg_prec_at_1_tst_l, 2)\n r5 = '%.2f' % round(avg_rec_at_5_tst_l, 2)\n p5 = '%.2f' % round(avg_prec_at_5_tst_l, 2)\n ans3 = ' '.join([r1, p1, r5, p5])\n print('Answer 3:', ans3)\n f.write(ans3)",
"8. Calculating metrics for test dataset with suggestions based on purchases.",
"def uniquifier(seq):\n seen = set()\n return [x for x in seq if not (x in seen or seen.add(x))]\n\n# Calculate metrics for test dataset, suggestions based on purchases\nprec_at_1_tst_p, rec_at_1_tst_p = [], []\nprec_at_5_tst_p, rec_at_5_tst_p = [], []\nk1, k5 = 1, 5\nfor i, sess_p in enumerate(sess_test_p):\n # skip sessions without purchases\n if sess_p == []: continue\n \n # looks ids\n sess_l = sess_test_l[i]\n\n # sorted looks ids indices in sess_train_p_cnt array\n # sort in accordance with purchases counts\n l_ind_sess = []\n new_ids = []\n for j in range(len(sess_l)):\n if sess_l[j] not in sess_train_p_cnt[:,0]:\n new_ids.append(sess_l[j])\n continue\n l_ind_sess.append(np.where(sess_train_p_cnt[:,0] == sess_l[j])[0][0])\n l_ind_sess_sorted = np.unique(l_ind_sess)\n \n # k1 recommendations\n num_of_recs_k1 = min(k1, len(sess_l))\n if num_of_recs_k1 == 0: continue\n if l_ind_sess != []:\n recs_k1 = sess_train_p_cnt[l_ind_sess_sorted[:num_of_recs_k1],0]\n else:\n recs_k1 = []\n recs_k1 = np.concatenate((np.array(recs_k1, dtype='int64'), np.array(uniquifier(np.array(new_ids, dtype='int64')))))[:num_of_recs_k1]\n \n # k1 metrics\n prec_1, rec_1 = prec_rec_metrics(sess_p, recs_k1, k1)\n prec_at_1_tst_p.append(prec_1)\n rec_at_1_tst_p.append(rec_1)\n \n # k5 recommendations\n num_of_recs_k5 = min(k5, len(sess_l))\n if num_of_recs_k5 == 0: continue\n if l_ind_sess != []:\n recs_k5 = sess_train_p_cnt[l_ind_sess_sorted[:num_of_recs_k5],0]\n else:\n recs_k5 = []\n recs_k5 = np.concatenate((np.array(recs_k5, dtype='int64'), np.array(uniquifier(np.array(new_ids, dtype='int64')))))[:num_of_recs_k5]\n \n # k5 metrics\n prec_5, rec_5 = prec_rec_metrics(sess_p, recs_k5, k5)\n prec_at_5_tst_p.append(prec_5)\n rec_at_5_tst_p.append(rec_5)\n\navg_prec_at_1_tst_p = np.mean(prec_at_1_tst_p)\navg_rec_at_1_tst_p = np.mean(rec_at_1_tst_p)\navg_prec_at_5_tst_p = np.mean(prec_at_5_tst_p)\navg_rec_at_5_tst_p = np.mean(rec_at_5_tst_p)\n\nwith open('ans4.txt', 'w') as f:\n r1 = '%.2f' % round(avg_rec_at_1_tst_p, 2)\n p1 = '%.2f' % round(avg_prec_at_1_tst_p, 2)\n r5 = '%.2f' % round(avg_rec_at_5_tst_p, 2)\n p5 = '%.2f' % round(avg_prec_at_5_tst_p, 2)\n ans4 = ' '.join([r1, p1, r5, p5])\n print('Answer 4:', ans4)\n f.write(ans4)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
P7h/FutureLearn__Learn_to_Code_for_Data_Analysis
|
Week#2/Week_2_project.ipynb
|
apache-2.0
|
[
"Project 2: Holiday weather\nby Rob Griffiths, 11 September 2015, updated 11 April 2017\nThis is the project notebook for Week 2 of The Open University's Learn to code for Data Analysis course.\nThere is nothing I like better than taking a holiday. In the winter I like to have a two week break in a country where I can be guaranteed sunny dry days. In the summer I like to have two weeks off relaxing in my garden in London. However I'm often disappointed because I pick a fortnight when the weather is dull and it rains. So in this project I am going to use the historic weather data from the Weather Underground for London to try to predict two good weather weeks to take off as holiday next summer. Of course the weather in the summer of 2016 may be very different to 2014 but it should give me some indication of when would be a good time to take a summer break.\nEnv",
"import sys\nsys.version\n\nimport warnings\nwarnings.simplefilter('ignore', FutureWarning)\n\nfrom pandas import *\nshow_versions()",
"Getting the data\nWeather Underground keeps historical weather data collected in many airports around the world. Right-click on the following URL and choose 'Open Link in New Window' (or similar, depending on your browser):\nhttp://www.wunderground.com/history\nWhen the new page opens start typing 'London' in the 'Location' input box and when the pop up menu comes up with the option 'London, United Kingdom' select it and then click on 'Submit'. \nWhen the next page opens with London Heathrow data, click on the 'Custom' tab and select the time period From: 1 January 2014 to: 31 December 2014 and then click on 'Get History'. The data for that year should then be displayed further down the page. \nThe data can be obtained month by month in CSV format. For example, the January 2014 data is available at this URL:\nhttps://www.wunderground.com/history/airport/LHR/2014/1/1/MonthlyHistory.html?format=1\nwhere LHR is the airport code. Changing the first 1 to 2, 3, etc. will show instead the data for February, March, etc. You can also change the 2014 in the URL to another year. \nThe date column heading will be the timezone, e.g. 'GMT' (Greenwich Mean Time) for London, or the time offset, e.g. '-02'. It may even change throughout the year. For example, Delhi (aiport code DEL) is '+0430' from March to August and '+0330' in the other months.\nYou can copy each month's data directly from the browser to a text editor like Notepad or TextEdit, to obtain a single file with as many months as you wish.\nWeather Underground has changed in the past the way it provides data and may do so again in the future. \nI have therefore collated the whole 2014 data in the provided 'London_2014.csv' file. \nNow load the CSV file into a dataframe making sure that any extra spaces are skipped:",
"delhi = read_csv('Delhi_DEL_2014.csv', skipinitialspace=True)\ndelhi.head()",
"Cleaning the data\nFirst we need to clean up the data. I'm not going to make use of 'WindDirDegrees' in my analysis, but you might in yours so we'll rename 'WindDirDegrees< br />' to 'WindDirDegrees'.",
"delhi = delhi.rename(columns={'WindDirDegrees<br />' : 'WindDirDegrees'})",
"remove the < br /> html line breaks from the values in the 'WindDirDegrees' column.",
"delhi['WindDirDegrees'] = delhi['WindDirDegrees'].str.rstrip('<br />')",
"and change the values in the 'WindDirDegrees' column to float64:",
"delhi['WindDirDegrees'] = delhi['WindDirDegrees'].astype('float64') ",
"We definitely need to change the values in the 'GMT' column into values of the datetime64 date type.",
"delhi['Date'] = to_datetime(delhi['Date'])",
"We also need to change the index from the default to the datetime64 values in the 'Date' column so that it is easier to pull out rows between particular dates and display more meaningful graphs:",
"delhi.index = delhi['Date']\n\ndelhi.head()",
"Finding a summer break\nAccording to meteorologists, summer extends for the whole months of June, July, and August in the northern hemisphere and the whole months of December, January, and February in the southern hemisphere. So as I'm in the northern hemisphere I'm going to create a dataframe that holds just those months using the datetime index, like this:",
"summer = delhi.loc[datetime(2014,5,1) : datetime(2014,8,31)]",
"I now look for the days with warm temperatures.",
"summer[summer['Mean TemperatureC'] >= 25].head()",
"Summer 2014 was rather cool in London: there are no days with temperatures of 25 Celsius or higher. Best to see a graph of the temperature and look for the warmest period.\nSo next we tell Jupyter to display any graph created inside this notebook:",
"%matplotlib inline",
"Now let's plot the 'Mean TemperatureC' for the summer:",
"summer[['Mean TemperatureC']].plot(grid=True, figsize=(20,8));",
"Well looking at the graph the second half of July looks good for mean temperatures over 20 degrees C so let's also put precipitation on the graph too:",
"summer[['Mean TemperatureC', 'Precipitationmm']].plot(grid=True, figsize=(20,8));",
"The second half of July is still looking good, with just a couple of peaks showing heavy rain. Let's have a closer look by just plotting mean temperature and precipitation for July.",
"july = summer.loc[datetime(2014,7,1) : datetime(2014,7,31)]\njuly[['Mean TemperatureC', 'Precipitationmm']].plot(grid=True, figsize=(20,8));",
"Yes, second half of July looks pretty good, just two days that have significant rain, the 25th and the 28th and just one day when the mean temperature drops below 20 degrees, also the 28th.\nConclusions\nThe graphs have shown the volatility of a British summer, but a couple of weeks were found when the weather wasn't too bad in 2014. Of course this is no guarantee that the weather pattern will repeat itself in future years. To make a sensible prediction we would need to analyse the summers for many more years. By the time you have finished this course you should be able to do that."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
hadim/public_notebooks
|
Analysis/MSD_Analyzer/notebook.ipynb
|
mit
|
[
"How to analyze particle motion with MSD (Mean Square Displacement)\nNote that this notebook is largely inspired from the excellent tutorials of Jean-Yves Tinevez available at https://tinevez.github.io/msdanalyzer/.\nThe goal of this notebook is mainly to help others (the author included) to analyze particle motion through MSD. I also would like to create a Python module that help dealing with all that kind of stuff.\nTODO: I am still not sure the way I compute the MSD mean and also SEM and STD... I need to double check this.\nTODO: I also need to find a way to improve MSD calculation : https://stackoverflow.com/questions/32988269/speedup-msd-calculation-in-python",
"# Some classic Python modules import\n%matplotlib inline\n\nimport matplotlib.pyplot as plt\nplt.rcParams['figure.figsize'] = (15, 10)\n\nimport pandas as pd\nimport numpy as np\n\nfrom scipy import optimize",
"Brownian motion\nSimulate particle motion",
"# Parameters\nSPACE_UNITS = '\\mu m'\nTIME_UNITS = 's'\n\nN_PARTICLES = 10\nN_TIME_STEPS = 100\nCOORDS = ['x', 'y']\nN_DIM = len(COORDS)\n\ncm = plt.get_cmap('gist_rainbow')\nCOLORS = [cm(i/N_PARTICLES) for i in range(N_PARTICLES)]\n\n# Typical values taken from studies of proteins diffusing in membranes:\n\n# Diffusion coefficient\nD = 1e-3 # µm^2/s\n\n# Time step between acquisition; fast acquisition!\ndt = 0.05 # s\n\n# Area size, just used to disperse particles in 2D. Has no impact on\n# analysis\nSIZE = 2 # µm",
"The Einstein equation tells us that displacements follow a Gaussian PDF with standard deviation given by :",
"k = np.sqrt(2 * D * dt)\nprint(k)\n\n# Generate trajectories\n\ntrajs = []\n\nfor i in range(N_PARTICLES):\n \n # Generate time vector\n time = np.arange(0, N_TIME_STEPS) * dt\n\n # Get random displacement\n dp = k * np.random.randn(N_TIME_STEPS, N_DIM)\n\n # Setup custom initial position\n initial_position = SIZE * np.random.rand(1, N_DIM)\n dp[0] = initial_position\n\n # Get position\n p = np.cumsum(dp, axis=0)\n \n # Convert to Dataframe\n p = pd.DataFrame({c: p[:, i] for i, c in enumerate(COORDS)})\n p['t'] = time\n \n trajs.append(p)\n \n# Plot trajectories\nfig, ax = plt.subplots()\n\nfor color, traj in zip(COLORS, trajs):\n traj.plot(x='x', y='y', color=color, ax=ax, legend=False)\n \nax.set_xlabel(COORDS[0])\nax.set_ylabel(COORDS[1])",
"MSD analysis",
"def compute_msd(trajectory, dt, coords=['x', 'y']):\n\n tau = trajectory['t'].copy()\n shifts = np.floor(tau / dt).astype(np.int)\n \n msds = pd.DataFrame()\n msds = np.zeros(shifts.size)\n msds_std = np.zeros(shifts.size)\n msds_sem = np.zeros(shifts.size)\n weights = np.zeros(shifts.size)\n\n for i, shift in enumerate(shifts):\n diffs = trajectory[coords] - trajectory[coords].shift(-shift)\n sqdist = np.square(diffs).sum(axis=1)\n msds[i] = sqdist.mean()\n msds_std[i] = sqdist.std()\n msds_sem[i] = sqdist.sem()\n weights[i] = len(sqdist.dropna())\n\n msds = pd.DataFrame({'msds': msds, 'tau': tau, 'msds_std': msds_std,\n 'msds_sem': msds_sem, 'weights': weights})\n return msds\n\ndef compute_msd_mean(trajs, dt, n_steps, coords=['x', 'y']):\n \n msd_mean = pd.DataFrame()\n msd_mean['tau'] = np.arange(0, n_steps) * dt\n msd_mean['msds'] = np.zeros(n_steps)\n msd_mean['msds_std'] = np.zeros(n_steps)\n msd_mean['msds_sem'] = np.zeros(n_steps)\n msd_mean['weights'] = np.zeros(n_steps)\n \n all_msd = []\n\n for i, traj in zip(range(len(trajs)), trajs):\n msds = compute_msd(traj, dt=dt, coords=coords)\n all_msd.append(msds)\n\n msd_mean['msds'] += msds['msds'] * msds['weights']\n msd_mean['msds_std'] += msds['msds_std'] * msds['weights']\n msd_mean['msds_sem'] += msds['msds_sem'] * msds['weights']\n \n msd_mean['weights'] += msds['weights']\n \n msd_mean['msds'] /= msd_mean['weights']\n msd_mean['msds_std'] /= msd_mean['weights']\n msd_mean['msds_sem'] /= msd_mean['weights']\n\n msd_mean.dropna(inplace=True)\n \n return msd_mean, all_msd\n\n# Compute MSD\nmsd_mean, all_msd = compute_msd_mean(trajs, dt, N_TIME_STEPS, coords=COORDS)\n\n# Fit model\ndef model(tau, D):\n return 2*D*N_DIM*tau\n\nclip_factor = 0.25# Compute MSD\nmsd_mean, all_msd = compute_msd_mean(trajs, dt, N_TIME_STEPS, coords=COORDS)\nt_stamp = np.round(len(msd_mean) * clip_factor, 0)\n(D,), pcov = optimize.curve_fit(model, msd_mean.loc[:t_stamp, 'tau'], msd_mean.loc[:t_stamp, 'msds'])\nprint(D)\n\n# Plot all MSD\n\nfig, ax = plt.subplots()\n\nfor color, msd in zip(COLORS, all_msd): \n msd.plot(x='tau', y='msds', color=color, ax=ax, legend=False)\n \nax.set_xlabel(\"Delay (${}$)\".format(TIME_UNITS))\nax.set_ylabel(\"MSD (${}^2$)\".format(SPACE_UNITS))\n\n# Plot MSD mean\n\nfig, ax = plt.subplots()\n\nmsd_mean.plot(x='tau', y='msds', color=color, ax=ax, legend=False)\nax.fill_between(msd_mean['tau'], msd_mean['msds'] - msd_mean['msds_sem'],\n msd_mean['msds'] + msd_mean['msds_sem'], alpha=0.2)\n\nax.plot(msd_mean['tau'], model(msd_mean['tau'], D), color='red')\n\nax.set_xlabel(\"Delay (${}$)\".format(TIME_UNITS))\nax.set_ylabel(\"MSD (${}^2$)\".format(SPACE_UNITS))",
"Directed motion\nSimulate particle motion",
"# Parameters\nSPACE_UNITS = '\\mu m'\nTIME_UNITS = 's'\n\nN_PARTICLES = 10\nN_TIME_STEPS = 100\nCOORDS = ['x', 'y']\nN_DIM = len(COORDS)\n\ncm = plt.get_cmap('gist_rainbow')\nCOLORS = [cm(i/N_PARTICLES) for i in range(N_PARTICLES)]\n\n# Typical values taken from studies of proteins diffusing in membranes:\n\n# Diffusion coefficient\nD = 1e-3 # µm^2/s\n\n# Time step between acquisition; fast acquisition!\ndt = 0.05 # s\n\n# Mean velocity\nvm = 0.05 # µm/s\n\n# Area size, just used to disperse particles in 2D. Has no impact on\n# analysis\nSIZE = 2 # µm\n\n# Generate trajectories\n\ntrajs = []\n\nfor i in range(N_PARTICLES):\n \n # Generate time vector\n time = np.arange(0, N_TIME_STEPS) * dt\n \n # Velocity orientation\n theta = 2 * np.pi * np.random.rand()\n \n # Mean velocity\n v = vm * (1 + 1/4 * np.random.randn())\n\n # Get random displacement\n dp = k * np.random.randn(N_TIME_STEPS, N_DIM)\n \n dp_brownian = k * np.random.randn(N_TIME_STEPS, N_DIM)\n dp_directed = v * dt * (np.cos(theta) * np.ones((N_TIME_STEPS, 1)) + np.sin(theta) * np.ones((N_TIME_STEPS, 1)))\n \n dp = dp_brownian + dp_directed\n\n # Setup custom initial position\n initial_position = SIZE * np.random.rand(1, N_DIM)\n dp[0] = initial_position\n \n # Get position\n p = np.cumsum(dp, axis=0)\n \n # Convert to Dataframe\n p = pd.DataFrame({c: p[:, i] for i, c in enumerate(COORDS)})\n p['t'] = time\n \n trajs.append(p)\n \n# Plot trajectories\nfig, ax = plt.subplots()\n\nfor color, traj in zip(COLORS, trajs):\n traj.plot(x='x', y='y', color=color, ax=ax, legend=False)\n \nax.set_xlabel(COORDS[0])\nax.set_ylabel(COORDS[1])",
"MSD analysis",
"# Compute MSD\nmsd_mean, all_msd = compute_msd_mean(trajs, dt, N_TIME_STEPS, coords=COORDS)\n\n# Fit model\ndef model(tau, D, v):\n return 2*D*N_DIM*tau + v*tau**2\n\nclip_factor = 1\nt_stamp = np.round(len(msd_mean) * clip_factor, 0)\n(D, v), pcov = optimize.curve_fit(model, msd_mean.loc[:t_stamp, 'tau'], msd_mean.loc[:t_stamp, 'msds'])\nprint(D)\nprint(v)\n\n# Plot all MSD\n\nfig, ax = plt.subplots()\n\nfor color, msd in zip(COLORS, all_msd): \n msd.plot(x='tau', y='msds', color=color, ax=ax, legend=False)\n \nax.set_xlabel(\"Delay (${}$)\".format(TIME_UNITS))\nax.set_ylabel(\"MSD (${}^2$)\".format(SPACE_UNITS))\n\n# Plot MSD mean\n\nfig, ax = plt.subplots()\n\nmsd_mean.plot(x='tau', y='msds', color=color, ax=ax, legend=False)\nax.fill_between(msd_mean['tau'], msd_mean['msds'] - msd_mean['msds_sem'],\n msd_mean['msds'] + msd_mean['msds_sem'], alpha=0.2)\n\nax.plot(msd_mean['tau'], model(msd_mean['tau'], D, v), color='red')\n\nax.set_xlabel(\"Delay (${}$)\".format(TIME_UNITS))\nax.set_ylabel(\"MSD (${}^2$)em\".format(SPACE_UNITS))",
"Confined motion (more work is needed here)\nSimulate particle motion",
"# Parameters\nSPACE_UNITS = '\\mu m'\nTIME_UNITS = 's'\n\nN_PARTICLES = 10\nN_TIME_STEPS = 200\nCOORDS = ['x', 'y']\nN_DIM = len(COORDS)\n\ncm = plt.get_cmap('gist_rainbow')\nCOLORS = [cm(i/N_PARTICLES) for i in range(N_PARTICLES)]\n\n# Typical values taken from studies of proteins diffusing in membranes:\n\n# Diffusion coefficient\nD = 1e-3 # µm^2/s\n\n# Time step between acquisition; fast acquisition!\ndt = 0.05 # s\n\n# Boltzman constant\nkt = 4.2821e-21 # kBoltzman x T @ 37ºC\n\n# Area size, just used to disperse particles in 2D. Has no impact on\n# analysis\nSIZE = 5 # µm\n\nk = np.sqrt(2 * D * dt)\n\n# Confined motion parameters\n\n# Particle in a potential: settings the 'stiffness' of the energy potential\n# Typical diameter of the trap (still in micron)\nltrap = 0.05 # µm\nktrap = kt / ltrap**2 # = thermal energy / trap size ^ 2\n\n# Generate trajectories\n\ndef Fx(x, initial_position):\n return ktrap * (x - initial_position)\n\ntrajs = []\n\nfor i in range(N_PARTICLES):\n \n # Generate time vector\n time = np.arange(0, N_TIME_STEPS) * dt\n \n # Energy potential:\n #V = @(x) 0.5 * ktrap * sum (x .^ 2) # Unused, just to show\n\n p = np.zeros((N_TIME_STEPS, N_DIM))\n\n # Setup custom initial position\n initial_position = SIZE * np.random.rand(1, N_DIM)\n p[0] = initial_position\n\n for j in range(1, N_TIME_STEPS):\n dxtrap = D / kt * Fx(p[j-1], initial_position) * dt # ad hoc displacement\n dxbrownian = k * np.random.randn(1, N_DIM);\n\n p[j] = p[j-1] + dxtrap + dxbrownian\n \n # Convert to Dataframe\n p = pd.DataFrame({c: p[:, i] for i, c in enumerate(COORDS)})\n p['t'] = time\n \n trajs.append(p)\n \n# Plot trajectories\nfig, ax = plt.subplots()\n\nfor color, traj in zip(COLORS, trajs):\n traj.plot(x='x', y='y', color=color, ax=ax, legend=False)\n \nax.set_xlabel(COORDS[0])\nax.set_ylabel(COORDS[1])",
"MSD analysis",
"# Compute MSD\nmsd_mean, all_msd = compute_msd_mean(trajs, dt, N_TIME_STEPS, coords=COORDS)\n\n# Plot all MSD\n\nfig, ax = plt.subplots()\n\nfor color, msd in zip(COLORS, all_msd): \n msd.plot(x='tau', y='msds', color=color, ax=ax, legend=False)\n \nax.set_xlabel(\"Delay (${}$)\".format(TIME_UNITS))\nax.set_ylabel(\"MSD (${}^2$)\".format(SPACE_UNITS))"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
basnijholt/holoviews
|
doc/Homepage.ipynb
|
bsd-3-clause
|
[
"HoloViews is a Python library that makes analyzing and visualizing scientific or engineering data much simpler, more intuitive, and more easily reproducible. Instead of specifying every step for each plot, HoloViews lets you store your data in an annotated format that is instantly visualizable, with immediate access to both the numeric data and its visualization. Examples of how HoloViews is used in Python scripts as well as in live Jupyter Notebooks may be accessed directly from the holoviews-contrib repository. Here is a quick example of HoloViews in action:",
"import numpy as np\nimport holoviews as hv\nhv.notebook_extension('matplotlib')\nfractal = hv.Image(np.load('mandelbrot.npy'))\n\n((fractal * hv.HLine(y=0)).hist() + fractal.sample(y=0))",
"Fundamentally, a HoloViews object is just a thin wrapper around your data, with the data always being accessible in its native numerical format, but with the data displaying itself automatically whether alone or alongside or overlaid with other HoloViews objects as shown above. The actual rendering is done using a separate library like matplotlib or bokeh, but all of the HoloViews objects can be used without any plotting library available, so that you can easily create, save, load, and manipulate HoloViews objects from within your own programs for later analysis. HoloViews objects support arbitrarily high dimensions, using continuous, discrete, or categorical indexes and values, with flat or hierarchical organizations, and sparse or dense data formats. The objects can then be flexibly combined, selected, sliced, sorted, sampled, or animated, all by specifying what data you want to see rather than by writing plotting code. The goal is to put the plotting code into the background, as an implementation detail to be written once and reused often, letting you focus clearly on your data itself in daily work.\nMore detailed example\nEven extremely complex relationships between data elements can be expressed succinctly in HoloViews, allowing you to explore them with ease:",
"%%opts Points [scaling_factor=50] Contours (color='w')\ndots = np.linspace(-0.45, 0.45, 19)\n\nlayouts = {y: (fractal * hv.Points(fractal.sample([(i,y) for i in dots])) +\n fractal.sample(y=y) +\n hv.operation.threshold(fractal, level=np.percentile(fractal.sample(y=y)['z'], 90)) +\n hv.operation.contours(fractal, levels=[np.percentile(fractal.sample(y=y)['z'], 60)]))\n for y in np.linspace(-0.3, 0.3, 21)}\n\nhv.HoloMap(layouts, kdims=['Y']).collate().cols(2)",
"Here we have built a dictionary indexed by a numerical value y, containing a set of Layout objects that are each composed of four HoloViews objects. We then collated the Layout objects into a HoloViews data structure that can display arbitrarily high dimensional data. The result is that in A we can see the same fractal data as above, but with a horizontal cross section indicated using a set of dots with sizes proportional to the underlying data values, illustrating how even a simple annotation can be used to reflect other data of interest. B shows a cross-section of the fractal, C shows a thresholded version of it, and D shows the same data with a contour outline overlaid. The threshold and contour levels used are not fixed, but are calculated as the 90th or 60th percentile of the data values along the selected cross section, using standard Python/NumPy functions. All of this data is packaged into a single HoloViews data structure for a range of cross sections, allowing the data for a particular cross section to be revealed by moving the Y-value slider at right. Even with these complicated interrelationships between data elements, the code still only needs to focus on the data that you want to see, not on the details of the plotting or interactive controls, which are handled by HoloViews and the underlying plotting libraries.\nNote that just as the 2D array became a 1D curve automatically by sampling to get the cross section, this entire figure would become a single static frame with no slider bar if you chose a specific Y value by re-running with .select(Y=0.3) before .cols(2). There is nothing in the code above that adds the slider bar explicitly -- it appears automatically, just because there is an additional dimension of data that has not been laid out spatially. Additional sliders would appear if there were other dimensions being varied as well, e.g. for parameter-space explorations.\nThis functionality is designed to complement the IPython/Jupyter Notebook interface, though it can be used just as well separately. This web page and all the HoloViews Tutorials are runnable notebooks, which allow you to interleave text, Python code, and graphical results easily. With HoloViews, you can put a minimum of code in the notebook (typically one or two lines per subfigure), specifying what you would like to see rather than the details of how it should be plotted. HoloViews makes the IPython Notebook a practical solution for both exploratory research (since viewing nearly any chunk of data just takes a line or two of code) and for long-term reproducibility of the work (because both the code and the visualizations are preserved in the notebook file forever, and the data and publishable figures can both easily be exported to an archive on disk). See the Tutorials for detailed examples, and then start enjoying working with your data!"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ericmjl/Network-Analysis-Made-Simple
|
archive/7-game-of-thrones-case-study-student.ipynb
|
mit
|
[
"Let's change gears and talk about Game of thrones or shall I say Network of Thrones.\nIt is suprising right? What is the relationship between a fatansy TV show/novel and network science or python(it's not related to a dragon).\nIf you haven't heard of Game of Thrones, then you must be really good at hiding. Game of Thrones is the hugely popular television series by HBO based on the (also) hugely popular book series A Song of Ice and Fire by George R.R. Martin. In this notebook, we will analyze the co-occurrence network of the characters in the Game of Thrones books. Here, two characters are considered to co-occur if their names appear in the vicinity of 15 words from one another in the books.\n\nAndrew J. Beveridge, an associate professor of mathematics at Macalester College, and Jie Shan, an undergraduate created a network from the book A Storm of Swords by extracting relationships between characters to find out the most important characters in the book(or GoT).\nThe dataset is publicly avaiable for the 5 books at https://github.com/mathbeveridge/asoiaf. This is an interaction network and were created by connecting two characters whenever their names (or nicknames) appeared within 15 words of one another in one of the books. The edge weight corresponds to the number of interactions. \nCredits:\nBlog: https://networkofthrones.wordpress.com\nMath Horizons Article: https://www.maa.org/sites/default/files/pdf/Mathhorizons/NetworkofThrones%20%281%29.pdf",
"import pandas as pd\nimport networkx as nx\nimport matplotlib.pyplot as plt\nimport community\nimport numpy as np\nimport warnings\nwarnings.filterwarnings('ignore')\n\n%matplotlib inline",
"Let's load in the datasets",
"book1 = pd.read_csv('datasets/game_of_thrones_network/asoiaf-book1-edges.csv')\nbook2 = pd.read_csv('datasets/game_of_thrones_network/asoiaf-book2-edges.csv')\nbook3 = pd.read_csv('datasets/game_of_thrones_network/asoiaf-book3-edges.csv')\nbook4 = pd.read_csv('datasets/game_of_thrones_network/asoiaf-book4-edges.csv')\nbook5 = pd.read_csv('datasets/game_of_thrones_network/asoiaf-book5-edges.csv')",
"The resulting DataFrame book1 has 5 columns: Source, Target, Type, weight, and book. Source and target are the two nodes that are linked by an edge. A network can have directed or undirected edges and in this network all the edges are undirected. The weight attribute of every edge tells us the number of interactions that the characters have had over the book, and the book column tells us the book number.",
"book1",
"Once we have the data loaded as a pandas DataFrame, it's time to create a network. We create a graph for each book. It's possible to create one MultiGraph instead of 5 graphs, but it is easier to play with different graphs.",
"G_book1 = nx.Graph()\nG_book2 = nx.Graph()\nG_book3 = nx.Graph()\nG_book4 = nx.Graph()\nG_book5 = nx.Graph()",
"Let's populate the graph with edges from the pandas DataFrame.",
"for row in book1.iterrows():\n G_book1.add_edge(row[1]['Source'], row[1]['Target'], weight=row[1]['weight'], book=row[1]['book'])\n\nfor row in book2.iterrows():\n G_book2.add_edge(row[1]['Source'], row[1]['Target'], weight=row[1]['weight'], book=row[1]['book'])\nfor row in book3.iterrows():\n G_book3.add_edge(row[1]['Source'], row[1]['Target'], weight=row[1]['weight'], book=row[1]['book'])\nfor row in book4.iterrows():\n G_book4.add_edge(row[1]['Source'], row[1]['Target'], weight=row[1]['weight'], book=row[1]['book'])\nfor row in book5.iterrows():\n G_book5.add_edge(row[1]['Source'], row[1]['Target'], weight=row[1]['weight'], book=row[1]['book'])\n\nbooks = [G_book1, G_book2, G_book3, G_book4, G_book5]",
"Let's have a look at these edges.",
"list(G_book1.edges(data=True))[16]\n\nlist(G_book1.edges(data=True))[400]",
"Finding the most important node i.e character in these networks.\nIs it Jon Snow, Tyrion, Daenerys, or someone else? Let's see! Network Science offers us many different metrics to measure the importance of a node in a network as we saw in the first part of the tutorial. Note that there is no \"correct\" way of calculating the most important node in a network, every metric has a different meaning.\nFirst, let's measure the importance of a node in a network by looking at the number of neighbors it has, that is, the number of nodes it is connected to. For example, an influential account on Twitter, where the follower-followee relationship forms the network, is an account which has a high number of followers. This measure of importance is called degree centrality.\nUsing this measure, let's extract the top ten important characters from the first book (book[0]) and the fifth book (book[4]).",
"deg_cen_book1 = nx.degree_centrality(books[0])\n\ndeg_cen_book5 = nx.degree_centrality(books[4])\n\nsorted(deg_cen_book1.items(), key=lambda x:x[1], reverse=True)[0:10]\n\nsorted(deg_cen_book5.items(), key=lambda x:x[1], reverse=True)[0:10]\n\n# Plot a histogram of degree centrality\nplt.hist(list(nx.degree_centrality(G_book4).values()))\nplt.show()",
"Exercise\nCreate a new centrality measure, weighted_degree(Graph, weight) which takes in Graph and the weight attribute and returns a weighted degree dictionary. Weighted degree is calculated by summing the weight of the all edges of a node and find the top five characters according to this measure. [5 mins]",
"def weighted_degree(G, weight):\n result = dict()\n for node in G.nodes():\n weight_degree = 0\n for n in G.edges([node], data=True):\n weight_degree += ____________\n result[node] = weight_degree\n return result\n\nplt.hist(___________)\nplt.show()\n\nsorted(weighted_degree(G_book1, 'weight').items(), key=lambda x:x[1], reverse=True)[0:10]",
"Let's do this for Betweeness centrality and check if this makes any difference\nHaha, evil laugh",
"# First check unweighted, just the structure\n\nsorted(nx.betweenness_centrality(G_book1).items(), key=lambda x:x[1], reverse=True)[0:10]\n\n# Let's care about interactions now\n\nsorted(nx.betweenness_centrality(G_book1, weight='weight').items(), key=lambda x:x[1], reverse=True)[0:10]",
"PageRank\nThe billion dollar algorithm, PageRank works by counting the number and quality of links to a page to determine a rough estimate of how important the website is. The underlying assumption is that more important websites are likely to receive more links from other websites.",
"# by default weight attribute in pagerank is weight, so we use weight=None to find the unweighted results\nsorted(nx.pagerank_numpy(G_book1, weight=None).items(), key=lambda x:x[1], reverse=True)[0:10]\n\nsorted(nx.pagerank_numpy(G_book1, weight='weight').items(), key=lambda x:x[1], reverse=True)[0:10]",
"Is there a correlation between these techniques?\nExercise\nFind the correlation between these four techniques.\n\npagerank\nbetweenness_centrality\nweighted_degree\ndegree centrality",
"cor = pd.DataFrame.from_records([______, _______, _______, ______])\n\ncor.T\n\ncor.T.______()",
"Evolution of importance of characters over the books\nAccording to degree centrality the most important character in the first book is Eddard Stark but he is not even in the top 10 of the fifth book. The importance changes over the course of five books, because you know stuff happens ;)\nLet's look at the evolution of degree centrality of a couple of characters like Eddard Stark, Jon Snow, Tyrion which showed up in the top 10 of degree centrality in first book.\nWe create a dataframe with character columns and index as books where every entry is the degree centrality of the character in that particular book and plot the evolution of degree centrality Eddard Stark, Jon Snow and Tyrion.\nWe can see that the importance of Eddard Stark in the network dies off and with Jon Snow there is a drop in the fourth book but a sudden rise in the fifth book",
"evol = [nx.degree_centrality(book) for book in books]\nevol_df = pd.DataFrame.from_records(evol).fillna(0)\nevol_df[['Eddard-Stark', 'Tyrion-Lannister', 'Jon-Snow']].plot()\n\nset_of_char = set()\nfor i in range(5):\n set_of_char |= set(list(evol_df.T[i].sort_values(ascending=False)[0:5].index))\nset_of_char",
"Exercise\nPlot the evolution of weighted degree centrality of the above mentioned characters over the 5 books, and repeat the same exercise for betweenness centrality.",
"evol_df[__________].plot(figsize=(29,15))\n\nevol = [____________ for graph in books]\nevol_df = pd.DataFrame.from_records(evol).fillna(0)\n\nset_of_char = set()\nfor i in range(5):\n set_of_char |= set(list(evol_df.T[i].sort_values(ascending=False)[0:5].index))\n\n\nevol_df[___________].plot(figsize=(19,10))",
"So what's up with Stannis Baratheon?",
"nx.draw(nx.barbell_graph(5, 1), with_labels=True)\n\nsorted(nx.degree_centrality(G_book5).items(), key=lambda x:x[1], reverse=True)[:5]\n\nsorted(nx.betweenness_centrality(G_book5).items(), key=lambda x:x[1], reverse=True)[:5]",
"Community detection in Networks\nA network is said to have community structure if the nodes of the network can be easily grouped into (potentially overlapping) sets of nodes such that each set of nodes is densely connected internally.\nWe will use louvain community detection algorithm to find the modules in our graph.",
"partition = community.best_partition(G_book1)\nsize = float(len(set(partition.values())))\npos = nx.spring_layout(G_book1)\ncount = 0.\nfor com in set(partition.values()) :\n count = count + 1.\n list_nodes = [nodes for nodes in partition.keys()\n if partition[nodes] == com]\n nx.draw_networkx_nodes(G_book1, pos, list_nodes, node_size = 20,\n node_color = str(count / size))\n\n\nnx.draw_networkx_edges(G_book1, pos, alpha=0.5)\nplt.show()\n\nd = {}\nfor character, par in partition.items():\n if par in d:\n d[par].append(character)\n else:\n d[par] = [character]\nd\n\nnx.draw(nx.subgraph(G_book1, d[3]))\n\nnx.draw(nx.subgraph(G_book1, d[1]))\n\nnx.density(G_book1)\n\nnx.density(nx.subgraph(G_book1, d[4]))\n\nnx.density(nx.subgraph(G_book1, d[4]))/nx.density(G_book1)",
"Exercise\nFind the most important node in the partitions according to degree centrality of the nodes.",
"max_d = {}\ndeg_book1 = nx.degree_centrality(G_book1)\n\nfor ______ in d:\n temp = 0\n for _______ in d[group]:\n if deg_book1[_______] > temp:\n max_d[______] = _______\n temp = deg_book1[_______]\n\nmax_d",
"A bit about power law in networks",
"G_random = nx.erdos_renyi_graph(100, 0.1)\n\nnx.draw(G_random)\n\nG_ba = nx.barabasi_albert_graph(100, 2)\n\nnx.draw(G_ba)\n\n# Plot a histogram of degree centrality\nplt.hist(list(nx.degree_centrality(G_random).values()))\nplt.show()\n\nplt.hist(list(nx.degree_centrality(G_ba).values()))\nplt.show()\n\nG_random = nx.erdos_renyi_graph(2000, 0.2)\nG_ba = nx.barabasi_albert_graph(2000, 20)\n\nd = {}\nfor i, j in dict(nx.degree(G_random)).items():\n if j in d:\n d[j] += 1\n else:\n d[j] = 1\nx = np.log2(list((d.keys())))\ny = np.log2(list(d.values()))\nplt.scatter(x, y, alpha=0.9)\nplt.show()\n\nd = {}\nfor i, j in dict(nx.degree(G_ba)).items():\n if j in d:\n d[j] += 1\n else:\n d[j] = 1\nx = np.log2(list((d.keys())))\ny = np.log2(list(d.values()))\nplt.scatter(x, y, alpha=0.9)\nplt.show()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
dsacademybr/PythonFundamentos
|
Cap03/Notebooks/DSA-Python-Cap03-01-If-Elif-Else.ipynb
|
gpl-3.0
|
[
"<font color='blue'>Data Science Academy - Python Fundamentos - Capítulo 3</font>\nDownload: http://github.com/dsacademybr",
"# Versão da Linguagem Python\nfrom platform import python_version\nprint('Versão da Linguagem Python Usada Neste Jupyter Notebook:', python_version())",
"Condicional If",
"# Condicional If\nif 5 > 2:\n print(\"Python funciona!\")\n\n# Statement If...Else\nif 5 < 2:\n print(\"Python funciona!\")\nelse:\n print(\"Algo está errado!\")\n\n6 > 3\n\n3 > 7\n\n4 < 8\n\n4 >= 4\n\nif 5 == 5:\n print(\"Testando Python!\")\n\nif True:\n print('Parece que Python funciona!')\n\n# Atenção com a sintaxe\nif 4 > 3\n print(\"Tudo funciona!\")\n\n# Atenção com a sintaxe\nif 4 > 3:\nprint(\"Tudo funciona!\")",
"Condicionais Aninhados",
"idade = 18\nif idade > 17:\n print(\"Você pode dirigir!\")\n\nNome = \"Bob\"\nif idade > 13:\n if Nome == \"Bob\":\n print(\"Ok Bob, você está autorizado a entrar!\")\n else:\n print(\"Desculpe, mas você não pode entrar!\")\n\nidade = 13\nNome = \"Bob\"\nif idade >= 13 and Nome == \"Bob\":\n print(\"Ok Bob, você está autorizado a entrar!\")\n\nidade = 12\nNome = \"Bob\"\nif (idade >= 13) or (Nome == \"Bob\"):\n print(\"Ok Bob, você está autorizado a entrar!\")",
"Elif",
"dia = \"Terça\"\nif dia == \"Segunda\":\n print(\"Hoje fará sol!\")\nelse:\n print(\"Hoje vai chover!\")\n\nif dia == \"Segunda\":\n print(\"Hoje fará sol!\")\nelif dia == \"Terça\":\n print(\"Hoje vai chover!\")\nelse:\n print(\"Sem previsão do tempo para o dia selecionado\")",
"Operadores Lógicos",
"idade = 18\nnome = \"Bob\"\nif idade > 17:\n print(\"Você pode dirigir!\")\n\nidade = 18\nif idade > 17 and nome == \"Bob\":\n print(\"Autorizado!\")\n\n# Usando mais de uma condição na cláusula if \n\ndisciplina = input('Digite o nome da disciplina: ')\nnota_final = input('Digite a nota final (entre 0 e 100): ')\n\nif disciplina == 'Geografia' and nota_final >= '70':\n print('Você foi aprovado!')\nelse:\n print('Lamento, acho que você precisa estudar mais!')\n\n# Usando mais de uma condição na cláusula if e introduzindo Placeholders\n\ndisciplina = input('Digite o nome da disciplina: ')\nnota_final = input('Digite a nota final (entre 0 e 100): ')\nsemestre = input('Digite o semestre (1 a 4): ')\n\nif disciplina == 'Geografia' and nota_final >= '50' and int(semestre) != 1:\n print('Você foi aprovado em %s com média final %r!' %(disciplina, nota_final))\nelse:\n print('Lamento, acho que você precisa estudar mais!')",
"--> Fique atento aos espaços entre a margem e cada um dos seus comandos. Falaremos mais sobre indentação ao longo do curso. A indentação faz parte da sintaxe da linguagem Python.\nFim\nObrigado\nVisite o Blog da Data Science Academy - <a href=\"http://blog.dsacademy.com.br\">Blog DSA</a>"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
DistrictDataLabs/yellowbrick
|
examples/rebeccabilbro/freqtext.ipynb
|
apache-2.0
|
[
"%matplotlib inline",
"Yellowbrick Text Examples\nThis notebook is a sample of the text visualizations that yellowbrick provides",
"import os\nimport sys \n\n# Modify the path \nsys.path.append(\"..\")\n\nimport yellowbrick as yb \nimport matplotlib.pyplot as plt ",
"Load Text Corpus for Example Code\nYellowbrick has provided a text corpus wrangled from the Baleen RSS Corpus to present the following examples. If you haven't downloaded the data, you can do so by running:\n$ python download.py\nIn the same directory as the text notebook. Note that this will create a directory called data that contains subdirectories with the provided datasets.",
"from download import download_all \nfrom sklearn.datasets.base import Bunch\n\n## The path to the test data sets\nFIXTURES = os.path.join(os.getcwd(), \"data\")\n\n## Dataset loading mechanisms\ndatasets = {\n \"hobbies\": os.path.join(FIXTURES, \"hobbies\")\n}\n\n\ndef load_data(name, download=True):\n \"\"\"\n Loads and wrangles the passed in text corpus by name.\n If download is specified, this method will download any missing files. \n \"\"\"\n \n # Get the path from the datasets \n path = datasets[name]\n \n # Check if the data exists, otherwise download or raise \n if not os.path.exists(path):\n if download:\n download_all() \n else:\n raise ValueError((\n \"'{}' dataset has not been downloaded, \"\n \"use the download.py module to fetch datasets\"\n ).format(name))\n \n # Read the directories in the directory as the categories. \n categories = [\n cat for cat in os.listdir(path) \n if os.path.isdir(os.path.join(path, cat))\n ]\n \n \n files = [] # holds the file names relative to the root \n data = [] # holds the text read from the file \n target = [] # holds the string of the category \n \n # Load the data from the files in the corpus \n for cat in categories:\n for name in os.listdir(os.path.join(path, cat)):\n files.append(os.path.join(path, cat, name))\n target.append(cat)\n \n with open(os.path.join(path, cat, name), 'r') as f:\n data.append(f.read())\n \n \n # Return the data bunch for use similar to the newsgroups example\n return Bunch(\n categories=categories,\n files=files,\n data=data,\n target=target,\n )\n\ncorpus = load_data('hobbies') ",
"Frequency Distribution Visualization\nA method for visualizing the frequency of tokens within and across corpora is frequency distribution. A frequency distribution tells us the frequency of each vocabulary item in the text. In general, it could count any kind of observable event. It is a distribution because it tells us how the total number of word tokens in the text are distributed across the vocabulary items.",
"from sklearn.feature_extraction.text import CountVectorizer\nfrom yellowbrick.text.freqdist import FreqDistVisualizer ",
"Note that the FreqDistVisualizer does not perform any normalization or vectorization, and it expects text that has already be count vectorized.\nWe first instantiate a FreqDistVisualizer object, and then call fit() on that object with the count vectorized documents and the features (i.e. the words from the corpus), which computes the frequency distribution. The visualizer then plots a bar chart of the top 50 most frequent terms in the corpus, with the terms listed along the x-axis and frequency counts depicted at y-axis values. As with other Yellowbrick visualizers, when the user invokes show(), the finalized visualization is shown.",
"vectorizer = CountVectorizer()\ndocs = vectorizer.fit_transform(corpus.data)\nfeatures = vectorizer.get_feature_names()\n\nvisualizer = FreqDistVisualizer()\nvisualizer.fit(docs, features)\nvisualizer.show()",
"Visualizing Stopwords Removal\nFor example, it is interesting to compare the results of the FreqDistVisualizer before and after stopwords have been removed from the corpus:",
"vectorizer = CountVectorizer(stopwords='english')\ndocs = vectorizer.fit_transform(corpus.data)\nfeatures = vectorizer.get_feature_names()\n\nvisualizer = FreqDistVisualizer()\nvisualizer.fit(docs, features)\nvisualizer.show()",
"Visualizing tokens across corpora\nIt is also interesting to explore the differences in tokens across a corpus. The hobbies corpus that comes with Yellowbrick has already been categorized (try corpus['categories']), so let's visually compare the differences in the frequency distributions for two of the categories: \"cooking\" and \"gaming\"",
"hobby_types = {}\n\nfor category in corpus['categories']:\n texts = []\n for idx in range(len(corpus['data'])):\n if corpus['target'][idx] == category:\n texts.append(corpus['data'][idx])\n hobby_types[category] = texts\n\nvectorizer = CountVectorizer(stop_words='english')\ndocs = vectorizer.fit_transform(text for text in hobby_types['cooking'])\nfeatures = vectorizer.get_feature_names()\n\nvisualizer = FreqDistVisualizer()\nvisualizer.fit(docs, features)\nvisualizer.show()\n\nvectorizer = CountVectorizer(stop_words='english')\ndocs = vectorizer.fit_transform(text for text in hobby_types['gaming'])\nfeatures = vectorizer.get_feature_names()\n\nvisualizer = FreqDistVisualizer()\nvisualizer.fit(docs, features)\nvisualizer.show()"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
Wei1234c/Elastic_Network_of_Things_with_MQTT_and_MicroPython
|
notebooks/demo/Neural Network demo.ipynb
|
gpl-3.0
|
[
"Neural Network demo (not tested yet)\nStart a Mosquitto container first. For example:\n- Use codes\\_demo\\1_start_broker.sh to start a Mosquitto container on Raspberry Pi.\n- Config files are in mqtt_config\\mqtt.\n- set allow_anonymous true in mqtt_config\\mqtt\\config\\mosquitto.conf to allow anonymous client.\nGetting Started\nWhat this notebook does:\n- Using:\n - a client on PC\n - 6 ESP8266 modules (NodeMCU and D1 mini) as remote nodes\n- List connected nodes\n- Rename remote nodes\n- Setup neural network configuration (connections, weights, thresholds)\n- Fire up neurons and get logs.",
"import os\nimport sys\nimport time\n \nsys.path.append(os.path.abspath(os.path.join(os.path.pardir, '..\\\\codes', 'client')))\nsys.path.append(os.path.abspath(os.path.join(os.path.pardir, '..\\\\codes', 'node')))\nsys.path.append(os.path.abspath(os.path.join(os.path.pardir, '..\\\\codes', 'shared')))\nsys.path.append(os.path.abspath(os.path.join(os.path.pardir, '..\\\\codes', 'micropython')))\n \nimport client\nfrom collections import OrderedDict\n\nimport pandas as pd\nfrom pandas import DataFrame\nfrom time import sleep\nREFRACTORY_PERIOD = 0.1 # 0.1 seconds\n\n# Each ESP8266 modules represents a neuron. We have 6 of them.\nneurons = ['neuron_x1', 'neuron_x2', 'neuron_h1', 'neuron_h2', 'neuron_h3', 'neuron_y'] ",
"Start client",
"the_client = client.Client()\nthe_client.start()\n\nwhile not the_client.status['Is connected']: \n time.sleep(1)\n print('Node not ready yet.')",
"Utility functions",
"# Ask Hub for a list of connected nodes\ndef list_nodes():\n the_client.node.worker.roll_call()\n time.sleep(2)\n remote_nodes = sorted(the_client.node.worker.contacts.keys())\n\n print('\\n[____________ Connected nodes ____________]\\n')\n print('\\nConnected nodes:\\n{}\\n'.format(remote_nodes))\n \n return remote_nodes\n\ndef reset_node(node):\n message = {'message_type': 'exec',\n 'to_exec': 'import machine;machine.reset()'}\n the_client.request(node, message) \n\ndef rename_node(node, new_name):\n \n with open('temp.py', 'w') as f:\n f.write('WORKER_NAME = ' + '\\\"' + new_name + '\\\"\\n')\n \n with open('temp.py') as f:\n script = f.read()\n message = {'message_type': 'file',\n 'file': script,\n 'kwargs': {'filename': 'worker_config.py'}}\n the_client.request(node, message)\n \n os.remove('temp.py')\n \n time.sleep(1)\n reset_node(node)\n \n\ndef rename_nodes(nodes, neurons): \n i = 0 \n for node in nodes:\n if node != the_client.node.worker.name: # exclude client self\n rename_node(node, neurons[i])\n i += 1\n\ndef fire(node):\n message = {'message_type': 'function',\n 'function': 'fire'}\n the_client.request(node, message) \n\ndef addConnection(node, neuron):\n message = {'message_type': 'function',\n 'function': 'addConnection',\n 'kwargs': {'neuron_id': neuron}}\n the_client.request(node, message) \n \ndef set_connections(node, connections):\n message = {'message_type': 'function',\n 'function': 'setConnections',\n 'kwargs': {'connections': connections}}\n the_client.request(node, message) \n \ndef get_connections(node):\n message = {'message_type': 'function',\n 'function': 'getConnections', \n 'need_result': True}\n _, result = the_client.request(node, message) \n return result.get() \n\ndef setWeight(node, neuron, weight):\n message = {'message_type': 'function',\n 'function': 'setWeight',\n 'kwargs': {'neuron_id': neuron,\n 'weight': weight,}}\n the_client.request(node, message) \n\ndef setThreshold(node, threshold):\n message = {'message_type': 'function',\n 'function': 'setThreshold',\n 'kwargs': {'threshold': threshold}}\n the_client.request(node, message) \n \ndef getConfig(node):\n message = {'message_type': 'function',\n 'function': 'getConfig', \n 'need_result': True}\n _, result = the_client.request(node, message) \n return result.get()\n\ndef getLog(node):\n message = {'message_type': 'function',\n 'function': 'getLog', \n 'need_result': True}\n _, result = the_client.request(node, message) \n return result.get()\n\ndef emptyLog(node):\n message = {'message_type': 'function',\n 'function': 'emptyLog'}\n the_client.request(node, message)\n \ndef emptyLogs():\n for neuron in neurons:\n emptyLog(neuron) \n \ndef mergeLogs():\n logs = []\n \n for neuron in neurons:\n if neuron != the_client.node.worker.name: # exclude client self\n currentLog = getLog(neuron)\n if currentLog:\n logs += currentLog \n \n df = DataFrame(list(logs), columns = ['time', 'neuron', 'message']) \n df.set_index('time', inplace = True)\n df.sort_index(inplace = True)\n \n return df \n\ndef printConfig(neuron):\n print('{0:_^78}\\n {1}\\n'.format(neuron + \" config:\", getConfig(neuron)))\n\n# fire('NodeMCU_1dsc000')",
"List connected nodes",
"remote_nodes = list_nodes()",
"Rename nodes",
"rename_nodes(remote_nodes, neurons) \ntime.sleep(2)\nremote_nodes = list_nodes()\n\nremote_nodes = list_nodes()",
"Setup network configuration\nClear log files",
"emptyLogs()",
"Setup connections",
"addConnection('neuron_x1', 'neuron_h1')\naddConnection('neuron_x1', 'neuron_h2')\n\naddConnection('neuron_x2', 'neuron_h2')\naddConnection('neuron_x2', 'neuron_h3')\n\naddConnection('neuron_h1', 'neuron_y')\naddConnection('neuron_h2', 'neuron_y')\naddConnection('neuron_h3', 'neuron_y')",
"Setup weights",
"setWeight('neuron_h1', 'neuron_x1', 1) \nsetWeight('neuron_h2', 'neuron_x1', 1) \n\nsetWeight('neuron_h2', 'neuron_x2', 1) \nsetWeight('neuron_h3', 'neuron_x2', 1) \n\nsetWeight('neuron_y', 'neuron_h1', 1) \nsetWeight('neuron_y', 'neuron_h2', -2) \nsetWeight('neuron_y', 'neuron_h3', 1) ",
"Setup thresholds",
"setThreshold('neuron_x1', 0.9)\nsetThreshold('neuron_x2', 0.9)\n\nsetThreshold('neuron_h1', 0.9)\nsetThreshold('neuron_h2', 1.9)\nsetThreshold('neuron_h3', 0.9)\n\nsetThreshold('neuron_y', 0.9)",
"Simulate sensor input,then observe outputs of neurons",
"### Wait for a while until action potential quiet down.\nemptyLogs()\nsleep(REFRACTORY_PERIOD) \nmergeLogs()\n\n### Simulate sensor input,force neuron_x1 to fire\nemptyLogs()\nsleep(REFRACTORY_PERIOD)\nfire('neuron_x1') \nmergeLogs() \n\n### Simulate sensor input,force neuron_x2 to fire\nemptyLogs()\nsleep(REFRACTORY_PERIOD)\nfire('neuron_x2') \nmergeLogs() \n\n### Simulate sensor input,force neuron_x1 and neuron_x2 to fire\nemptyLogs()\nsleep(REFRACTORY_PERIOD)\nfire('neuron_x1')\nfire('neuron_x2') \nmergeLogs() \n\nfor neuron in reversed(neurons): printConfig(neuron)",
"Stop the demo",
"# Stopping\nthe_client.stop()\nthe_client = None\nprint ('\\n[________________ Demo stopped ________________]\\n')"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
HrantDavtyan/Data_Scraping
|
Week 6/Scrapy_1.ipynb
|
apache-2.0
|
[
"Scrapy: part 1\nScrapy is a powerful web scraping framework for Python. A framework is still a library (\"an API of functions\") yet with more powerful built-in features. It can be described as the combination of all we learnt till now including requests, BeautifulSoup, lxml and RegEx. To install Scrapy, open the command prompt and run the following command:\npip install scrapy\nOnce scrapy is installed one can start experiencing it by just running the following command inside the command prompt (e.g. let's assume you want to scrape the http://quotes.toscrape.com/page/1/ page):\nscrapy shell http://quotes.toscrape.com/page/1/\nNow, you must be able to apply powerful scrapy functions to get the data you want. However, all of this are available inside the command prompt. If you want to experience the same inside a Jupyter notebook, you must try to mimic the command prompt behaviour by adding 5 additional lines as shown below (instead of running the abovementioned command). As this material is provided in a Jupyter notebook, we will also mimic the command prompt behavior, yet you are encouraged to experience it yourself.",
"import requests\nfrom scrapy.http import TextResponse\n\nurl = \"http://quotes.toscrape.com/page/1/\"\n\nr = requests.get(url)\nresponse = TextResponse(r.url, body=r.text, encoding='utf-8')",
"Fine, now we are ready to apply the scrapy functions on our response object. All the code following this line is same for both Jupyter notebook users and those you chose to experience the command prompt approach.\nAs we covered before, there are two main ways to navigate over an HTML file: using CSS selectors and the XPath approach. While BeautifulSoup supported only the former, Scrapy has functions for both: css() for using css selectors and xpath() for the xpath approach.\nCSS selectors\nLet's use CSS selectors to find the title of the page.",
"response.css('title')",
"As you can see it provides more information than needed. That's why there is an extract() function, that will extract only the component we are interested in without the additional information. It can be said that css() and extract() function mimic the findAll() behaviour from BeautifulSoup.",
"response.css('title').extract()",
"Excellent! We now have the correct tag we were looking for with the text inside. If we want to choose only the text content there is no need for using additional function: one just needs to add the following component to the CSS selector ::text as shown below.",
"response.css('title::text').extract()\n\ntype(response.css('title::text').extract())",
"As mentioned before, the extract() function applied on the css selector mimics the findAll() behavior. This is true also about the output we receive: it has the type of list. If one needs to receive the unoce element as an output, the extract_first() function must be used, which will return the very first matched element (similarly to find() from BeautifulSoup).",
"response.css('title::text').extract_first()\n\ntype(response.css('title::text').extract_first())",
"Let's now try to find the heading of the page (which is Quotes to Scrape). Heading is provided inside a <h1> tag as usually.",
"response.css('h1').extract()",
"Again, we can get the heading text by using the ::text guy.",
"response.css('h1::text').extract()",
"The latter did not really help because the heading text was inside an <a> tag, which in its turn was inside the above found <h1> tag.",
"response.css('h1 a').extract()",
"Nice! We found it. As you can see it has the style attribute that differenciates this <a> tag from others (kind of an identifier). We could use it to find this <a> tag even without mentioning that it is inside a <h1> guy. To do this in Scrapy, square brackets should be used.",
"response.css('a[style=\"text-decoration: none\"]').extract()",
"Great! Let's now extract the text first and then go for the link inside this tag (i.e. the value of the href attribute).",
"response.css('a[style=\"text-decoration: none\"]::text').extract()",
"To get the value of href attirubute (and same for any other attirubte) the following approach can be used in Scrapy, which can be considered the alternative to get() function in BeautifulSoup or lxml.",
"response.css('a[style=\"text-decoration: none\"]::attr(href)').extract()",
"Scrapy also supports regular expressions that can directly be applied on matched response. For example, let's select only the \"to Scrape\" part from the heading using regular expressions. We just need to substitute the extract() function with a re() function that will take the expression as an argument.",
"# expression explanation: find Quotes, a whitespace, anything else\n# return only anything else component\nresponse.css('h1 a::text').re('Quotes\\s(.*)')",
"Similarly, we could use RegEx to find and match and return each for of the heading separately as a list element:",
"response.css('h1 a::text').re('(\\S+)\\s(\\S+)\\s(\\S+)')",
"Perfect, we are done now with css() function, let's now implement the same in xpath().\nXPath approach",
"response.xpath('//title').extract()",
"To get the text only, the following should be added to the Xpath argument: /text()",
"response.xpath('//title/text()').extract()",
"Similarly, we can find the <a> tag inside the <h1> and extract first text then the link.",
"response.xpath('//h1/a').extract()\n\nresponse.xpath('//h1/a/text()').extract()",
"xpath() function operates in the same way as in the lxml package, which means /@href should be added to the path to select the value of the href attribute (i.e. the link).",
"response.xpath('//h1/a/@href').extract()",
"This is all for Part 1. We just used Scrapy as a library and experienced part its power: Scrapy is kind of the combination of whatever we learnt till now. Yet, this is not the only reason Scrapy is powerful and demanded. The rest will be covered in following parts.\nP.S. If you were using command prompt to run this code, then run exit() comand to exit Scrapy. If you want to save your commands before exiting into a Python file, then the following command will be of use:\n%save my_commands 1-56\nwhere my_commands is the name of the file to be created (change it based on your taste) and 1-56 tells Python to save the code starting from the line 1 (very begining) and ending with line 56 (put the line that you want here, last one if you want to save whole code)."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
spencerchan/ctabus
|
notebooks/Visualizing Bus Bunching.ipynb
|
gpl-3.0
|
[
"%matplotlib inline\nfrom IPython.display import HTML\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\nimport sys\nsys.path.insert(0, '../src/processing/')\nimport tools",
"A Common City Scene\nIf you've ever ridden the bus, you've probably had the following experience. You're standing at the bus stop waiting for the bus to come. You've been waiting over ten minutes. You're running late to an important meeting, a date, or maybe a house warming party you didn't really want to go to. Some people at the bus stop are commenting out loud about how this bus is always late. Others are checking their transit apps or Google maps and complaining how the apps always lie: \"it said the bus was coming in five minutes, five minutes ago!\" One guy keeps stepping into the street every 30 seconds and staring down oncoming traffic to check for the bus. Suddenly, you see the bus appear in the distance. As the bus gets closer you realize, there's not just one bus, but two buses! (If it's a strange day, maybe there is a caravan of three buses arriving together or in close succession). If you're lucky, you witnessed this from the bus stop across the street. This phenomenon is appropriately called bus bunching.\nWhat is Bus Bunching?\nBus bunching is when two or more buses on the same route arrive at a bus stop together or in close succession. Bus bunching occurs when the headway, or the temporal distance, between two buses is sufficiently small. What counts as \"sufficiently small\" depends on the context. There's no standard threshold headway used to define bus bunching. On a high frequency route, two buses may be considered bunched if the headway between them is 2 minutes or less. On a lower frequency route, however, two buses may be considered bunched at headways over 5 minutes.\nWhat Causes Bus Bunching?\nBus bunching has a number of possible causes: \n* The sheer volume of passengers entering the bus at a given stop or stops. From this arises a possible negative feedback loop: as passengers entering and exiting further along the route the have a harder time navigating the crowd, service continues to slow down.\n* The amount of traffic on the road\n* The number of available lanes decreases at some point along the route, whether from construction or the road's design\n* A bus operator driving too fast or too slow\nDetermining when and where bus bunching occurs along a route is critical for transit planners working to improve service. \nThe Data\nIn this notebook, I visualize bus bunching for eastbound trips on CTA Route 73 Armitage. To determine if buses are bunched, I analyze a dataset of observed eastbound headways at each stop along the route for the month of March 2019. I (arbitrarily) count a pair of buses as bunched if their headway is less than 2 minutes.\nThe Goal\nMy primary goal is to learn where bus bunching tends to occur along Route 73 Armitage. I'm also interested in learning if bus bunching occurs more often at certain times of the day. My plan is to create a series of heatmap-like visualizations indexed by time that highlight the bus stops at which the most bunching incidents were observed during the data collection period. The idea is for the heatmap to be visually similar to Google Traffic that communicates traffic conditions as a colored overlay on city streets.\nLoad and process the route's geospatial data\nTo create something like a heatmap, I need to load the geospatial data for the bus route (the route pattern), subdivide it into smaller segments, and color each according to the number of bunching incidents observed on that segment.\nFirst, I load the pattern data for Route 73 Armitage. 
Each row represents either a bus stop (typ == 'S') or an intermediate waypoint (typ == 'W'). I load the patterns with the waypoints, since they will allow me to create a more accurate image of the route. For simplicity, I will only work with one pattern: pid 2170 (eastbound trips between Grand & Latrobe and Clark & North).",
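"As a small illustration of the bunching definition (the column names here are made up and are not from the actual dataset), headways and bunching flags can be derived from a stop's observed arrival times like this:",
"# Hypothetical example: flag arrivals whose headway to the previous bus is under 2 minutes\narrivals = pd.DataFrame({'arrival_time': pd.to_datetime(\n    ['2019-03-01 08:00:00', '2019-03-01 08:01:30', '2019-03-01 08:12:00'])})\narrivals['headway_min'] = arrivals.arrival_time.diff().dt.total_seconds() / 60\narrivals['bunched'] = arrivals.headway_min < 2\narrivals",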
"patterns = tools.load_patterns(73, waypoints=True)\npatterns = patterns[patterns.pid == 2170]\npatterns.head()",
"We can get an idea of the shape of the route pattern by creating a LineString object from the latitude and longitude of each of its stops and waypoints using the Shapely library.",
"from shapely.geometry import LineString\n\nLineString(zip(patterns.lon, patterns.lat))",
"Transform the data\nA logical way to divide the pattern is into segments with end points at adjacent bus stops labeled with the name of the stop visited earlier in the sequence. Any waypoints falling between the two bus stops should be included in the segment, so as to preserve the route's shape. It is possible that some segments won't be straight line, and that is okay. Finally, I can create a LineString from the coordinates of the segment's stops and waypoints, so that they can be mapped.\nTo accomplish the above for each pair of adjacent stops A and B, I will:\n* Associate all waypoints between stop A and B with stop A.\n* Append all lon-lat coordinates associated with stop A to a list in sequence order.\n* Append the coordinates of stop B to the end of the previous list.\n* Create a LineString from the coordinate list.\nOnce the above steps are accomplished, I can load the geometry into a GeoDataFrame so that it can be plotted.\nConveniently, waypoints have null-valued stpnm and stpid. I make sure the DataFrame is sorted in sequence order, and then apply the forward fill method .ffill() to populate the null values in those fields with the name and id of the preceding stop. I drop any remaining rows with null values, i.e. any points along the route before the first stop.",
"patterns.sort_values('seq')\npatterns.ffill(inplace=True)\npatterns.dropna(inplace=True)\npatterns.head()",
"Next, I populate the column lon_lat with one-item lists containing a tuple of each row's latitude and longitude. Grouping the DataFrame by stpid and applying sum to the lon_lat column concatenates the one-item coordinate lists together.",
"patterns['lon_lat'] = [[xy] for xy in zip(patterns.lon, patterns.lat)]\ngrouped = patterns.groupby(['stpid']).lon_lat.agg('sum').to_frame().reset_index()\ngrouped.head()",
"I join this result back to the patterns DataFrame.",
"merged = patterns.merge(grouped, on='stpid')\nmerged.head()",
"I drop the waypoints from the DataFrame, since they are no longer needed. I also rename some of the columns.",
"merged = merged[merged.typ != \"W\"]\nmerged.rename({'lon_lat_y': 'coordinates', 'lon_lat_x': 'lon_lat'}, inplace=True, axis='columns')\nmerged.head()",
"The coordinate lists include all of the points up to but not including the final end point. To include the final endpoint, I concatenate the coordinates of each bus stop (lon_lat) to the coordinate list (coordinates) in the row above it with the help of the .shift() method and the + operator. Passing -1 to .shift() shifts each row backward by 1, so that the 0th row is removed, the 1st row becomes the 0th row, and so on, and the last row becomes null. When the + operator is applied to two Series of lists, it concatenates the lists in each Series. Notice that a non-null value + NaN = NaN.",
"merged.coordinates = (merged.coordinates + merged.lon_lat.shift(-1))\nmerged.tail()",
"Load the processed geospatial data into a GeoDataFrame\nOnce I create a LineString for each list of coordinates, I can load the DataFrame and the geometry into a GeoDataFrame.",
"import geopandas as gpd\n\nmerged.dropna(inplace=True)\ngeometry = [LineString(xys) for xys in merged.coordinates]\ngdf = gpd.GeoDataFrame(merged, geometry=geometry)\ngdf.drop(['lon', 'lat', 'lon_lat', 'coordinates'], inplace=True, axis=1)\ngdf.head()\n\ngdf.plot().set_axis_off()",
"Create the heatmap\nTo color the map, I need to associate each segment with the corresponding number of observed bunching incidents.\nFor simplicity, I only load wait time data from March 2019.",
"travels_waits = tools.load_travels_waits(73, \"Eastbound\", \"201903\")\ntravels_waits.head()\n\npatterns[patterns.stpid.isin([\"15417\", \"4040\"])]",
"I count the number of wait times under 2 minutes at each stop for eastbound buses traveling the full length of the route to Clark & North (as opposed to Armitage & Pulaski, which is only a small portionof the route). I then perform an attribute join of the geospatial data with the counts.",
"bunching = travels_waits[travels_waits[\"wait|15417\"] < 2].groupby('origin').count().tatripid.to_frame().reset_index().rename({'tatripid': 'counts'}, axis='columns')\n\ngdf.stpid = gdf.stpid.astype(int)\nbunching_gdf = gdf.merge(bunching, left_on=\"stpid\", right_on=\"origin\")\nbunching_gdf.head()\n\nbunching_gdf.plot(column=\"counts\", legend=True, cmap='YlGnBu', linewidth=5).set_axis_off()",
"The number of bunching incidents increases as the buses travel further east. This result is intuitive: there are more opportunities for the eastbound buses to get off schedule the further they travel along the route. Once a group of buses become bunched, it may be difficult for them to become unbunched, unless one of the drivers take deliberate action to do so, e.g. stopping and waiting for several minutes.\nThe heatmap gives a general idea of where bunching tends to occur, but it would be helpful if it had some more context. Luckily, the contextily package makes it easy to add background maps to plots created with geopandas. Read a short tutorial here.",
"import contextily as ctx\n\ndef add_basemap(ax, zoom, url='http://tile.stamen.com/toner/tileZ/tileX/tileY.png'):\n xmin, xmax, ymin, ymax = ax.axis()\n basemap, extent = ctx.bounds2img(xmin, ymin, xmax, ymax, zoom=zoom, url=url)\n ax.imshow(basemap, extent=extent, interpolation='bilinear')\n ax.axis((xmin, xmax, ymin, ymax))\n return basemap\n\nf, ax = plt.subplots(1, figsize=(15, 8))\nax.axis([-9770000, -9750000, 5145000, 5155000])\n\nbunching_gdf.crs = {'init': 'epsg:4326'}\nbunching_gdf = bunching_gdf.to_crs(epsg=3857)\nbunching_gdf.plot(column=\"counts\", ax=ax, legend=True, cmap='Reds', linewidth=5)\n\nax.set_axis_off()\nadd_basemap(ax, zoom=12)\nplt.show()",
"Create small multiple maps indexed by time of day\nTo see how bunching varies throughout the day, I count the number of bunching incidents over different time periods and create a map for each. The CTA defines four weekday service intervals that I use:\n* AM Peak: 6AM-9AM\n* Midday: 9AM-3PM\n* PM Peak: 3PM-6PM\n* Evening: 7PM-10PM",
"cta_time_periods = [6, 9, 15, 19, 22]\ncta_time_period_labels = [\"AM Peak\", \"Midday\", \"PM Peak\", \"Evening\"]\ntravels_waits[\"bin\"] = pd.cut(travels_waits.decimal_time, cta_time_periods, labels=cta_time_period_labels, right=False)\n\nbinned_bunching = travels_waits[travels_waits[\"wait|15417\"] < 2].groupby(['origin', 'bin']).count().tatripid.to_frame().reset_index().rename({'tatripid': 'counts'}, axis='columns')\nbinned_bunching.counts = binned_bunching.counts.fillna(0)\nbinned_gdf = gdf.merge(binned_bunching, left_on=\"stpid\", right_on=\"origin\")\n\nbinned_gdf.crs = {'init': 'epsg:4326'}\nbinned_gdf = binned_gdf.to_crs(epsg=3857)\n\nfig, ax = plt.subplots(2, 2, figsize=(16, 8))\nfig.suptitle(\"Eastbound Armitage Bus Bunching\")\n\nbin_names = [[\"AM Peak\", \"Midday\"], [\"PM Peak\", \"Evening\"]]\nvmin = 0\nvmax = binned_gdf.counts.max()\nfor i, r in enumerate(bin_names):\n for j, c in enumerate(r):\n _ax = ax[i][j]\n binned_gdf[binned_gdf.bin == c].plot(column=\"counts\", ax=_ax, cmap='Reds', linewidth=5, vmin=vmin, vmax=vmax)\n _ax.axis([-9770000, -9750000, 5145000, 5155000])\n _ax.title.set_text(c)\n _ax.set_axis_off()\n add_basemap(_ax, zoom=12)\n\ncax = fig.add_axes([0.9, 0.1, 0.03, 0.8])\nsm = plt.cm.ScalarMappable(cmap='Reds', norm=plt.Normalize(vmin=vmin, vmax=vmax))\nsm._A = []\nfig.colorbar(sm, cax=cax)\n\nplt.show()",
"Most bunching incidents for eastbound trips were observed during morning rush hour. Again, this result is intuitive: during morning rush hour, there is more traffic congestion in eastbound lanes as commuter travel downtown, so it is more likely for the buses to get off schedule.\nThese observations are for the entire month of March 2019. Notice that during this period around 30 bunching incidents occured at bus stops on the eastern edge of the route during morning rush hour. In other words, an average of one busing incident was observed each day at some stops."
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
hongguangguo/shogun
|
doc/ipython-notebooks/intro/Introduction.ipynb
|
gpl-3.0
|
[
"Machine Learning with Shogun\nBy Saurabh Mahindre - <a href=\"https://github.com/Saurabh7\">github.com/Saurabh7</a> as a part of <a href=\"http://www.google-melange.com/gsoc/project/details/google/gsoc2014/saurabh7/5750085036015616\">Google Summer of Code 2014 project</a> mentored by - Heiko Strathmann - <a href=\"https://github.com/karlnapf\">github.com/karlnapf</a> - <a href=\"http://herrstrathmann.de/\">herrstrathmann.de</a>\nIn this notebook we will see how machine learning problems are generally represented and solved in Shogun. As a primer to Shogun's many capabilities, we will see how various types of data and its attributes are handled and also how prediction is done. \n\nIntroduction\nUsing datasets\nFeature representations\nLabels\nPreprocessing data\nSupervised Learning with Shogun's CMachine interface\nEvaluating performance and Model selection\nExample: Regression\n\nIntroduction\nMachine learning concerns the construction and study of systems that can learn from data via exploiting certain types of structure within these. The uncovered patterns are then used to predict future data, or to perform other kinds of decision making. Two main classes (among others) of Machine Learning algorithms are: predictive or supervised learning and descriptive or Unsupervised learning. Shogun provides functionality to address those (and more) problem classes.",
"%pylab inline\n%matplotlib inline\n#To import all Shogun classes\nfrom modshogun import *",
"In a general problem setting for the supervised learning approach, the goal is to learn a mapping from inputs $x_i\\in\\mathcal{X} $ to outputs $y_i \\in \\mathcal{Y}$, given a labeled set of input-output pairs $ \\mathcal{D} = {(x_i,y_i)}^{\\text N}{i=1} $$\\subseteq \\mathcal{X} \\times \\mathcal{Y}$. Here $ \\mathcal{D}$ is called the training set, and $\\text N$ is the number of training examples. In the simplest setting, each training input $x_i$ is a $\\mathcal{D}$ -dimensional vector of numbers, representing, say, the height and weight of a person. These are called $\\textbf {features}$, attributes or covariates. In general, however, $x_i$ could be a complex structured object, such as an image.<ul><li>When the response variable $y_i$ is categorical and discrete, $y_i \\in$ {1,...,C} (say male or female) it is a classification problem.</li><li>When it is continuous (say the prices of houses) it is a regression problem.</li></ul>\nFor the unsupervised learning\napproach we are only given inputs, $\\mathcal{D} = {(x_i)}^{\\text N}{i=1}$ , and the goal is to find “interesting\npatterns” in the data. \nUsing datasets\nLet us consider an example, we have a dataset about various attributes of individuals and we know whether or not they are diabetic. The data reveals certain configurations of attributes that correspond to diabetic patients and others that correspond to non-diabetic patients. When given a set of attributes for a new patient, the goal is to predict whether the patient is diabetic or not. This type of learning problem falls under Supervised learning, in particular, classification.\nShogun provides the capability to load datasets of different formats using CFile.</br> A real world dataset: Pima Indians Diabetes data set is used now. We load the LibSVM format file using Shogun's LibSVMFile class. The LibSVM format is: $$\\space \\text {label}\\space \\text{attribute1:value1 attribute2:value2 }...$$$$\\space.$$$$\\space .$$ LibSVM uses the so called \"sparse\" format where zero values do not need to be stored.",
"#Load the file\ndata_file=LibSVMFile('../../../data/uci/diabetes/diabetes_scale.svm')",
"This results in a LibSVMFile object which we will later use to access the data.\nFeature representations\nTo get off the mark, let us see how Shogun handles the attributes of the data using CFeatures class. Shogun supports wide range of feature representations. We believe it is a good idea to have different forms of data, rather than converting them all into matrices. Among these are: $\\hspace {20mm}$<ul><li>String features: Implements a list of strings. Not limited to character strings, but could also be sequences of floating point numbers etc. Have varying dimensions. </li> <li>Dense features: Implements dense feature matrices</li> <li>Sparse features: Implements sparse matrices.</li><li>Streaming features: For algorithms working on data streams (which are too large to fit into memory) </li></ul> \nSpareRealFeatures (sparse features handling 64 bit float type data) are used to get the data from the file. Since LibSVM format files have labels included in the file, load_with_labels method of SpareRealFeatures is used. In this case it is interesting to play with two attributes, Plasma glucose concentration and Body Mass Index (BMI) and try to learn something about their relationship with the disease. We get hold of the feature matrix using get_full_feature_matrix and row vectors 1 and 5 are extracted. These are the attributes we are interested in.",
"f=SparseRealFeatures()\ntrainlab=f.load_with_labels(data_file)\nmat=f.get_full_feature_matrix()\n\n#exatract 2 attributes\nglucose_conc=mat[1]\nBMI=mat[5]\n\n#generate a numpy array\nfeats=array(glucose_conc)\nfeats=vstack((feats, array(BMI)))\nprint feats, feats.shape",
"In numpy, this is a matrix of 2 row-vectors of dimension 768. However, in Shogun, this will be a matrix of 768 column vectors of dimension 2. This is beacuse each data sample is stored in a column-major fashion, meaning each column here corresponds to an individual sample and each row in it to an atribute like BMI, Glucose concentration etc. To convert the extracted matrix into Shogun format, RealFeatures are used which are nothing but the above mentioned Dense features of 64bit Float type. To do this call RealFeatures with the matrix (this should be a 64bit 2D numpy array) as the argument.",
"#convert to shogun format\nfeats_train=RealFeatures(feats)",
"Some of the general methods you might find useful are:\n\nget_feature_matrix(): The feature matrix can be accessed using this.\nget_num_features(): The total number of attributes can be accesed using this.\nget_num_vectors(): To get total number of samples in data.\nget_feature_vector(): To get all the attribute values (A.K.A feature vector) for a particular sample by passing the index of the sample as argument.</li></ul>",
"#Get number of features(attributes of data) and num of vectors(samples)\nfeat_matrix=feats_train.get_feature_matrix()\nnum_f=feats_train.get_num_features()\nnum_s=feats_train.get_num_vectors()\n\nprint('Number of attributes: %s and number of samples: %s' %(num_f, num_s))\nprint('Number of rows of feature matrix: %s and number of columns: %s' %(feat_matrix.shape[0], feat_matrix.shape[1]))\nprint('First column of feature matrix (Data for first individual):')\nprint feats_train.get_feature_vector(0)",
"Assigning labels\nIn supervised learning problems, training data is labelled. Shogun provides various types of labels to do this through Clabels. Some of these are:<ul><li>Binary labels: Binary Labels for binary classification which can have values +1 or -1.</li><li>Multiclass labels: Multiclass Labels for multi-class classification which can have values from 0 to (num. of classes-1).</li><li>Regression labels: Real-valued labels used for regression problems and are returned as output of classifiers.</li><li>Structured labels: Class of the labels used in Structured Output (SO) problems</li></ul></br> In this particular problem, our data can be of two types: diabetic or non-diabetic, so we need binary labels. This makes it a Binary Classification problem, where the data has to be classified in two groups.",
"#convert to shogun format labels\nlabels=BinaryLabels(trainlab)",
"The labels can be accessed using get_labels and the confidence vector using get_values. The total number of labels is available using get_num_labels.",
"n=labels.get_num_labels()\nprint 'Number of labels:', n",
"Preprocessing data\nIt is usually better to preprocess data to a standard form rather than handling it in raw form. The reasons are having a well behaved-scaling, many algorithms assume centered data, and that sometimes one wants to de-noise data (with say PCA). Preprocessors do not change the domain of the input features. It is possible to do various type of preprocessing using methods provided by CPreprocessor class. Some of these are:<ul><li>Norm one: Normalize vector to have norm 1.</li><li>PruneVarSubMean: Substract the mean and remove features that have zero variance. </li><li>Dimension Reduction: Lower the dimensionality of given simple features.<ul><li>PCA: Principal component analysis.</li><li>Kernel PCA: PCA using kernel methods.</li></ul></li></ul> The training data will now be preprocessed using CPruneVarSubMean. This will basically remove data with zero variance and subtract the mean. Passing a True to the constructor makes the class normalise the varaince of the variables. It basically dividies every dimension through its standard-deviation. This is the reason behind removing dimensions with constant values. It is required to initialize the preprocessor by passing the feature object to init before doing anything else. The raw and processed data is now plotted.",
"preproc=PruneVarSubMean(True)\npreproc.init(feats_train)\nfeats_train.add_preprocessor(preproc)\nfeats_train.apply_preprocessor()\n\n# Store preprocessed feature matrix.\npreproc_data=feats_train.get_feature_matrix()\n\n# Plot the raw training data.\nfigure(figsize=(13,6))\npl1=subplot(121)\ngray()\n_=scatter(feats[0, :], feats[1,:], c=labels, s=50)\nvlines(0, -1, 1, linestyle='solid', linewidths=2)\nhlines(0, -1, 1, linestyle='solid', linewidths=2)\ntitle(\"Raw Training Data\")\n_=xlabel('Plasma glucose concentration')\n_=ylabel('Body mass index')\np1 = Rectangle((0, 0), 1, 1, fc=\"w\")\np2 = Rectangle((0, 0), 1, 1, fc=\"k\")\npl1.legend((p1, p2), [\"Non-diabetic\", \"Diabetic\"], loc=2)\n\n#Plot preprocessed data.\npl2=subplot(122)\n_=scatter(preproc_data[0, :], preproc_data[1,:], c=labels, s=50)\nvlines(0, -5, 5, linestyle='solid', linewidths=2)\nhlines(0, -5, 5, linestyle='solid', linewidths=2)\ntitle(\"Training data after preprocessing\")\n_=xlabel('Plasma glucose concentration')\n_=ylabel('Body mass index')\np1 = Rectangle((0, 0), 1, 1, fc=\"w\")\np2 = Rectangle((0, 0), 1, 1, fc=\"k\")\npl2.legend((p1, p2), [\"Non-diabetic\", \"Diabetic\"], loc=2)\ngray()",
"Horizontal and vertical lines passing through zero are included to make the processing of data clear. Note that the now processed data has zero mean.\n<a id='supervised'>Supervised Learning with Shogun's <a href='http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CMachine.html'>CMachine</a> interface</a>\nCMachine is Shogun's interface for general learning machines. Basically one has to train() the machine on some training data to be able to learn from it. Then we apply() it to test data to get predictions. Some of these are: <ul><li>Kernel machine: Kernel based learning tools.</li><li>Linear machine: Interface for all kinds of linear machines like classifiers.</li><li>Distance machine: A distance machine is based on a a-priori choosen distance.</li><li>Gaussian process machine: A base class for Gaussian Processes. </li><li>And many more</li></ul>\nMoving on to the prediction part, Liblinear, a linear SVM is used to do the classification (more on SVMs in this notebook). A linear SVM will find a linear separation with the largest possible margin. Here C is a penalty parameter on the loss function.",
"#prameters to svm\nC=0.9\n\nsvm=LibLinear(C, feats_train, labels)\nsvm.set_liblinear_solver_type(L2R_L2LOSS_SVC)\n\n#train\nsvm.train()\n\nsize=100\n",
"We will now apply on test features to get predictions. For visualising the classification boundary, the whole XY is used as test data, i.e. we predict the class on every point in the grid.",
"x1=linspace(-5.0, 5.0, size)\nx2=linspace(-5.0, 5.0, size)\nx, y=meshgrid(x1, x2)\n#Generate X-Y grid test data\ngrid=RealFeatures(array((ravel(x), ravel(y))))\n\n#apply on test grid\npredictions = svm.apply(grid)\n#get output labels\nz=predictions.get_values().reshape((size, size))\n\n#plot\njet()\nfigure(figsize=(9,6))\ntitle(\"Classification\")\nc=pcolor(x, y, z)\n_=contour(x, y, z, linewidths=1, colors='black', hold=True)\n_=colorbar(c)\n\n_=scatter(preproc_data[0, :], preproc_data[1,:], c=trainlab, cmap=gray(), s=50)\n_=xlabel('Plasma glucose concentration')\n_=ylabel('Body mass index')\np1 = Rectangle((0, 0), 1, 1, fc=\"w\")\np2 = Rectangle((0, 0), 1, 1, fc=\"k\")\nlegend((p1, p2), [\"Non-diabetic\", \"Diabetic\"], loc=2)\ngray()",
"Let us have a look at the weight vector of the separating hyperplane. It should tell us about the linear relationship between the features. The decision boundary is now plotted by solving for $\\bf{w}\\cdot\\bf{x}$ + $\\text{b}=0$. Here $\\text b$ is a bias term which allows the linear function to be offset from the origin of the used coordinate system. Methods get_w() and get_bias() are used to get the necessary values.",
"w=svm.get_w()\nb=svm.get_bias()\n\nx1=linspace(-2.0, 3.0, 100)\n\n#solve for w.x+b=0\ndef solve (x1):\n return -( ( (w[0])*x1 + b )/w[1] )\nx2=map(solve, x1)\n\n#plot\nfigure(figsize=(7,6))\nplot(x1,x2, linewidth=2)\ntitle(\"Decision boundary using w and bias\")\n_=scatter(preproc_data[0, :], preproc_data[1,:], c=trainlab, cmap=gray(), s=50)\n_=xlabel('Plasma glucose concentration')\n_=ylabel('Body mass index')\np1 = Rectangle((0, 0), 1, 1, fc=\"w\")\np2 = Rectangle((0, 0), 1, 1, fc=\"k\")\nlegend((p1, p2), [\"Non-diabetic\", \"Diabetic\"], loc=2)\n\nprint 'w :', w\nprint 'b :', b",
"For this problem, a linear classifier does a reasonable job in distinguishing labelled data. An interpretation could be that individuals below a certain level of BMI and glucose are likely to have no Diabetes. \nFor problems where the data cannot be separated linearly, there are more advanced classification methods, as for example all of Shogun's kernel machines, but more on this later. To play with this interactively have a look at this: web demo \nEvaluating performance and Model selection\nHow do you assess the quality of a prediction? Shogun provides various ways to do this using CEvaluation. The preformance is evaluated by comparing the predicted output and the expected output. Some of the base classes for performance measures are:\n\nBinary class evaluation: used to evaluate binary classification labels. \nClustering evaluation: used to evaluate clustering.\nMean absolute error: used to compute an error of regression model.\nMulticlass accuracy: used to compute accuracy of multiclass classification. \n\nEvaluating on training data should be avoided since the learner may adjust to very specific random features of the training data which are not very important to the general relation. This is called overfitting. Maximising performance on the training examples usually results in algorithms explaining the noise in data (rather than actual patterns), which leads to bad performance on unseen data. The dataset will now be split into two, we train on one part and evaluate performance on other using CAccuracyMeasure.",
"#split features for training and evaluation\nnum_train=700\nfeats=array(glucose_conc)\nfeats_t=feats[:num_train]\nfeats_e=feats[num_train:]\nfeats=array(BMI)\nfeats_t1=feats[:num_train]\nfeats_e1=feats[num_train:]\nfeats_t=vstack((feats_t, feats_t1))\nfeats_e=vstack((feats_e, feats_e1))\n\nfeats_train=RealFeatures(feats_t)\nfeats_evaluate=RealFeatures(feats_e)",
"Let's see the accuracy by applying on test features.",
"label_t=trainlab[:num_train]\nlabels=BinaryLabels(label_t)\nlabel_e=trainlab[num_train:]\nlabels_true=BinaryLabels(label_e)\n\nsvm=LibLinear(C, feats_train, labels)\nsvm.set_liblinear_solver_type(L2R_L2LOSS_SVC)\n\n#train and evaluate\nsvm.train()\noutput=svm.apply(feats_evaluate)\n\n#use AccuracyMeasure to get accuracy\nacc=AccuracyMeasure()\nacc.evaluate(output,labels_true)\naccuracy=acc.get_accuracy()*100\nprint 'Accuracy(%):', accuracy",
"To evaluate more efficiently cross-validation is used. As you might have wondered how are the parameters of the classifier selected? Shogun has a model selection framework to select the best parameters. More description of these things in this notebook.\nMore predictions: Regression\nThis section will demonstrate another type of machine learning problem on real world data.</br> The task is to estimate prices of houses in Boston using the Boston Housing Dataset provided by StatLib library. The attributes are: Weighted distances to employment centres and percentage lower status of the population. Let us see if we can predict a good relationship between the pricing of houses and the attributes. This type of problems are solved using Regression analysis.\nThe data set is now loaded using LibSVMFile as in the previous sections and the attributes required (7th and 12th vector ) are converted to Shogun format features.",
"temp_feats=RealFeatures(CSVFile('../../../data/uci/housing/fm_housing.dat'))\nlabels=RegressionLabels(CSVFile('../../../data/uci/housing/housing_label.dat'))\n\n#rescale to 0...1\npreproc=RescaleFeatures()\npreproc.init(temp_feats)\ntemp_feats.add_preprocessor(preproc)\ntemp_feats.apply_preprocessor(True)\nmat = temp_feats.get_feature_matrix()\n\ndist_centres=mat[7]\nlower_pop=mat[12]\n\nfeats=array(dist_centres)\nfeats=vstack((feats, array(lower_pop)))\nprint feats, feats.shape\n#convert to shogun format features\nfeats_train=RealFeatures(feats)",
"The tool we will use here to perform regression is Kernel ridge regression. Kernel Ridge Regression is a non-parametric version of ridge regression where the kernel trick is used to solve a related linear ridge regression problem in a higher-dimensional space, whose results correspond to non-linear regression in the data-space. Again we train on the data and apply on the XY grid to get predicitions.",
"from mpl_toolkits.mplot3d import Axes3D\nsize=100\nx1=linspace(0, 1.0, size)\nx2=linspace(0, 1.0, size)\nx, y=meshgrid(x1, x2)\n#Generate X-Y grid test data\ngrid=RealFeatures(array((ravel(x), ravel(y))))\n\n#Train on data(both attributes) and predict\nwidth=1.0\ntau=0.5\nkernel=GaussianKernel(feats_train, feats_train, width)\nkrr=KernelRidgeRegression(tau, kernel, labels)\nkrr.train(feats_train)\nkernel.init(feats_train, grid)\nout = krr.apply().get_labels()\n",
"The out variable now contains a relationship between the attributes. Below is an attempt to establish such relationship between the attributes individually. Separate feature instances are created for each attribute. You could skip the code and have a look at the plots directly if you just want the essence.",
"#create feature objects for individual attributes.\nfeats_test=RealFeatures(x1.reshape(1,len(x1)))\nfeats_t0=array(dist_centres)\nfeats_train0=RealFeatures(feats_t0.reshape(1,len(feats_t0)))\nfeats_t1=array(lower_pop)\nfeats_train1=RealFeatures(feats_t1.reshape(1,len(feats_t1)))\n\n#Regression with first attribute\nkernel=GaussianKernel(feats_train0, feats_train0, width)\nkrr=KernelRidgeRegression(tau, kernel, labels)\nkrr.train(feats_train0)\nkernel.init(feats_train0, feats_test)\nout0 = krr.apply().get_labels()\n\n#Regression with second attribute \nkernel=GaussianKernel(feats_train1, feats_train1, width)\nkrr=KernelRidgeRegression(tau, kernel, labels)\nkrr.train(feats_train1)\nkernel.init(feats_train1, feats_test)\nout1 = krr.apply().get_labels()\n\n\n#Visualization of regression\nfig=figure(figsize(20,6))\n#first plot with only one attribute\nfig.add_subplot(131)\ntitle(\"Regression with 1st attribute\")\n_=scatter(feats[0, :], labels.get_labels(), cmap=gray(), s=20)\n_=xlabel('Weighted distances to employment centres ')\n_=ylabel('Median value of homes')\n\n_=plot(x1,out0, linewidth=3)\n\n#second plot with only one attribute\nfig.add_subplot(132)\ntitle(\"Regression with 2nd attribute\")\n_=scatter(feats[1, :], labels.get_labels(), cmap=gray(), s=20)\n_=xlabel('% lower status of the population')\n_=ylabel('Median value of homes')\n_=plot(x1,out1, linewidth=3)\n\n#Both attributes and regression output\nax=fig.add_subplot(133, projection='3d')\nz=out.reshape((size, size))\ngray()\ntitle(\"Regression\")\nax.plot_wireframe(y, x, z, linewidths=2, alpha=0.4)\nax.set_xlabel('% lower status of the population')\nax.set_ylabel('Distances to employment centres ')\nax.set_zlabel('Median value of homes')\nax.view_init(25, 40)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
fivetentaylor/rpyca
|
RPCA_Testing.ipynb
|
mit
|
[
"Robust PCA Example\nRobust PCA is an awesome relatively new method for factoring a matrix into a low rank component and a sparse component. This enables really neat applications for outlier detection, or models that are robust to outliers.",
"%matplotlib inline",
"Make Some Toy Data",
"import matplotlib.pyplot as plt\nimport numpy as np\n\ndef mk_rot_mat(rad=np.pi / 4):\n rot = np.array([[np.cos(rad),-np.sin(rad)], [np.sin(rad), np.cos(rad)]])\n return rot\n\nrot_mat = mk_rot_mat( np.pi / 4)\nx = np.random.randn(100) * 5\ny = np.random.randn(100)\npoints = np.vstack([y,x])\n\nrotated = np.dot(points.T, rot_mat).T",
"Add Some Outliers to Make Life Difficult",
"outliers = np.tile([15,-10], 10).reshape((-1,2))\n\npts = np.vstack([rotated.T, outliers]).T",
"Compute SVD on both the clean data and the outliery data",
"U,s,Vt = np.linalg.svd(rotated)\nU_n,s_n,Vt_n = np.linalg.svd(pts)",
"Just 10 outliers can really screw up our line fit!",
"plt.ylim([-20,20])\nplt.xlim([-20,20])\nplt.scatter(*pts)\npts0 = np.dot(U[0].reshape((2,1)), np.array([-20,20]).reshape((1,2)))\nplt.plot(*pts0)\npts1 = np.dot(U_n[0].reshape((2,1)), np.array([-20,20]).reshape((1,2)))\nplt.plot(*pts1, c='r')",
"Now the robust pca version!",
"import rpca\n\nreload(rpca)\n\nimport logging\nlogger = logging.getLogger(rpca.__name__)\nlogger.setLevel(logging.INFO)",
"Factor the matrix into L (low rank) and S (sparse) parts",
"L,S = rpca.rpca(pts, eps=0.0000001, r=1)",
"Run SVD on the Low Rank Part",
"U,s,Vt = np.linalg.svd(L)",
"And have a look at this!",
"plt.ylim([-20,20])\nplt.xlim([-20,20])\nplt.scatter(*pts)\npts0 = np.dot(U[0].reshape((2,1)), np.array([-20,20]).reshape((1,2)))\nplt.plot(*pts0)\nplt.scatter(*L, c='red')",
"Have a look at the factored components...",
"plt.ylim([-20,20])\nplt.xlim([-20,20])\nplt.scatter(*L)\nplt.scatter(*S, c='red')",
"It really does add back to the original matrix!",
"plt.ylim([-20,20])\nplt.xlim([-20,20])\nplt.scatter(*(L+S))"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
folivetti/BIGDATA
|
Spark/.ipynb_checkpoints/Lab04-Resposta-checkpoint.ipynb
|
mit
|
[
"Lab 5b - k-Means para Quantização de Atributos\nOs algoritmos de agrupamento de dados, além de serem utilizados em análise exploratória para extrair padrões de similaridade entre os objetos, pode ser utilizado para compactar o espaço de dados.\nNeste notebook vamos utilizar nossa base de dados de Sentiment Movie Reviews para os experimentos. Primeiro iremos utilizar a técnica word2vec que aprende uma transformação dos tokens de uma base em um vetor de atributos. Em seguida, utilizaremos o algoritmo k-Means para compactar a informação desses atributos e projetar cada objeto em um espaço de atributos de tamanho fixo.\nAs células-exercícios iniciam com o comentário # EXERCICIO e os códigos a serem completados estão marcados pelos comentários <COMPLETAR>.\n Nesse notebook: \nParte 1: Word2Vec\nParte 2: k-Means para quantizar os atributos\nParte 3: Aplicando um k-NN\nParte 0: Preliminares\nPara este notebook utilizaremos a base de dados Movie Reviews que será utilizada para o segundo projeto.\nA base de dados tem os campos separados por '\\t' e o seguinte formato:\n\"id da frase\",\"id da sentença\",\"Frase\",\"Sentimento\"\nPara esse laboratório utilizaremos apenas o campo \"Frase\".",
"import os\nimport numpy as np\n\ndef parseRDD(point):\n \"\"\" Parser for the current dataset. It receives a data point and return\n a sentence (third field).\n Args:\n point (str): input data point\n Returns:\n str: a string\n \"\"\" \n data = point.split('\\t')\n return (int(data[0]),data[2])\n\ndef notempty(point):\n \"\"\" Returns whether the point string is not empty\n Args:\n point (str): input string\n Returns:\n bool: True if it is not empty\n \"\"\" \n return len(point[1])>0\n\nfilename = os.path.join(\"Data\",\"MovieReviews2.tsv\")\nrawRDD = sc.textFile(filename,100)\nheader = rawRDD.take(1)[0]\n\ndataRDD = (rawRDD\n #.sample(False, 0.1, seed=42)\n .filter(lambda x: x!=header)\n .map(parseRDD)\n .filter(notempty)\n #.sample( False, 0.1, 42 )\n )\n\nprint ('Read {} lines'.format(dataRDD.count()))\nprint ('Sample line: {}'.format(dataRDD.takeSample(False, 1)[0]))",
"Parte 1: Word2Vec\nA técnica word2vec aprende através de uma rede neural semântica uma representação vetorial de cada token em um corpus de tal forma que palavras semanticamente similares sejam similares na representação vetorial.\nO PySpark contém uma implementação dessa técnica, para aplicá-la basta passar um RDD em que cada objeto representa um documento e cada documento é representado por uma lista de tokens na ordem em que aparecem originalmente no corpus. Após o processo de treinamento, podemos transformar um token utilizando o método transform para transformar cada token em uma representaçã vetorial.\nNesse ponto, cada objeto de nossa base será representada por uma matriz de tamanho variável.\n(1a) Gerando RDD de tokens\nUtilize a função de tokenização tokenize do Lab4d para gerar uma RDD wordsRDD contendo listas de tokens da nossa base original.",
"# EXERCICIO\nimport re\n\nsplit_regex = r'\\W+'\n\nstopfile = os.path.join(\"Data\",\"stopwords.txt\")\nstopwords = set(sc.textFile(stopfile).collect())\n\ndef tokenize(string):\n \"\"\" An implementation of input string tokenization that excludes stopwords\n Args:\n string (str): input string\n Returns:\n list: a list of tokens without stopwords\n \"\"\"\n str_list = re.split(split_regex, string)\n str_list = filter(lambda w: len(w)>0, map(lambda w: w.lower(), str_list))\n return [w for w in str_list if w not in stopwords]\n\nwordsRDD = dataRDD.map(lambda x: tokenize(x[1]))\n\nprint (wordsRDD.take(1)[0])\n\n# TEST Tokenize a String (1a)\nassert wordsRDD.take(1)[0]==[u'quiet', u'introspective', u'entertaining', u'independent', u'worth', u'seeking'], 'lista incorreta!'",
"(1b) Aplicando transformação word2vec\nCrie um modelo word2vec aplicando o método fit na RDD criada no exercício anterior.\nPara aplicar esse método deve ser fazer um pipeline de métodos, primeiro executando Word2Vec(), em seguida aplicando o método setVectorSize() com o tamanho que queremos para nosso vetor (utilize tamanho 5), seguido de setSeed() para a semente aleatória, em caso de experimentos controlados (utilizaremos 42) e, finalmente, fit() com nossa wordsRDD como parâmetro.",
"# EXERCICIO\nfrom pyspark.mllib.feature import Word2Vec\n\nmodel = (Word2Vec()\n .setVectorSize(5)\n .setSeed(42)\n .fit(wordsRDD))\n\nprint (model.transform(u'entertaining'))\nprint (list(model.findSynonyms(u'entertaining', 2)))\n\ndist = np.abs(model.transform(u'entertaining')-np.array([0.0136831374839,0.00371457682922,-0.135785803199,0.047585401684,0.0414853096008])).mean()\nassert dist<1e-6, 'valores incorretos'\nassert list(model.findSynonyms(u'entertaining', 1))[0][0] == 'god', 'valores incorretos'",
"(1c) Gerando uma RDD de matrizes\nComo primeiro passo, precisamos gerar um dicionário em que a chave são as palavras e o valor é o vetor representativo dessa palavra.\nPara isso vamos primeiro gerar uma lista uniqueWords contendo as palavras únicas do RDD words, removendo aquelas que aparecem menos do que 5 vezes $^1$. Em seguida, criaremos um dicionário w2v que a chave é um token e o valor é um np.array do vetor transformado daquele token$^2$.\nFinalmente, vamos criar uma RDD chamada vectorsRDD em que cada registro é representado por uma matriz onde cada linha representa uma palavra transformada.\n1\nNa versão 1.3 do PySpark o modelo Word2Vec utiliza apenas os tokens que aparecem mais do que 5 vezes no corpus, na versão 1.4 isso é parametrizado.\n2\nNa versão 1.4 do PySpark isso pode ser feito utilizando o método `getVectors()",
"# EXERCICIO\nuniqueWords = (wordsRDD\n .flatMap(lambda ws: [(w, 1) for w in ws])\n .reduceByKey(lambda x,y: x+y)\n .filter(lambda wf: wf[1]>=5)\n .map(lambda wf: wf[0])\n .collect()\n )\n\nprint ('{} tokens únicos'.format(len(uniqueWords)))\n\nw2v = {}\nfor w in uniqueWords:\n w2v[w] = model.transform(w)\nw2vb = sc.broadcast(w2v) \nprint ('Vetor entertaining: {}'.format( w2v[u'entertaining']))\n\nvectorsRDD = (wordsRDD\n .map(lambda ws: np.array([w2vb.value[w] for w in ws if w in w2vb.value]))\n )\nrecs = vectorsRDD.take(2)\nfirstRec, secondRec = recs[0], recs[1]\nprint (firstRec.shape, secondRec.shape)\n\n\n# TEST Tokenizing the small datasets (1c)\nassert len(uniqueWords) == 3388, 'valor incorreto'\nassert np.mean(np.abs(w2v[u'entertaining']-[0.0136831374839,0.00371457682922,-0.135785803199,0.047585401684,0.0414853096008]))<1e-6,'valor incorreto'\nassert secondRec.shape == (10,5)",
"Parte 2: k-Means para quantizar os atributos\nNesse momento é fácil perceber que não podemos aplicar nossas técnicas de aprendizado supervisionado nessa base de dados:\n\n\nA regressão logística requer um vetor de tamanho fixo representando cada objeto\n\n\nO k-NN necessita uma forma clara de comparação entre dois objetos, que métrica de similaridade devemos aplicar?\n\n\nPara resolver essa situação, vamos executar uma nova transformação em nossa RDD. Primeiro vamos aproveitar o fato de que dois tokens com significado similar são mapeados em vetores similares, para agrupá-los em um atributo único.\nAo aplicarmos o k-Means nesse conjunto de vetores, podemos criar $k$ pontos representativos e, para cada documento, gerar um histograma de contagem de tokens nos clusters gerados.\n(2a) Agrupando os vetores e criando centros representativos\nComo primeiro passo vamos gerar um RDD com os valores do dicionário w2v. Em seguida, aplicaremos o algoritmo k-Means com $k = 200$.",
"# EXERCICIO\nfrom pyspark.mllib.clustering import KMeans\n\nvectors2RDD = sc.parallelize(np.array(list(w2v.values())),1)\nprint ('Sample vector: {}'.format(vectors2RDD.take(1)))\n\nmodelK = KMeans.train(vectors2RDD, 200, seed=42)\n\nclustersRDD = vectors2RDD.map(lambda x: modelK.predict(x))\nprint ('10 first clusters allocation: {}'.format(clustersRDD.take(10)))\n\n# TEST Amazon record with the most tokens (1d)\nassert clustersRDD.take(10)==[142, 83, 42, 0, 87, 52, 190, 17, 56, 0], 'valor incorreto'",
"(2b) Transformando matriz de dados em vetores quantizados\nO próximo passo consiste em transformar nosso RDD de frases em um RDD de pares (id, vetor quantizado). Para isso vamos criar uma função quantizador que receberá como parâmetros o objeto, o modelo de k-means, o valor de k e o dicionário word2vec.\nPara cada ponto, vamos separar o id e aplicar a função tokenize na string. Em seguida, transformamos a lista de tokens em uma matriz word2vec. Finalmente, aplicamos cada vetor dessa matriz no modelo de k-Means, gerando um vetor de tamanho $k$ em que cada posição $i$ indica quantos tokens pertencem ao cluster $i$.",
"# EXERCICIO\ndef quantizador(point, model, k, w2v):\n key = point[0]\n words = tokenize(point[1])\n matrix = np.array( [w2v[w] for w in words if w in w2v] )\n features = np.zeros(k)\n for v in matrix:\n c = model.predict(v)\n features[c] += 1\n return (key, features)\n \nquantRDD = dataRDD.map(lambda x: quantizador(x, modelK, 500, w2v))\n\nprint (quantRDD.take(1))\n\n# TEST Implement a TF function (2a)\nassert quantRDD.take(1)[0][1].sum() == 5, 'valores incorretos'"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
shareactorIO/pipeline
|
source.ml/jupyterhub.ml/notebooks/zz_old/TensorFlow/GoogleTraining/workshop_sections/mnist_series/the_hard_way/mnist_onehlayer.ipynb
|
apache-2.0
|
[
"Copyright 2016 Google Inc. All Rights Reserved.\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n\nThis notebook is similar to this python script, and uses this README.\nIt implements a \"one hidden layer\" version of MNIST. \nAs part of this lab, you'll add another hidden layer (and make a few other small modifications). You can make your edits in the notebook if you like, or edit the corresponding python script.\nStart with some imports and variable definitions:",
"import argparse\nimport math\nimport os\nimport time\n\nfrom six.moves import xrange\nimport tensorflow as tf\nfrom tensorflow.contrib.learn.python.learn.datasets.mnist import read_data_sets\n\n\n# Define some constants.\n# The MNIST dataset has 10 classes, representing the digits 0 through 9.\nNUM_CLASSES = 10\n# The MNIST images are always 28x28 pixels.\nIMAGE_SIZE = 28\nIMAGE_PIXELS = IMAGE_SIZE * IMAGE_SIZE\n# Batch size. Must be evenly dividable by dataset sizes.\nBATCH_SIZE = 100\nEVAL_BATCH_SIZE = 3\n# Number of units in hidden layers.\nHIDDEN1_UNITS = 128\n\nNUM_STEPS = 25000\nDATA_DIR = \"MNIST_data\"\nMODEL_DIR = default=os.path.join(\n \"/tmp/tfmodels/mnist_onehlayer\",\n str(int(time.time())))\n",
"Next, we'll define a function that builds the core of the model graph. We'll see below, when this function is called, that the 'images' arg passed to this function is the images input placeholder.",
"# Build inference graph.\ndef mnist_inference(images, hidden1_units):\n \"\"\"Build the MNIST model up to where it may be used for inference.\n Args:\n images: Images placeholder.\n hidden1_units: Size of the first hidden layer.\n Returns:\n logits: Output tensor with the computed logits.\n \"\"\"\n # Hidden 1\n with tf.name_scope('hidden1'):\n weights = tf.Variable(\n tf.truncated_normal([IMAGE_PIXELS, hidden1_units],\n stddev=1.0 / math.sqrt(float(IMAGE_PIXELS))),\n name='weights')\n biases = tf.Variable(tf.zeros([hidden1_units]),\n name='biases')\n hidden1 = tf.nn.relu(tf.matmul(images, weights) + biases)\n # Linear\n with tf.name_scope('softmax_linear'):\n weights = tf.Variable(\n tf.truncated_normal([hidden1_units, NUM_CLASSES],\n stddev=1.0 / math.sqrt(float(hidden1_units))),\n name='weights')\n biases = tf.Variable(tf.zeros([NUM_CLASSES]),\n name='biases')\n logits = tf.matmul(hidden1, weights) + biases\n return logits\n",
"Next, we'll define a function that builds on the inference graph above in order to create a training graph. We define a loss function, create an optimizer, and create a 'train_op' that tells the optimizer to apply the gradients that minimize the loss.",
"# Build training graph.\ndef mnist_training(logits, labels, learning_rate):\n \"\"\"Build the training graph.\n\n Args:\n logits: Logits tensor, float - [BATCH_SIZE, NUM_CLASSES].\n labels: Labels tensor, int32 - [BATCH_SIZE], with values in the\n range [0, NUM_CLASSES).\n learning_rate: The learning rate to use for gradient descent.\n Returns:\n train_op: The Op for training.\n loss: The Op for calculating loss.\n \"\"\"\n # Create an operation that calculates loss.\n labels = tf.to_int64(labels)\n cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(\n logits, labels, name='xentropy')\n loss = tf.reduce_mean(cross_entropy, name='xentropy_mean')\n \n # Create the gradient descent optimizer with the given learning rate.\n optimizer = tf.train.GradientDescentOptimizer(learning_rate)\n # Create a variable to track the global step.\n global_step = tf.Variable(0, name='global_step', trainable=False)\n \n # Use the optimizer to apply the gradients that minimize the loss\n # (and also increment the global step counter) as a single training step.\n train_op = optimizer.minimize(loss, global_step=global_step)\n return train_op, loss\n",
"Now we'll put it all together. We process the input data and generate the necessary placeholders.\nWe'll also add ops to the graph for calculating accuracy.\nWe'll also add support for generating \"summary\" info used by TensorBoard.\nThen, we create a session in which to run a training loop. \nAfter training, the model is checkpointed to a file.\nFinally, we show that we can load a checkpointed model file into a new session, and run a prediction based on the checkpointed model.",
"def model_train_and_eval():\n \"\"\"Build the full graph for feeding inputs, training, and\n saving checkpoints. Run the training. Then, load the saved graph and\n run some predictions.\"\"\"\n\n # Get input data: get the sets of images and labels for training,\n # validation, and test on MNIST.\n data_sets = read_data_sets(DATA_DIR, False)\n\n mnist_graph = tf.Graph()\n with mnist_graph.as_default():\n # Generate placeholders for the images and labels.\n images_placeholder = tf.placeholder(tf.float32)\n labels_placeholder = tf.placeholder(tf.int32)\n tf.add_to_collection(\"images\", images_placeholder) # Remember this Op.\n tf.add_to_collection(\"labels\", labels_placeholder) # Remember this Op.\n\n # Build a Graph that computes predictions from the inference model.\n logits = mnist_inference(images_placeholder,\n HIDDEN1_UNITS)\n tf.add_to_collection(\"logits\", logits) # Remember this Op.\n\n # Add to the Graph the Ops that calculate and apply gradients.\n train_op, loss = mnist_training(\n logits, labels_placeholder, 0.01)\n\n # prediction accuracy\n _, indices_op = tf.nn.top_k(logits)\n flattened = tf.reshape(indices_op, [-1])\n correct_prediction = tf.cast(\n tf.equal(labels_placeholder, flattened), tf.float32)\n accuracy = tf.reduce_mean(correct_prediction)\n\n # Define info to be used by the SummaryWriter. This will let\n # TensorBoard plot values during the training process.\n loss_summary = tf.scalar_summary(\"loss\", loss)\n train_summary_op = tf.merge_summary([loss_summary])\n\n # Add the variable initializer Op.\n init = tf.initialize_all_variables()\n\n # Create a saver for writing training checkpoints.\n saver = tf.train.Saver()\n\n # Create a summary writer.\n print(\"Writing Summaries to %s\" % MODEL_DIR)\n train_summary_writer = tf.train.SummaryWriter(MODEL_DIR)\n\n # Run training for MAX_STEPS and save checkpoint at the end.\n with tf.Session(graph=mnist_graph) as sess:\n # Run the Op to initialize the variables.\n sess.run(init)\n\n # Start the training loop.\n print(\"Starting training...\")\n for step in xrange(NUM_STEPS):\n # Read a batch of images and labels.\n images_feed, labels_feed = data_sets.train.next_batch(BATCH_SIZE)\n\n # Run one step of the model. The return values are the activations\n # from the `train_op` (which is discarded) and the `loss` Op. To\n # inspect the values of your Ops or variables, you may include them\n # in the list passed to sess.run() and the value tensors will be\n # returned in the tuple from the call.\n _, loss_value, tsummary, acc = sess.run(\n [train_op, loss, train_summary_op, accuracy],\n feed_dict={images_placeholder: images_feed,\n labels_placeholder: labels_feed})\n if step % 100 == 0:\n # Write summary info\n train_summary_writer.add_summary(tsummary, step)\n if step % 1000 == 0:\n # Print loss/accuracy info\n print('----Step %d: loss = %.4f' % (step, loss_value))\n print(\"accuracy: %s\" % acc)\n\n print(\"\\nFinished training. 
Writing checkpoint file.\")\n checkpoint_file = os.path.join(MODEL_DIR, 'checkpoint')\n saver.save(sess, checkpoint_file, global_step=step)\n _, loss_value = sess.run(\n [train_op, loss],\n feed_dict={images_placeholder: data_sets.test.images,\n labels_placeholder: data_sets.test.labels})\n print(\"Test set loss: %s\" % loss_value)\n\n # Run evaluation based on the saved checkpoint.\n with tf.Session(graph=tf.Graph()) as sess:\n checkpoint_file = tf.train.latest_checkpoint(MODEL_DIR)\n print(\"\\nRunning predictions based on saved checkpoint.\")\n print(\"checkpoint file: {}\".format(checkpoint_file))\n # Load the saved meta graph and restore variables\n saver = tf.train.import_meta_graph(\"{}.meta\".format(checkpoint_file))\n saver.restore(sess, checkpoint_file)\n\n # Retrieve the Ops we 'remembered'.\n logits = tf.get_collection(\"logits\")[0]\n images_placeholder = tf.get_collection(\"images\")[0]\n labels_placeholder = tf.get_collection(\"labels\")[0]\n\n # Add an Op that chooses the top k predictions.\n eval_op = tf.nn.top_k(logits)\n\n # Run evaluation.\n images_feed, labels_feed = data_sets.validation.next_batch(\n EVAL_BATCH_SIZE)\n prediction = sess.run(eval_op,\n feed_dict={images_placeholder: images_feed,\n labels_placeholder: labels_feed})\n for i in range(len(labels_feed)):\n print(\"Ground truth: %d\\nPrediction: %d\" %\n (labels_feed[i], prediction.indices[i][0]))",
"The payoff for all those function definitions... now we'll run model_train_and_eval()!",
"model_train_and_eval()",
"This model did okay... but not great. Next we'll try adding another hidden layer.\nContrast this code with that in the previous lab, which used TensorFlow's high-level APIs. You can probably see how it's much easier to use those high-level \"pre-canned\" Estimators where appropriate."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
evanmiltenburg/python-for-text-analysis
|
Chapters/Chapter 09 - Looping over containers.ipynb
|
apache-2.0
|
[
"Chapter 9 - Looping over containers\nSo far, you have been introduced to integers (e.g., 1), strings (e.g., 'hello'), lists (e.g. [1, 'a', 3]), and sets (e.g., {1, 2, 'Python'}). You have learned how to create and inspect them using methods and built-in functions. \nHowever, you'll find that most of the time, you want to carry out some operation for all the items in a sequence. Built-in functions and methods are then not always the answer. However, the for-loop is!\nThis chapter aims to introduce you to the for-loop, which is the most commonly used loop.\nAt the end of this chapter, you will be able to:\n* loop over a list\n* loop over a set\n* loop over a string\n* work with a new data structure: tuples\n* use two slightly more advanced concepts:\n * break\n * continue\nIf you want to learn more about these topics, you might find the following links useful:\n* Loop like a native: really good video \nIf you have questions about this chapter, please contact us (cltl.python.course@gmail.com).\n1. Looping over a list\nLet's look at an example using a for-loop over a list.",
"for number in [1,2,3]:\n print(number)",
"This loop prints the numbers 1, 2, and 3, each on a new line. The variable name number is just something I have chosen. It could have been anything, even something like sugar_bunny. But number is nice and descriptive. \nOK, so how does the loop work?\n\nThe Python interpreter starts by checking whether there's anything to iterate over. If the list is empty, it just passes over the for-loop and does nothing.\nThen, the first value in the iterable (in this case a list) gets assigned to the variable number.\nFollowing this, we enter a for-loop context, indicated by the indentation (four spaces preceding the print function). This for-loop context can be as big as you want. All Python cares about is those four spaces. Everything that is indented is part of the for-loop context.\nThen, Python carries out all the operations in the for-loop context. In this case, this is just print(number). Because number refers to the first element of the list, it prints 1.\nOnce all operations in the for-loop context have been carried out, the interpreter checks if there are any more elements in the list. If so, the next value (in this case 2) gets assigned to the variable number.\nThen, we move to step 3 again: enter the for-loop context, carry out all the operations, and check if there's another element in the list, and so on, until there are no more elements left.\n\nPlease note that if a container (in this case a list) is empty, we will not print anything.",
"for number in []:\n print(number)",
"Loops work extremely well with if-statements",
"for item in ['Python', 'is', 'really', 'nice', 'right?']:\n if item.startswith('r'):\n print(item, 'startswith the letter r')\n else:\n print(item, 'does not start with the letter r')",
"Tip:\nYou can reverse the order of a list by using list.reverse() or list[::-1]:",
"a_list = [1,2,3,4]\na_list.reverse()\n\nfor item in a_list:\n print(item)\n\na_list = [1,2,3,4]\n\nfor item in a_list[::-1]:\n print(item)",
"2. Looping over a set\nIt is also possible to loop over a set",
"a_set = {'sets', 'are', 'unordered'}\nfor item in a_set:\n print(item)",
"3. Looping over a string\nAlthough you might not expect this, you can actually loop over a string. You will not want to use this very often, but note that it is possible. You may encounter it later on, when you do it accidentally (instead of looping over a list).",
"word = \"hippopotamus\"\n\nfor letter in word:\n print(letter)\n \n\n# a more advanced example...\n\ntext = 'Python can be confusing!'\nwords = text.split()\n\n# something went wrong here - can you fix it?\nfor word in text:\n print(word)",
"4. Tuples\nOne data structure (a new one!) is very frequently used with for loops. This data structure is a tuple. It's very simple to create a tuple.",
"a_tuple = (1, 'a', 2)\nprint(a_tuple)",
"a tuple is defined in the same way as a list, except that the whole set of elements is enclosed in parentheses instead of square brackets.\nthe elements of a tuple have a defined order, just like a list.\ntuples can contain immutable objects.\ntuples can contain mutable objects, but we would advise you not to use sets and lists in tuples\nitems cannot be added or removed (with some exceptions which are not important here)\na tuple can be empty\ntuples have two methods: index and count\n\nImportant for use\n* you can unpack tuples\n* tuples are often used with for-loops",
"help(tuple)",
"A tuple can be empty. You can use () or tuple():",
"an_empty_tuple = ()\nprint(an_empty_tuple)\n\nanother_empty_tuple = tuple()\nprint(another_empty_tuple)",
"5. Unpacking variables\nIt's possible to unpack tuples to separate variables",
"a_tuple = (1, 'a', 2)\nfirst_el, second_el, third_el = a_tuple\nprint(first_el)\nprint(second_el)\nprint(third_el)",
"5.1 Using tuples in a for-loop\nIn addition, tuples are often used with for-loops. When working with Python, it's quite common that you end up with data structures like the following example:",
"language_data = [('the', 'determiner'),\n ('house', 'noun'),\n ('is', 'verb'),\n ('big', 'adjective')]",
"Usually, you would like to loop through such an example. One way of doing that is the following:",
"for item in language_data:\n print(item[0], item[1])",
"However, we can unpack the tuple within the for-loop to make it more readable:",
"for word, part_of_speech in language_data:\n print(word, part_of_speech)",
"Note about unpacking\nUnpacking is an operation that can also be used with other containers, for instance, with lists:",
"a_list = [1,2,3]\nfirst_el, second_el, third_el = a_list\nprint(first_el)\nprint(second_el)\nprint(third_el)",
"6. Continue and break\nbreak\nThe break statement lets us escape a loop.",
"word = \"hippopotamus\"\n\nfor letter in word:\n print(letter)\n if letter ==\"o\":\n break",
"continue\nThe continue statement ends the current iteration and jumps to the top of the loop and starts the next iteration.",
"word = \"hippopotamus\"\n\nfor letter in word:\n if letter ==\"o\":\n continue\n print(letter)",
"In the two examples above, not all letters in the word 'hippopotamus' are printed. Both break and continue teleport you to another part of the code. break teleports out of the loop, continue teleports you to the next iteration of the loop.\nExercises\nExercise 1:\nUse a loop to print each word in the list in a new line (i.e. only use a single print statement and do not use '\\n' to start a new line).",
"sentence = 'Now I know a lot about looping'\nword_list = sentence.split()\n\n# your code here\n",
"Exercise 2:\nWe have a collection of numbers and want to know how many unique numbers we have. Then we want to print each of them on a single line using a single print statment and not using '\\n' to stat a new line.",
"number_collection = [1,2,2,3,50, 50, 60, 57, 58, 3, 37, 37]\nunique_number = # your code here\n\n# print each number on a single line\n# your code here",
"Exercise 3:\nWhy might unpacking lists in this way sometimes be a bad idea? \nHint: Think of mutability and consider the code below.",
"a_list = [1,2,3]\nfirst_el, second_el, third_el = a_list\nprint(first_el)\nprint(second_el)\nprint(third_el)\n\na_list.append(5)\nfirst_el, second_el, third_el = a_list\nprint(first_el)\nprint(second_el)\nprint(third_el)\n\na_list.append(5)\nfirst_el, second_el, third_el = a_list\nprint(first_el)\nprint(second_el)\nprint(third_el)",
"Exercise 4:\nYou could even do this with a set. However, not a good idea either? Why? Hint: Think about order.",
"a_set = {1,2,3,}\nfirst_el, second_el, third_el = a_set\nprint(first_el)\nprint(second_el)\nprint(third_el)\n",
"Exercise 5:\nPlease use the break statement to make the loop stop after word 'really'",
"['Python', 'is', 'really', 'nice', 'right?']\n# your code here",
"Please use the continue statement to make sure that words that start with the letter 'r' are not printed",
"['Python', 'is', 'really', 'nice', 'right?']\n# your code here"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
aldian/tensorflow
|
tensorflow/lite/micro/examples/micro_speech/train/train_micro_speech_model.ipynb
|
apache-2.0
|
[
"Train a Simple Audio Recognition Model\nThis notebook demonstrates how to train a 20 kB Simple Audio Recognition model to recognize keywords in speech.\nThe model created in this notebook is used in the micro_speech example for TensorFlow Lite for MicroControllers.\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/micro/examples/micro_speech/train/train_micro_speech_model.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/micro/examples/micro_speech/train/train_micro_speech_model.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n</table>\n\nTraining is much faster using GPU acceleration. Before you proceed, ensure you are using a GPU runtime by going to Runtime -> Change runtime type and set Hardware accelerator: GPU. Training 15,000 iterations will take 1.5 - 2 hours on a GPU runtime.\nConfigure Defaults\nMODIFY the following constants for your specific use case.",
"# A comma-delimited list of the words you want to train for.\n# The options are: yes,no,up,down,left,right,on,off,stop,go\n# All the other words will be used to train an \"unknown\" label and silent\n# audio data with no spoken words will be used to train a \"silence\" label.\nWANTED_WORDS = \"yes,no\"\n\n# The number of steps and learning rates can be specified as comma-separated\n# lists to define the rate at each stage. For example,\n# TRAINING_STEPS=12000,3000 and LEARNING_RATE=0.001,0.0001\n# will run 12,000 training loops in total, with a rate of 0.001 for the first\n# 8,000, and 0.0001 for the final 3,000.\nTRAINING_STEPS = \"12000,3000\"\nLEARNING_RATE = \"0.001,0.0001\"\n\n# Calculate the total number of steps, which is used to identify the checkpoint\n# file name.\nTOTAL_STEPS = str(sum(map(lambda string: int(string), TRAINING_STEPS.split(\",\"))))\n\n# Print the configuration to confirm it\nprint(\"Training these words: %s\" % WANTED_WORDS)\nprint(\"Training steps in each stage: %s\" % TRAINING_STEPS)\nprint(\"Learning rate in each stage: %s\" % LEARNING_RATE)\nprint(\"Total number of training steps: %s\" % TOTAL_STEPS)",
"DO NOT MODIFY the following constants as they include filepaths used in this notebook and data that is shared during training and inference.",
"# Calculate the percentage of 'silence' and 'unknown' training samples required\n# to ensure that we have equal number of samples for each label.\nnumber_of_labels = WANTED_WORDS.count(',') + 1\nnumber_of_total_labels = number_of_labels + 2 # for 'silence' and 'unknown' label\nequal_percentage_of_training_samples = int(100.0/(number_of_total_labels))\nSILENT_PERCENTAGE = equal_percentage_of_training_samples\nUNKNOWN_PERCENTAGE = equal_percentage_of_training_samples\n\n# Constants which are shared during training and inference\nPREPROCESS = 'micro'\nWINDOW_STRIDE = 20\nMODEL_ARCHITECTURE = 'tiny_conv' # Other options include: single_fc, conv,\n # low_latency_conv, low_latency_svdf, tiny_embedding_conv\n\n# Constants used during training only\nVERBOSITY = 'WARN'\nEVAL_STEP_INTERVAL = '1000'\nSAVE_STEP_INTERVAL = '1000'\n\n# Constants for training directories and filepaths\nDATASET_DIR = 'dataset/'\nLOGS_DIR = 'logs/'\nTRAIN_DIR = 'train/' # for training checkpoints and other files.\n\n# Constants for inference directories and filepaths\nimport os\nMODELS_DIR = 'models'\nif not os.path.exists(MODELS_DIR):\n os.mkdir(MODELS_DIR)\nMODEL_TF = os.path.join(MODELS_DIR, 'model.pb')\nMODEL_TFLITE = os.path.join(MODELS_DIR, 'model.tflite')\nFLOAT_MODEL_TFLITE = os.path.join(MODELS_DIR, 'float_model.tflite')\nMODEL_TFLITE_MICRO = os.path.join(MODELS_DIR, 'model.cc')\nSAVED_MODEL = os.path.join(MODELS_DIR, 'saved_model')\n\nQUANT_INPUT_MIN = 0.0\nQUANT_INPUT_MAX = 26.0\nQUANT_INPUT_RANGE = QUANT_INPUT_MAX - QUANT_INPUT_MIN",
"Setup Environment\nInstall Dependencies",
"%tensorflow_version 1.x\nimport tensorflow as tf",
"DELETE any old data from previous runs",
"!rm -rf {DATASET_DIR} {LOGS_DIR} {TRAIN_DIR} {MODELS_DIR}",
"Clone the TensorFlow Github Repository, which contains the relevant code required to run this tutorial.",
"!git clone -q --depth 1 https://github.com/tensorflow/tensorflow",
"Load TensorBoard to visualize the accuracy and loss as training proceeds.",
"%load_ext tensorboard\n%tensorboard --logdir {LOGS_DIR}",
"Training\nThe following script downloads the dataset and begin training.",
"!python tensorflow/tensorflow/examples/speech_commands/train.py \\\n--data_dir={DATASET_DIR} \\\n--wanted_words={WANTED_WORDS} \\\n--silence_percentage={SILENT_PERCENTAGE} \\\n--unknown_percentage={UNKNOWN_PERCENTAGE} \\\n--preprocess={PREPROCESS} \\\n--window_stride={WINDOW_STRIDE} \\\n--model_architecture={MODEL_ARCHITECTURE} \\\n--how_many_training_steps={TRAINING_STEPS} \\\n--learning_rate={LEARNING_RATE} \\\n--train_dir={TRAIN_DIR} \\\n--summaries_dir={LOGS_DIR} \\\n--verbosity={VERBOSITY} \\\n--eval_step_interval={EVAL_STEP_INTERVAL} \\\n--save_step_interval={SAVE_STEP_INTERVAL}",
"Skipping the training\nIf you don't want to spend an hour or two training the model from scratch, you can download pretrained checkpoints by uncommenting the lines below (removing the '#'s at the start of each line) and running them.",
"#!curl -O \"https://storage.googleapis.com/download.tensorflow.org/models/tflite/speech_micro_train_2020_05_10.tgz\"\n#!tar xzf speech_micro_train_2020_05_10.tgz",
"Generate a TensorFlow Model for Inference\nCombine relevant training results (graph, weights, etc) into a single file for inference. This process is known as freezing a model and the resulting model is known as a frozen model/graph, as it cannot be further re-trained after this process.",
"!rm -rf {SAVED_MODEL}\n!python tensorflow/tensorflow/examples/speech_commands/freeze.py \\\n--wanted_words=$WANTED_WORDS \\\n--window_stride_ms=$WINDOW_STRIDE \\\n--preprocess=$PREPROCESS \\\n--model_architecture=$MODEL_ARCHITECTURE \\\n--start_checkpoint=$TRAIN_DIR$MODEL_ARCHITECTURE'.ckpt-'{TOTAL_STEPS} \\\n--save_format=saved_model \\\n--output_file={SAVED_MODEL}",
"Generate a TensorFlow Lite Model\nConvert the frozen graph into a TensorFlow Lite model, which is fully quantized for use with embedded devices.\nThe following cell will also print the model size, which will be under 20 kilobytes.",
"import sys\n# We add this path so we can import the speech processing modules.\nsys.path.append(\"/content/tensorflow/tensorflow/examples/speech_commands/\")\nimport input_data\nimport models\nimport numpy as np\n\nSAMPLE_RATE = 16000\nCLIP_DURATION_MS = 1000\nWINDOW_SIZE_MS = 30.0\nFEATURE_BIN_COUNT = 40\nBACKGROUND_FREQUENCY = 0.8\nBACKGROUND_VOLUME_RANGE = 0.1\nTIME_SHIFT_MS = 100.0\n\nDATA_URL = 'https://storage.googleapis.com/download.tensorflow.org/data/speech_commands_v0.02.tar.gz'\nVALIDATION_PERCENTAGE = 10\nTESTING_PERCENTAGE = 10\n\nmodel_settings = models.prepare_model_settings(\n len(input_data.prepare_words_list(WANTED_WORDS.split(','))),\n SAMPLE_RATE, CLIP_DURATION_MS, WINDOW_SIZE_MS,\n WINDOW_STRIDE, FEATURE_BIN_COUNT, PREPROCESS)\naudio_processor = input_data.AudioProcessor(\n DATA_URL, DATASET_DIR,\n SILENT_PERCENTAGE, UNKNOWN_PERCENTAGE,\n WANTED_WORDS.split(','), VALIDATION_PERCENTAGE,\n TESTING_PERCENTAGE, model_settings, LOGS_DIR)\n\nwith tf.Session() as sess:\n float_converter = tf.lite.TFLiteConverter.from_saved_model(SAVED_MODEL)\n float_tflite_model = float_converter.convert()\n float_tflite_model_size = open(FLOAT_MODEL_TFLITE, \"wb\").write(float_tflite_model)\n print(\"Float model is %d bytes\" % float_tflite_model_size)\n\n converter = tf.lite.TFLiteConverter.from_saved_model(SAVED_MODEL)\n converter.optimizations = [tf.lite.Optimize.DEFAULT]\n converter.inference_input_type = tf.lite.constants.INT8\n converter.inference_output_type = tf.lite.constants.INT8\n def representative_dataset_gen():\n for i in range(100):\n data, _ = audio_processor.get_data(1, i*1, model_settings,\n BACKGROUND_FREQUENCY, \n BACKGROUND_VOLUME_RANGE,\n TIME_SHIFT_MS,\n 'testing',\n sess)\n flattened_data = np.array(data.flatten(), dtype=np.float32).reshape(1, 1960)\n yield [flattened_data]\n converter.representative_dataset = representative_dataset_gen\n tflite_model = converter.convert()\n tflite_model_size = open(MODEL_TFLITE, \"wb\").write(tflite_model)\n print(\"Quantized model is %d bytes\" % tflite_model_size)\n",
"Testing the TensorFlow Lite model's accuracy\nVerify that the model we've exported is still accurate, using the TF Lite Python API and our test set.",
"# Helper function to run inference\ndef run_tflite_inference(tflite_model_path, model_type=\"Float\"):\n # Load test data\n np.random.seed(0) # set random seed for reproducible test results.\n with tf.Session() as sess:\n test_data, test_labels = audio_processor.get_data(\n -1, 0, model_settings, BACKGROUND_FREQUENCY, BACKGROUND_VOLUME_RANGE,\n TIME_SHIFT_MS, 'testing', sess)\n test_data = np.expand_dims(test_data, axis=1).astype(np.float32)\n\n # Initialize the interpreter\n interpreter = tf.lite.Interpreter(tflite_model_path)\n interpreter.allocate_tensors()\n\n input_details = interpreter.get_input_details()[0]\n output_details = interpreter.get_output_details()[0]\n\n # For quantized models, manually quantize the input data from float to integer\n if model_type == \"Quantized\":\n input_scale, input_zero_point = input_details[\"quantization\"]\n test_data = test_data / input_scale + input_zero_point\n test_data = test_data.astype(input_details[\"dtype\"])\n\n correct_predictions = 0\n for i in range(len(test_data)):\n interpreter.set_tensor(input_details[\"index\"], test_data[i])\n interpreter.invoke()\n output = interpreter.get_tensor(output_details[\"index\"])[0]\n top_prediction = output.argmax()\n correct_predictions += (top_prediction == test_labels[i])\n\n print('%s model accuracy is %f%% (Number of test samples=%d)' % (\n model_type, (correct_predictions * 100) / len(test_data), len(test_data)))\n\n# Compute float model accuracy\nrun_tflite_inference(FLOAT_MODEL_TFLITE)\n\n# Compute quantized model accuracy\nrun_tflite_inference(MODEL_TFLITE, model_type='Quantized')",
"Generate a TensorFlow Lite for MicroControllers Model\nConvert the TensorFlow Lite model into a C source file that can be loaded by TensorFlow Lite for Microcontrollers.",
"# Install xxd if it is not available\n!apt-get update && apt-get -qq install xxd\n# Convert to a C source file\n!xxd -i {MODEL_TFLITE} > {MODEL_TFLITE_MICRO}\n# Update variable names\nREPLACE_TEXT = MODEL_TFLITE.replace('/', '_').replace('.', '_')\n!sed -i 's/'{REPLACE_TEXT}'/g_model/g' {MODEL_TFLITE_MICRO}",
"Deploy to a Microcontroller\nFollow the instructions in the micro_speech README.md for TensorFlow Lite for MicroControllers to deploy this model on a specific microcontroller.\nReference Model: If you have not modified this notebook, you can follow the instructions as is, to deploy the model. Refer to the micro_speech/train/models directory to access the models generated in this notebook.\nNew Model: If you have generated a new model to identify different words: (i) Update kCategoryCount and kCategoryLabels in micro_speech/micro_features/micro_model_settings.h and (ii) Update the values assigned to the variables defined in micro_speech/micro_features/model.cc with values displayed after running the following cell.",
"# Print the C source file\n!cat {MODEL_TFLITE_MICRO}"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
pbutenee/ml-tutorial
|
release/1/anomaly_detection.ipynb
|
mit
|
[
"Anomaly Detection\nIn this part will learn how you to build an anomaly detection model yourself.\n1. Load Data\nFirst we will load the data using a pickle format.\nThe data we use contains the page views of one of our own websites and for convenience there is only 1 data point per hour.",
"import pickle\n\nwith open('data/past_data.pickle', 'rb') as file:\n past = pickle.load(file, encoding='latin1')\n \nwith open('data/all_data.pickle', 'rb') as file:\n all_data = pickle.load(file, encoding='latin1')\n\nprint(f'Past data shape = {past.shape}')\nprint(f'Full data shape = {all_data.shape}')",
"2. Plot past data\nTo plot the past data we will use matplotlib.pyplot. For convenience we import it as plt. \n% matplotlib inline makes sure you can see the output in the notebook. \n(Use % matplotlib notebook if you want to make it interactive. Don't forget to click the power button to finish the interaction and to be able to plot a new figure.)",
"import matplotlib.pyplot as plt\n%matplotlib inline\n\nplt.figure(figsize=(20,4)) # This creates a new figure with the dimensions of 20 by 4\nplt.plot(past) # This creates the actual plot\nplt.show() # This shows the plot",
"3. Find the minimum and maximum\nUse np.nanmax() and np.nanmin() to find the minmum and maximum while ignoring the NaNs.",
"import numpy as np\n\n##### Implement this part of the code #####\nraise NotImplementedError(\"Code not implemented, follow the instructions.\")\n# maximum = ?\n# minimum = ?\nprint(minimum, maximum)",
"And plot these together with the data using the plt.axhline() function.",
"plt.figure(figsize=(20,4))\nplt.plot(past)\nplt.axhline(maximum, color='r')\nplt.axhline(minimum, color='r')\nplt.show()",
"4. Testing the model on unseen data\nNow plot all the data instead of just the past data",
"plt.figure(figsize=(20,4))\nplt.plot(all_data, color='g')\nplt.plot(past, color='b')\nplt.axhline(maximum, color='r')\nplt.axhline(minimum, color='r')\nplt.show()",
"You can clearly see now that this model does not detect any anomalies. However, the last day of data clearly looks different compared to the other days.\nIn what follows we will build a better model for anomaly detection that is able to detect these 'shape shifts' as well.\n5. Building a model with seasonality\nTo do this we are going to take a step by step approach. Maybe it won't be clear at first why every step is necessary, but that will become clear throughout the process.\nFirst we are going to reshape the past data to a 2 dimensional array with 24 columns. This will give us 1 row for each day and 1 column for each hour. For this we are going to use the np.reshape() function. The newshape parameter is a tuple which in this case should be (-1, 24). If you use a -1 the reshape function will automatically compute that dimension. Pay attention to the order in which the numbers are repositioned (the default ordering should work fine here).",
"##### Implement this part of the code #####\nraise NotImplementedError(\"Code not implemented, follow the instructions.\")\n# reshaped_past = ?\n\nassert len(reshaped_past.shape) == 2\nassert reshaped_past.shape[1] == 24",
"Now we are going to compute the average over all days. For this we are going to use the np.mean() with the axis variable set to the first dimension (axis=0). Next we are going to plot this.",
"##### Implement this part of the code #####\nraise NotImplementedError(\"Code not implemented, follow the instructions.\")\n# average_past = ?\n\nassert average_past.shape == (24,)\n\nplt.plot(average_past)\nplt.show()",
"What you can see in the plot above is the average number of page views for each hour of the day.\nNow let's plot this together with the past data on 1 plot. Use a for loop and the np.concatenate() function to concatenate this average 6 times into the variable model.",
"model = []\nfor i in range(6):\n##### Implement this part of the code #####\nraise NotImplementedError(\"Code not implemented, follow the instructions.\")\n# model = np.concatenate( ? )\n\nplt.figure(figsize=(20,4)) \nplt.plot(model, color='k')\nplt.plot(past, color='b')\nplt.show()",
"In the next step we are going to compute the maximum (= positive) and minimum (= negative) deviations from the average to determine what kind of deviations are normal. (Just subtract the average/model from the past and take the min and the max of that)",
"##### Implement this part of the code #####\nraise NotImplementedError(\"Code not implemented, follow the instructions.\")\n# delta_max = ?\n# delta_min = ?\nprint(delta_min, delta_max)",
"Now let's plot this.",
"plt.figure(figsize=(20,4))\nplt.plot(model, color='k')\nplt.plot(past, color='b')\nplt.plot(model + delta_max, color='r')\nplt.plot(model + delta_min, color='r')\nplt.show()",
"Now let's test this on all data",
"model_all = np.concatenate((model, average_past))\n\nplt.figure(figsize=(20,4))\nplt.plot(all_data, color='g')\nplt.plot(model_all, color='k')\nplt.plot(past, color='b')\nplt.plot(model_all + delta_max, color='r')\nplt.plot(model_all + delta_min, color='r')\nplt.show()",
"Now you can clearly see where the anomaly is detected by this more advanced model. The code below can gives you the exact indices where an anomaly is detected. The functions uses are the following np.argwhere() and np.logical_or().",
"anomaly_timepoints = np.argwhere(np.logical_or(all_data < model_all + delta_min, all_data > model_all + delta_max))\n\nplt.figure(figsize=(20,4))\nplt.scatter(anomaly_timepoints, all_data[anomaly_timepoints], color='r', linewidth=8)\nplt.plot(all_data, color='g')\nplt.plot(model_all, color='k')\nplt.plot(past, color='b')\nplt.plot(model_all + delta_max, color='r')\nplt.plot(model_all + delta_min, color='r')\nplt.xlim(0, len(all_data))\nplt.show()\n\nprint(f'The anomaly occurs at the following timestamps: {anomaly_timepoints}')"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ankurankan/pgmpy_notebook
|
notebooks/8. Reading and Writing from pgmpy file formats.ipynb
|
mit
|
[
"readwrite module pgmpy\npgmpy is a python library for creation, manipulation and implementation of Probabilistic graph models. There are various standard file formats for representing PGM data. PGM data basically consists of graph, a distribution assoicated to each node and a few other attributes of a graph.\npgmpy has a functionality to read networks from and write networks to these standard file formats. Currently pgmpy supports 5 file formats ProbModelXML, PomDPX, XMLBIF, XMLBeliefNetwork and UAI file formats. Using these modules, models can be specified in a uniform file format and readily converted to bayesian or markov model objects. \nNow, Let's read a ProbModel XML File and get the corresponding model instance of the probmodel.",
"from pgmpy.readwrite import ProbModelXMLReader\n\nreader_string = ProbModelXMLReader('../files/example.pgmx')",
"Now to get the corresponding model instance we need get_model",
"model = reader_string.get_model()",
"Now we can query this model accoring to our requirements. It is an instance of BayesianModel or MarkovModel depending on the type of the model which is given.\nSuppose we want to know all the nodes in the given model, we can do:",
"print(model.nodes())",
"To get all the edges we can use model.edges method.",
"model.edges()",
"To get all the cpds of the given model we can use model.get_cpds and to get the corresponding values we can iterate over each cpd and call the corresponding get_cpd method.",
"cpds = model.get_cpds()\nfor cpd in cpds:\n print(cpd.get_cpd())",
"pgmpy not only allows us to read from the specific file format but also helps us to write the given model into the specific file format.\nLet's write a sample model into Probmodel XML file.\nFor that first define our data for the model.",
"import numpy as np\n\nedges_list = [('VisitToAsia', 'Tuberculosis'),\n ('LungCancer', 'TuberculosisOrCancer'),\n ('Smoker', 'LungCancer'),\n ('Smoker', 'Bronchitis'),\n ('Tuberculosis', 'TuberculosisOrCancer'),\n ('Bronchitis', 'Dyspnea'),\n ('TuberculosisOrCancer', 'Dyspnea'),\n ('TuberculosisOrCancer', 'X-ray')]\nnodes = {'Smoker': {'States': {'no': {}, 'yes': {}},\n 'role': 'chance',\n 'type': 'finiteStates',\n 'Coordinates': {'y': '52', 'x': '568'},\n 'AdditionalProperties': {'Title': 'S', 'Relevance': '7.0'}},\n 'Bronchitis': {'States': {'no': {}, 'yes': {}},\n 'role': 'chance',\n 'type': 'finiteStates',\n 'Coordinates': {'y': '181', 'x': '698'},\n 'AdditionalProperties': {'Title': 'B', 'Relevance': '7.0'}},\n 'VisitToAsia': {'States': {'no': {}, 'yes': {}},\n 'role': 'chance',\n 'type': 'finiteStates',\n 'Coordinates': {'y': '58', 'x': '290'},\n 'AdditionalProperties': {'Title': 'A', 'Relevance': '7.0'}},\n 'Tuberculosis': {'States': {'no': {}, 'yes': {}},\n 'role': 'chance',\n 'type': 'finiteStates',\n 'Coordinates': {'y': '150', 'x': '201'},\n 'AdditionalProperties': {'Title': 'T', 'Relevance': '7.0'}},\n 'X-ray': {'States': {'no': {}, 'yes': {}},\n 'role': 'chance',\n 'AdditionalProperties': {'Title': 'X', 'Relevance': '7.0'},\n 'Coordinates': {'y': '322', 'x': '252'},\n 'Comment': 'Indica si el test de rayos X ha sido positivo',\n 'type': 'finiteStates'},\n 'Dyspnea': {'States': {'no': {}, 'yes': {}},\n 'role': 'chance',\n 'type': 'finiteStates',\n 'Coordinates': {'y': '321', 'x': '533'},\n 'AdditionalProperties': {'Title': 'D', 'Relevance': '7.0'}},\n 'TuberculosisOrCancer': {'States': {'no': {}, 'yes': {}},\n 'role': 'chance',\n 'type': 'finiteStates',\n 'Coordinates': {'y': '238', 'x': '336'},\n 'AdditionalProperties': {'Title': 'E', 'Relevance': '7.0'}},\n 'LungCancer': {'States': {'no': {}, 'yes': {}},\n 'role': 'chance',\n 'type': 'finiteStates',\n 'Coordinates': {'y': '152', 'x': '421'},\n 'AdditionalProperties': {'Title': 'L', 'Relevance': '7.0'}}}\nedges = {'LungCancer': {'TuberculosisOrCancer': {'directed': 'true'}},\n 'Smoker': {'LungCancer': {'directed': 'true'},\n 'Bronchitis': {'directed': 'true'}},\n 'Dyspnea': {},\n 'X-ray': {},\n 'VisitToAsia': {'Tuberculosis': {'directed': 'true'}},\n 'TuberculosisOrCancer': {'X-ray': {'directed': 'true'},\n 'Dyspnea': {'directed': 'true'}},\n 'Bronchitis': {'Dyspnea': {'directed': 'true'}},\n 'Tuberculosis': {'TuberculosisOrCancer': {'directed': 'true'}}}\n\ncpds = [{'Values': np.array([[0.95, 0.05], [0.02, 0.98]]),\n 'Variables': {'X-ray': ['TuberculosisOrCancer']}},\n {'Values': np.array([[0.7, 0.3], [0.4, 0.6]]),\n 'Variables': {'Bronchitis': ['Smoker']}},\n {'Values': np.array([[0.9, 0.1, 0.3, 0.7], [0.2, 0.8, 0.1, 0.9]]),\n 'Variables': {'Dyspnea': ['TuberculosisOrCancer', 'Bronchitis']}},\n {'Values': np.array([[0.99], [0.01]]),\n 'Variables': {'VisitToAsia': []}},\n {'Values': np.array([[0.5], [0.5]]),\n 'Variables': {'Smoker': []}},\n {'Values': np.array([[0.99, 0.01], [0.9, 0.1]]),\n 'Variables': {'LungCancer': ['Smoker']}},\n {'Values': np.array([[0.99, 0.01], [0.95, 0.05]]),\n 'Variables': {'Tuberculosis': ['VisitToAsia']}},\n {'Values': np.array([[1, 0, 0, 1], [0, 1, 0, 1]]),\n 'Variables': {'TuberculosisOrCancer': ['LungCancer', 'Tuberculosis']}}]",
"Now let's create a BayesianModel for this data.",
"from pgmpy.models import BayesianModel\nfrom pgmpy.factors import TabularCPD\n\nmodel = BayesianModel(edges_list)\n\nfor node in nodes:\n model.node[node] = nodes[node]\nfor edge in edges:\n model.edge[edge] = edges[edge]\n\ntabular_cpds = []\nfor cpd in cpds:\n var = list(cpd['Variables'].keys())[0]\n evidence = cpd['Variables'][var]\n values = cpd['Values']\n states = len(nodes[var]['States'])\n evidence_card = [len(nodes[evidence_var]['States'])\n for evidence_var in evidence]\n tabular_cpds.append(\n TabularCPD(var, states, values, evidence, evidence_card))\n\nmodel.add_cpds(*tabular_cpds)\n\nfrom pgmpy.readwrite import ProbModelXMLWriter, get_probmodel_data",
"To get the data which we need to give to the ProbModelXMLWriter to get the corresponding fileformat we need to use the method get_probmodel_data. This method is only specific to ProbModelXML file, for other file formats we would directly pass the model to the given Writer Class.",
"model_data = get_probmodel_data(model)\nwriter = ProbModelXMLWriter(model_data=model_data)\nprint(writer)",
"To write the xml data into the file we can use the method write_file of the given Writer class.",
"writer.write_file('probmodelxml.pgmx')",
"General WorkFlow of the readwrite module\npgmpy.readwrite.[fileformat]Reader is base class for reading the given file format. Replace file format with the desired fileforamt from which you want to read the file. In this base class there are different methods defined to parse the given file. For example for XMLBelief Network various methods which are defined are as follows:",
"from pgmpy.readwrite.XMLBeliefNetwork import XBNReader\nreader = XBNReader('../files/xmlbelief.xml')",
"get_model: It returns an instance of the given model, for ex, BayesianModel in cases of XMLBelief format.",
"model = reader.get_model()\nprint(model.nodes())\nprint(model.edges())",
"pgmpy.readwrite.[fileformat]Writer is base class for writing the model into the given file format. It takes a model as an argument which can be an instance of BayesianModel, MarkovModel. Replace file fomat with the desired fileforamt from which you want to read the file. In this base class there are different methods defined to set the contents of the new file to be created from the given model. For example for XMLBelief Network various methods such as set_analysisnotebook, etc are defined which helps to set up the network data.",
"from pgmpy.models import BayesianModel\nfrom pgmpy.factors import TabularCPD\nimport numpy as np\nnodes = {'c': {'STATES': ['Present', 'Absent'],\n 'DESCRIPTION': '(c) Brain Tumor',\n 'YPOS': '11935',\n 'XPOS': '15250',\n 'TYPE': 'discrete'},\n 'a': {'STATES': ['Present', 'Absent'],\n 'DESCRIPTION': '(a) Metastatic Cancer',\n 'YPOS': '10465',\n 'XPOS': '13495',\n 'TYPE': 'discrete'},\n 'b': {'STATES': ['Present', 'Absent'],\n 'DESCRIPTION': '(b) Serum Calcium Increase',\n 'YPOS': '11965',\n 'XPOS': '11290',\n 'TYPE': 'discrete'},\n 'e': {'STATES': ['Present', 'Absent'],\n 'DESCRIPTION': '(e) Papilledema',\n 'YPOS': '13240',\n 'XPOS': '17305',\n 'TYPE': 'discrete'},\n 'd': {'STATES': ['Present', 'Absent'],\n 'DESCRIPTION': '(d) Coma',\n 'YPOS': '12985',\n 'XPOS': '13960',\n 'TYPE': 'discrete'}}\nmodel = BayesianModel([('b', 'd'), ('a', 'b'), ('a', 'c'), ('c', 'd'), ('c', 'e')])\ncpd_distribution = {'a': {'TYPE': 'discrete', 'DPIS': np.array([[0.2, 0.8]])},\n 'e': {'TYPE': 'discrete', 'DPIS': np.array([[0.8, 0.2],\n [0.6, 0.4]]), 'CONDSET': ['c'], 'CARDINALITY': [2]},\n 'b': {'TYPE': 'discrete', 'DPIS': np.array([[0.8, 0.2],\n [0.2, 0.8]]), 'CONDSET': ['a'], 'CARDINALITY': [2]},\n 'c': {'TYPE': 'discrete', 'DPIS': np.array([[0.2, 0.8],\n [0.05, 0.95]]), 'CONDSET': ['a'], 'CARDINALITY': [2]},\n 'd': {'TYPE': 'discrete', 'DPIS': np.array([[0.8, 0.2],\n [0.9, 0.1],\n [0.7, 0.3],\n [0.05, 0.95]]), 'CONDSET': ['b', 'c'], 'CARDINALITY': [2, 2]}}\n\ntabular_cpds = []\nfor var, values in cpd_distribution.items():\n evidence = values['CONDSET'] if 'CONDSET' in values else []\n cpd = values['DPIS']\n evidence_card = values['CARDINALITY'] if 'CARDINALITY' in values else []\n states = nodes[var]['STATES']\n cpd = TabularCPD(var, len(states), cpd,\n evidence=evidence,\n evidence_card=evidence_card)\n tabular_cpds.append(cpd)\nmodel.add_cpds(*tabular_cpds)\n\nfor var, properties in nodes.items():\n model.node[var] = properties\n\n\nfrom pgmpy.readwrite.XMLBeliefNetwork import XBNWriter\nwriter = XBNWriter(model = model)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
pedroeml/t1-fcg
|
CrowdDataAnalysis/analysis/plots/group_analysis_plots.ipynb
|
mit
|
[
"import matplotlib.pyplot as plt\nimport json\nimport pandas as pd",
"China - CN: CN-01\nThis notebook contains few data analysis through the China's CN-01 data from Cultural Crowds dataset¹\n¹ <sub>FAVARETTO, R.; DIHL, L. ; BARRETO, R. ; MUSSE, S. R. Using Group Behaviors To Detect Hofstede Cultural Dimensions. IEEE International Conference on Image Processing (ICIP), 2016.</sub>",
"d = None\nwith open('..\\..\\group_analysis.json') as f:\n d = json.load(f)",
"Number of groups for frame\nCollected data about the number of groups through time. The average of the number of groups was 15.7777, minimum of 14 and maximum of 19.",
"df_num_groups = pd.DataFrame(data={'Min. Num. of Groups': d['min_num_groups'], 'Avg. Num. of Groups': d['avg_num_groups'], 'Max. Num. of Groups': d['max_num_groups']})\ndf_num_groups\n\nplt.figure()\nax = df_num_groups.plot(title='Number of Groups Analysis')\nplt.xlabel('Frame nº')\nplt.ylabel('Number of Groups')\nplt.show()",
"Number of elements on each group for frame\nCollected data about the number persons for each group through time. The average of the number of people for group was 2.528361, minimum of 1 and maximum of 7.",
"df_group_elements = pd.DataFrame(data={'Min. Group Elements': d['min_group_elements'], 'Avg. Group Elements': d['avg_group_elements'], 'Max. Group Elements': d['max_group_elements']})\ndf_group_elements\n\nplt.figure()\nax = df_group_elements.plot(title='Number of Group Elements')\nplt.xlabel('Frame nº')\nplt.ylabel('Number of Group Elements')\nplt.show()"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mfouesneau/pyphot
|
examples/astropy_QuickStart.ipynb
|
mit
|
[
"pyphot - A tool for computing photometry from spectra\nSome examples are provided in this notebook\nFull documentation available at http://mfouesneau.github.io/docs/pyphot/",
"%matplotlib inline\n\nimport pylab as plt\nimport numpy as np\n\nimport sys\nsys.path.append('../')\nfrom pyphot import astropy as pyphot",
"Quick Start\nQuick start example to access the library and it's content",
"# get the internal default library of passbands filters\nlib = pyphot.get_library()\nprint(\"Library contains: \", len(lib), \" filters\")\n# find all filter names that relates to IRAC\n# and print some info\nf = lib.find('irac')\nfor name in f:\n lib[name].info(show_zeropoints=True)",
"Suppose one has a calibrated spectrum and wants to compute the vega magnitude throug the HST WFC3 F110W passband,",
"# convert to magnitudes\nimport numpy as np\n\n# We'll use Vega spectrum as example\nfrom pyphot.astropy import Vega\nvega = Vega()\nf = lib['HST_WFC3_F110W']\n# compute the integrated flux through the filter f\n# note that it work on many spectra at once\nfluxes = f.get_flux(vega.wavelength, vega.flux, axis=-1)\n# Note that fluxes is now with units of erg/s/cm2/AA\n# pyphot gives Vega in flam and can convert between flux density units. \nfluxes, vega.wavelength, vega.flux\n\n# convert to vega magnitudes\nmags = -2.5 * np.log10(fluxes.value) - f.Vega_zero_mag\nprint(\"Vega magnitude of Vega in {0:s} is : {1:f} mag\".format(f.name, mags))\nmags = -2.5 * np.log10(fluxes.value) - f.AB_zero_mag\nprint(\"AB magnitude of Vega in {0:s} is : {1:f} mag\".format(f.name, mags))\nmags = -2.5 * np.log10(fluxes.value) - f.ST_zero_mag\nprint(\"ST magnitude of Vega in {0:s} is : {1:f} mag\".format(f.name, mags))",
"Provided Filter library\nThis section shows the content of the provided library with respective properties of the passband filters. The code to generate the table is also provided in the documentation.",
"# define header and table format (as csv)\nhdr = (\"name\", \"detector type\", \"wavelength units\",\n \"central wavelength\", \"pivot wavelength\", \"effective wavelength\",\n \"Vega mag\", \"Vega flux\", \"Vega Jy\",\n \"AB mag\", \"AB flux\", \"AB Jy\",\n \"ST mag\", \"ST flux\", \"ST Jy\")\nfmt = \"{0:s},{1:s},{2:s},{3:.3f},{4:.3f},{5:.3f},{6:.5f},{7:.5g},{8:.5g},{9:.5f},{10:.5g},{11:.5g},{12:.5f},{13:.5g},{14:.5g}\\n\"\n\nl = pyphot.get_library()\n\nwith open('table.csv', 'w') as output:\n output.write(','.join(hdr) + '\\n')\n\n for k in sorted(l.content):\n fk = l[k]\n rec = (fk.name, fk.dtype, fk.wavelength_unit,\n fk.cl.value, fk.lpivot.value, fk.leff.value,\n fk.Vega_zero_mag, fk.Vega_zero_flux.value, fk.Vega_zero_Jy.value,\n fk.AB_zero_mag, fk.AB_zero_flux.value, fk.AB_zero_Jy.value,\n fk.ST_zero_mag, fk.ST_zero_flux.value, fk.ST_zero_Jy.value)\n output.write(fmt.format(*rec)) ",
"Table description\n\nname: the identification name of the filter in the library.\ndetector type: energy or photon counter.\nwavelength units: filter defined with these units and all wavelength properties: central wavelength, pivot wavelength, and effective wavelength.\n<X> mag: magnitude in Vega, AB or ST system (w.r.t. the detector type)\n<X> flux: flux in $erg/s/cm^2/AA $ in the X system\n<X> Jy: flux in $Jy$ (Jansky) in the X system",
"import pandas as pd\ndf = pd.read_csv('./table.csv')\ndf.head()",
"Extention to Lick indices\nWe also include functions to compute lick indices and provide a series of commonly use ones.\nThe Lick system of spectral line indices is one of the most commonly used methods of determining ages and metallicities of unresolved (integrated light) stellar populations.",
"# convert to magnitudes\nimport numpy as np\nfrom pyphot.astropy import UnitLickLibrary as LickLibrary\nfrom pyphot.astropy import Vega\n\nvega = Vega()\n# using the internal collection of indices\nlib = LickLibrary()\nf = lib['CN_1']\n# work on many spectra at once\nindex = f.get(vega.wavelength, vega.flux, axis=-1)\nprint(\"The index of Vega in {0:s} is {1:f} {2:s}\".format(f.name, index, f.index_unit))",
"Similarly, we show the content of the provided library with respective properties of the passband filters. \nThe table below is also part of the documentation.",
"# define header and table format (as csv)\nhdr = (\"name\", \"wavelength units\", \"index units\", \"min\", \"max\" \"min blue\", \"max blue\", \"min red\", \"max red\")\nfmt = \"{0:s},{1:s},{2:s},{3:.3f},{4:.3f},{5:.3f},{6:.5f},{7:.3f},{8:.3f}\\n\"\n\nl = pyphot.UnitLickLibrary()\n\nwith open('licks_table.csv', 'w') as output:\n output.write(','.join(hdr) + '\\n')\n\n for k in sorted(l.content):\n fk = l[k]\n # wavelength have units\n band = fk.band.value\n blue = fk.blue.value\n red = fk.red.value\n rec = (fk.name, fk.wavelength_unit, fk.index_unit, band[0], band[1],\n blue[0], blue[1], red[0], red[1])\n output.write(fmt.format(*rec))\n\nimport pandas as pd\ndf = pd.read_csv('./licks_table.csv', index_col=False)\ndf.head()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
MartyWeissman/Python-for-number-theory
|
PwNT Notebook 2.ipynb
|
gpl-3.0
|
[
"Part 2: Functions in Python\nA distinguishing property of programming languages is that the programmer can create their own functions. Creating a function is like teaching the computer a new trick. Typically a function will receive some data as input, will perform an algorithm involving the input data, and will output data when the algorithm terminates. \nIn this part, we explore Python functions. We also explore control statements, which allow a program to behave in different ways for different inputs. We also introduce the while loop, a loop whose repetition can be more carefully controlled than a for loop. As an application of these techniques, we implement the Euclidean algorithm as a Python function in a few ways, to effectively find the GCD of integers and solve linear Diophantine equations. This complements Chapter 1 of An Illustrated Theory of Numbers.\nTable of Contents\n\nGetting started with Python functions\nControl statements\nWhile loops and implementation of the Eucidean algorithm\nSolving the linear Diophantine equation\n\n<a id='functions'></a>\nGetting started with Python functions\nA function in Python is a construction which takes input data, performs some actions, and outputs data. It is best to start with a few examples and break down the code. Here is a function square. Run the code as usual by pressing shift-Enter when the code block is selected.",
"def square(x):\n answer = x * x\n return answer",
"When you run the code block, you probably didn't see anything happen. But you have effectively taught your computer a new trick, increasing the vocabulary of commands it understands through the Python interpreter. Now you can use the square command as you wish.",
"square(12)\n\nsquare(1.5)",
"Let's break down the syntax of the function declaration, line by line.\npython\ndef square(x):\n answer = x * x\n return answer\nThe first line begins with the Python reserved word def. (So don't use def as a variable name!). The word def stands for \"define\" and it defines a function called square. After the function name square comes parentheses (x) containing the argument x. The arguments or parameters of a function refer to the input data. Even if your function has no arguments, you need parentheses. The argument x is used to name whatever number is input into the square function. \nAt the end of the function declaration line is a colon : and the following two lines are indented. As in the case of for loops, the colon and indentation are signals of scope. Everything on the indented lines is considered the scope of the function and is carried out when the function is used later.\nThe second line answer = x * x is the beginning of the scope of the function. It declares a variable answer and sets the value to be x * x. So if the argument x is 12, then answer will be set to 144. The variable answer, being declared within the scope of the function, will not be accessible outside the scope of the function. It is called a local variable.\nThe last line return answer contains the Python reserved word return, which terminates the function and outputs the value of the variable answer. So when you apply the function with the command square(1.5), the number 1.5 is the argument x, and answer is 2.25, and that number 2.25 becomes the output.\nA function does not have to return a value. Some functions might just provide some information. Here is a function which displays the result of division with remainder as a sentence with addition and multiplication.",
"def display_divmod(a,b):\n quotient = a // b # Integer division\n remainder = a % b #\n print \"%d = %d (%d) + %d\"%(a,quotient,b,remainder)\n\ndisplay_divmod(23,5)",
"Notice that this function has no return line. The function terminates automatically at the end of its scope.\nThe function also uses a slick Python string manipulation called string substitution. This has changed between Python 2.x and 3.x, and as usual we are following Python 2.x syntax. \nString substitution allows you to place variable numbers in a string by putting placeholders like %d within the string, and then placing the list of variables after the string (as a tuple). Here are some examples to get used to the syntax, since we will use this often in this lesson.",
"print \"My favorite number is %d\"%(17) # The % symbol substitutes 17 for %d.\n\nprint \"%d + %d = %d\"%(13,12,13+12)",
"The placeholder %d is meant to hold the place of an integer. For floating point numbers, use %f instead. This will impose a bit of rounding before printing. One can modify the placeholder to change the amount of digits displayed. Here is the official reference for placeholder syntax and string substitutions. We will only use the most basic features.",
"print \"pi is approximately %f\"%(3.14159265) # This uses a default precision for display.\n\nprint \"pi is approximately %.3f\"%(3.14159265) # This sets 3 digits after the decimal point.",
"Exercises\n\n\nWhat are the signals of scope in Python?\n\n\nHow is the symbol % used in Python? Can you think of three ways?\n\n\nWrite a function called area_circle, which takes one argument radius. The function should return the area of the circle, as a floating point number. Then add one line to the function, using string substitutions, so that it additionally prints a helpful sentence of the form \"The area of a circle of radius 1.0 is 3.14159.\" (depending on the radius and the area it computes).\n\n\nCan you think of a reason you might want to have a function with no arguments?",
"# Use this space to work on the Exercises. \n# Remember that you can add a new cell above/below by clicking to the left of a cell,\n# (the cell will have a blue bar at the left) and then pressing \"a\" or \"b\" on the keyboard.\n",
"<a id='controls'></a>\nControl statements\nIt is important for a computer program to behave differently under different circumstances. The simplest control statements, if and its relative else, can be used to tell Python to carry out different actions depending on the value of a boolean variable. The following function exhibits the syntax.",
"def is_even(n):\n if n%2 == 0:\n print \"%d is even.\"%(n)\n return True\n else:\n print \"%d is odd.\"%(n)\n return False\n\nis_even(17)\n\nis_even(1000)",
"The broad syntax of the function should be familiar. We have created a function called is_even with one argument called n. The body of the function uses the control statement if n%2 == 0:. Recall that n%2 gives the remainder after dividing n by 2. Thus n%2 is 0 or 1, depending on whether n is even or odd. Therefore the boolean n%2 == 0 is True if n is even, and False if n is odd.\nThe next two lines (the first print and return statements) are within the scope of the if <boolean>: statement, as indicated by the colon and the indentation. The if <boolean>: statement tells the Python interpreter to perform the statements within the scope if the boolean is True, and to ignore the statements within the scope if the boolean is False.\nPutting it together, we can analyze the code.\npython\nif n%2 == 0:\n print \"%d is even.\"%(n)\n return True\nIf n is even, then the Python interpreter will print a sentence of the form n is even. Then the interpreter will return (output) the value True and the function will terminate. If n is odd, the Python interpreter will ignore the two lines of scope.\nOften we don't just want Python to do nothing when a condition is not satisfied. In the case above, we would rather Python tell us that the number is odd. The else: control statement tells Python what to do in case the if <boolean>: control statement receives a False boolean. We analyze the code\npython\n else:\n print \"%d is odd.\"%(n)\n return False\nThe print and return commands are within the scope of the else: control statement. So when the if statement receives a false signal (the number n is odd), the program prints a sentence of the form n is odd. and then returns the value False and terminates the function.\nThe function is_even is a verbose, or \"talkative\" sort of function. Such a function is sometimes useful in an interactive setting, where the programmer wants to understand everything that's going on. But if the function had to be called a million times, the screen would fill with printed sentences! In practice, an efficient and silent function is_even might look like the following.",
"def is_even(n):\n return (n%2 == 0)\n\nis_even(17)",
"A for loop and an if control statement, used together, allow us to carry out a brute force search. We can search for factors in order to check whether a number is prime. Or we can look for solutions to an equation until we find one.\nOne thing to note: the function below begins with a block of text between a triple-quote (three single-quotes when typing). That text is called a docstring and it is meant to document what the function does. Writing clear docstrings becomes more important as you write longer programs, collaborate with other programmers, and when you want to return months or years later to use a program again. There are different style conventions for docstrings; for example, here are Google's docstring conventions. We take a less formal approach.",
"def is_prime(n):\n '''\n Checks whether the argument n is a prime number.\n Uses a brute force search for factors between 1 and n.\n '''\n for j in range(2,n): # the list of numbers 2,3,...,n-1.\n if n%j == 0: # is n divisible by j?\n print \"%d is a factor of %d.\"%(j,n)\n return False\n return True",
"An important note: the return keyword terminates the function. So as soon as a factor is found, the function terminates and outputs False. If no factor is found, then the function execution survives past the loop, and the line return True is executed to terminate the function.",
"is_prime(91)\n\nis_prime(101)",
"Try the is_prime function on bigger numbers -- try numbers with 4 digits, 5 digits, 6 digits. Where does it start to slow down? Do you get any errors when the numbers are large? Make sure to save your work first, just in case this crashes your computer!",
"# Experiment with is_prime here.\n",
"There are two limiting factors, which we study in more detail in the next lesson. These are time and space (your computer's memory space). As the loop of is_prime goes on and on, it might take your computer a long time! If each step of the loop takes only a nanosecond (1 billionth of a second), the loop would take about a second when executing is_prime(1000000001). If you tried is_prime on a much larger number, like is_prime(2**101 - 1), the loop would take longer than the lifetime of the Earth.\nBut if you try such a big number, you'll hit a more immediate problem with space. Namely, the range(2,n) attempts to store the entire list of numbers [2,3,4,...,n-1] in the memory of your computer. Your computer has some (4 or 8 or 16, perhaps) gigabytes of memory (RAM). A gigabyte is a billion bytes, and a byte is enough memory to store a number between 0 and 255. (More detail about this later!). So a gigabyte will not even hold a billion numbers, and your computer will probably run out of memory if you attempt to try is_prime(n) when n is close to a billion.\nWe will address both of these problems in the next lesson, to some extent. The memory (space) problem is easier, and we will be able to adapt the is_prime(n) function to deal with n in the trillions. To go beyond this, to work with n of size $10^{100}$ or even $10^{1000}$, we will find completely different methods in a later lesson.\nExercises\n\n\nCreate a function my_abs(x) which outputs the absolute value of the argument x. (Note that Python already has a built-in abs(x) function). \n\n\nModify the is_prime function so that it prints a message Number too big and returns None if the input argument is bigger than one million. (Note that None is a Python reserved word. You can use the one-line statement return None.) \n\n\nWrite a Python function thrarity which takes an argument n, and outputs the string threeven if n is a multiple of three, or throdd is n is one more than a multiple of three, or thrugly if n is one less than a multiple of three. Example: thrarity(31) should output throdd and thrarity(44) should output thrugly. Hint: study the if/elif syntax at the official Python tutorial\n\n\nWrite a Python function sum_of_squares(n) which finds and prints a pair of natural numbers $x$, $y$, such that $x^2 + y^2 = n$. The function should use a brute force search.",
"# Use this space for your solutions to the questions.\n",
"<a id='while'></a>\nWhile loops and implementation of the Eucidean algorithm\nWe almost have all the tools we need to implement the Euclidean algorithm. The last tool we will need is the while loop. We have seen the for loop already, for iterating over a range of numbers. The Euclidean algorithm involves repetition, but there is no way to know in advance how many steps it will take. The while loop allows us to repeat a process as long as a boolean value (sometimes called a flag) is True. The following countdown example illustrates the structure of a while loop.",
"def countdown(n):\n current_value = n\n while current_value > 0: # The condition (current_value > 0) is checked before every instance of the scope!\n print current_value\n current_value = current_value - 1\n\ncountdown(10)",
"The while loop syntax begins with while <boolean>: and the following indented lines comprise the scope of the loop. If the boolean is True, then the scope of the loop is executed. If the boolean is True again afterwards, then the scope of the loop is executed again. And again and again and so on.\nThis can be a dangerous process! For example, what would happen if you made a little typo and the last line of the while loop read current_value = current_value + 1? The numbers would increase and increase... and the boolean current_value > 0 would always be True. Therefore the loop would never end. Bigger and bigger numbers would scroll down your computer screen. \nYou might panic under such a circumstance, and maybe turn your computer off to stop the loop. Here is some advice for when your computer gets stuck in such a neverending loop:\n\nBack up your work often. When you're programming, make sure everything else is saved just in case.\nSave your programming work (use \"Save and checkpoint\" under the \"File\" menu) often, especially before running a cell with a loop for the first time.\nIf you do get stuck in a neverending loop, click on \"Kernel... Interrupt\". This will often unstick the loop and allow you to pick up where you left off. \nOn a Mac, you might try a \"Force Quit\" of the Python process, using the Activity Manager.\n\nNow, if you're feeling brave, save your work, change the while loop so that it never ends, and try to recover where you left off. But be aware that this could cause your computer to freeze or behave erratically, crashing your browser, etc. Don't panic... it won't break your computer permanently.\nThe neverending loop causes two problems here. One is with your computer processor, which will be essentially spinning its wheels. This is called busy waiting, and your computer will essentially be busy waiting forever. The other problem is that your loop is printing more and more lines of text into the notebook. This could easily crash your web browser, which is trying to store and display zillions of lines of numbers. So be ready for problems! \nThe Euclidean algorithm with a while loop\nThe Euclidean Algorithm is a process of repeated division with remainder. Beginning with two integers a (dividend) and b (divisor), one computes quotient q and remainder q to express a = qb + r. Then b becomes the dividend and r becomes the divisor, and one repeats. The repetition continues, and the last nonzero remainder is the greatest common divisor of a and b. It is the subject of Chapter 1 of An Illustrated Theory of Numbers.\nWe implement the Euclidean algorithm in a few variations. The first will be a verbose version, to show the user what happens at every step. We use a while loop to take care of the repetition.",
"def Euclidean_algorithm(a,b):\n dividend = a\n divisor = b\n while divisor != 0: # Recall that != means \"is not equal to\".\n quotient = dividend // divisor\n remainder = dividend % divisor\n print \"%d = %d (%d) + %d\"%(dividend, quotient, divisor, remainder)\n dividend = divisor \n divisor = remainder\n\nEuclidean_algorithm(133, 58)\n\nEuclidean_algorithm(1312331323, 58123123)",
"This is excellent if we want to know every step of the Euclidean algorithm. If we just want to know the GCD of two numbers, we can be less verbose. We carefully return the last nonzero remainder after the while loop is concluded. This last nonzero remainder becomes the divisor when the remainder becomes zero, and then it would become the dividend in the next (unprinted) line. That is why we return the (absolute value) of the dividend after the loop is concluded. You might insert a line at the end of the loop, like print dividend, divisor, remainder to help you track the variables.",
"def GCD(a,b):\n dividend = a # The first dividend is a.\n divisor = b # The first divisor is b.\n while divisor != 0: # Recall that != means \"not equal to\".\n quotient = dividend // divisor\n remainder = dividend % divisor\n dividend = divisor \n divisor = remainder\n return abs(dividend) # abs() is used, since we like our GCDs to be positive.",
"Note that the return dividend statement occurs after the scope of the while loop. So as soon as the divisor variable equals zero, the funtion GCD returns the dividend variable and terminates.",
"GCD(111,27)\n\nGCD(111,-27)",
"We can refine our code in a few ways. First, note that the quotient variable is never used! It was nice in the verbose version of the Euclidean algorithm, but plays no role in finding the GCD. Our refined code reads\npython\ndef GCD(a,b):\n dividend = a \n divisor = b \n while divisor != 0: # Recall that != means \"not equal to\".\n remainder = dividend % divisor\n dividend = divisor \n divisor = remainder\n return abs(dividend)\nNow there are two slick Python tricks we can use to shorten the code. The first is called multiple assignment. It is possible to set the values of two variables in a single line of code, with a syntax like below.",
"x,y = 2,3 # Sets x to 2 and y to 3.",
"This is particular useful for self-referential assignments, because as for ordinary assignment, the right side is evaluated first and then bound to the variables on the left side. For example, after the line above, try the line below. Use print statements to see what the values of the variables are afterwards!",
"x,y = y,x # Guess what this does!\n\nprint \"x =\",x\nprint \"y =\",y",
"Now we can use multiple assignment to turn three lines of code into one line of code. For the remainder variable is only used temporarily before its value is given to the divisor variable. Using multiple assignment, the three lines\npython\n remainder = dividend % divisor\n dividend = divisor \n divisor = remainder\ncan be written in one line,\npython\n dividend, divisor = divisor, dividend % divisor # Evaluations on the right occur before any assignments!\nOur newly shortened GCD function looks like this.\npython\ndef GCD(a,b):\n dividend = a \n divisor = b \n while divisor != 0: # Recall that != means \"not equal to\".\n dividend, divisor = divisor, dividend % divisor\n return abs(dividend)\nThe next trick involves the while loop. The usual syntax has the form while <boolean>:. But if while is followed by a numerical type, e.g. while <int>:, then the scope of the while loop will execute as long as the number is nonzero! Therefore, the line\npython\nwhile divisor != 0:\ncan be replaced by the shorter line\npython\nwhile divisor:\nThis is truly a trick. It probably won't speed anything up, and it does not make your program easier to read for beginners. So use it if you prefer communicating with experienced Python programmers! Here is the whole function again.\npython\ndef GCD(a,b):\n dividend = a \n divisor = b \n while divisor: # Executes the scope if divisor is nonzero.\n dividend, divisor = divisor, dividend % divisor\n return abs(dividend)\nThe next optimization is a bit more dangerous for beginners, but it works here. In general, it can be dangerous to operate directly on the arguments to a function. But in this setting, it is safe, and makes no real difference to the Python interpreter. Instead of creating new variables called dividend and divisor, one can manipulate a and b directly within the function. If you do this, the GCD function can be shortened to the following.",
"def GCD(a,b):\n while b: # Recall that != means \"not equal to\".\n a, b = b, a % b\n return abs(a)\n\n# Try it out. Try it on some big numbers and see how quick it runs!\n",
"This code is essentially optimal, if one wishes to execute the Euclidean algorithm to find the GCD of two integers. It almost matches the GCD code in a standard Python library. It might be slightly faster than our original code -- but there is a tradeoff here between execution speed and readability of code. In this and the following lessons, we often optimize enough for everyday purposes, but not so much that readability is lost.\nExercises\n\n\nModify the is_prime function by using a while loop instead of for j in range(2,n):. Hint: the function should contain the lines j = 2 and while j < n: and j = j + 1 in various places. Why might this be an improvement from the for loop?\n\n\nModify the Euclidean_algorithm function to create a function which returns the number of steps that the Euclidean algorithm requires, i.e., the number of divisions-with-remainder. \n\n\nCreate a function which carries out division with minimal remainder. In other words, given integers a,b, the function expresses a = q(b) + r, where r is a positive or negative integer of magnitude bounded by b/2. Use such a function to create a new Euclidean algorithm function which uses minimal remainder.\n\n\nWhat GCD(a,b) function do you think strikes the best balance between efficiency and readability?",
"# Use this space to work on the exercises.\n",
"<a id='solving'></a>\nSolving the linear Diophantine equation\nIn Chapter 1 of An Illustrated Theory of Numbers, we not only used the Euclidean algorithm to find the GCD of two integers, but also to solve the linear Diophantine equation $ax + by = c$. On paper, this required us to perform the Euclidean algorithm, then \"work backwards\" to carefully solve a series of linear equations. This process is repetetive and error-prone... perfect for a computer. \nSo here we develop a function solve_LDE(a,b,c) which will describe all integer solutions $x,y$ to the equation $ax + by = c$.\nThe idea of the algorithm is to keep track of \"hops\" and \"skips\" throughout the Euclidean algorithm. A general step in the Euclidean algorithm looks like u = q(v) + r. The remainder can then be expressed by the formula r = u - q(v). If u and v can be built from hops and skips, then r can be built from hops and skips. How many? Just tally the hops and skips to find: \n<p style=\"text-align: center;\">`r_hops = u_hops - q (v_hops)` and `r_skips = u_skips - q (v_skips)`.</p>\n\nThis sort of tallying is what makes the algorithm below work. The function below does not introduce any new programming concepts, but it assembles many ideas together.",
"def hop_and_skip(a,b):\n '''\n Takes two integer arguments a,b, and prints a sentence of the form\n GCD(a,b) = x(a) + y(b). The method is the Euclidean algorithm,\n tallying hops (units of a) and skips (units of b) along the way.\n '''\n u = a # We use u instead of dividend.\n v = b # We use v instead of divisor.\n u_hops, u_skips = 1,0 # u is built from one hop (a) and no skips, for now.\n v_hops, v_skips = 0,1 # v is built from no hops and one skip (b), for now.\n while v != 0: # We could just write \"while v:\"\n q = u // v # q stands for quotient.\n r = u % v # r stands for remainder. So u = q(v) + r.\n \n r_hops = u_hops - q * v_hops # Tally hops\n r_skips = u_skips - q * v_skips # Tally skips\n \n u,v = v,r # The new dividend,divisor is the old divisor,remainder.\n u_hops, v_hops = v_hops, r_hops # The new u_hops, v_hops is the old v_hops, r_hops\n u_skips, v_skips = v_skips, r_skips # The new u_skips, v_skips is the old v_skips, r_skips\n \n print \"%d = %d(%d) + %d(%d)\"%(u,u_hops,a,u_skips,b)\n\nhop_and_skip(102,45)",
"Try out the hop_and_skip code on some integers of your choice. Does it behave correctly? Check the results, using Python as a calculator. Does it run quickly for large integers?",
"# Experimentation space here.\n",
"To conclude this lesson, we put everything together to create a long(ish) function to solve linear Diophantine equations. We want this function to be smart enough to respond when an equation has no solutions, and to describe all solutions when they exist. \nThe first part of the function solve_LDE is the same as the hop and skip function above. But rather than expressing $GCD(a,b)$ as $ax + by$, the function uses the GCD to determine the existence and the general form of a solution to $ax + by = c$. The formula for the general form comes from An Illustrated Theory of Numbers, Chapter 1, Corollary 1.25.",
"def solve_LDE(a,b,c):\n '''\n Describes all of the solutions to the linear Diophantine equation\n ax + by = c. There are either no solutions or infinitely many solutions.\n Prints a description of the solution set, and returns None if there are no solutions\n or returns a single solution if one exists.\n ''' \n u = a # We use u instead of dividend.\n v = b # We use v instead of divisor.\n u_hops, u_skips = 1,0 # u is built from one hop (a) and no skips.\n v_hops, v_skips = 0,1 # v is built from no hops and one skip (b).\n while v != 0: # We could just write while v:\n q = u // v # q stands for quotient.\n r = u % v # r stands for remainder. So u = q(v) + r.\n \n r_hops = u_hops - q * v_hops # Tally hops\n r_skips = u_skips - q * v_skips # Tally skips\n \n u,v = v,r # The new dividend,divisor is the old divisor,remainder.\n u_hops, v_hops = v_hops, r_hops # The new u_hops, v_hops is the old v_hops, r_hops\n u_skips, v_skips = v_skips, r_skips # The new u_skips, v_skips is the old v_skips, r_skips\n \n g = u # The variable g now describes the GCD of a and b.\n \n if c%g == 0: # When GCD(a,b) divides c...\n d = c/g\n x = d * u_hops\n y = d * u_skips # Now ax + by = c is a specific solution!\n print \"%d x + %d y = %d if and only if \"%(a, b, c)\n print \"x = %d + %d n and y = %d - %d n, for some integer n.\"%(x,b/g,y,-a/g)\n return x,y\n else: # When GCD(a,b) does not divide c...\n print \"There are no solutions to %d x + %d y = %d,\"%(a,b,c)\n print \"because GCD(%d, %d) = %d, which does not divide %d.\"%(a,b,g,c)\n return None\n\nsolve_LDE(102,45,3)\n\nsolve_LDE(72,100,17)",
"Exercises\n\n\nSolve problems 4-7 of Chapter 1 of An Illustrated Theory of Numbers using the solve_LDE function.\n\n\nWrite an LCM function, using the previous GCD function and the GCD-LCM product formula, Theorem 1.23 of An Illustrated Theory of Numbers.\n\n\nSometimes it is important to find not the integer solutions, but the positive integer solutions to a Diophantine equation. Modify the solve_LDE function to create a solve_LDE_positive(a,b,c) function. The output of the function should be all pairs of positive integers $x$, $y$, such that $ax + by = c$ (if any pairs exist), and a helpful message if no pairs exist (and a return None should be used in this case).",
"# Use this space to work on the exercises.\n"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
zomansud/coursera
|
ml-regression/week-2/week-2-multiple-regression-assignment-2-blank.ipynb
|
mit
|
[
"Regression Week 2: Multiple Regression (gradient descent)\nIn the first notebook we explored multiple regression using graphlab create. Now we will use graphlab along with numpy to solve for the regression weights with gradient descent.\nIn this notebook we will cover estimating multiple regression weights via gradient descent. You will:\n* Add a constant column of 1's to a graphlab SFrame to account for the intercept\n* Convert an SFrame into a Numpy array\n* Write a predict_output() function using Numpy\n* Write a numpy function to compute the derivative of the regression weights with respect to a single feature\n* Write gradient descent function to compute the regression weights given an initial weight vector, step size and tolerance.\n* Use the gradient descent function to estimate regression weights for multiple features\nFire up graphlab create\nMake sure you have the latest version of graphlab (>= 1.7)",
"import graphlab",
"Load in house sales data\nDataset is from house sales in King County, the region where the city of Seattle, WA is located.",
"sales = graphlab.SFrame('kc_house_data.gl/')\n\nsales.head()",
"If we want to do any \"feature engineering\" like creating new features or adjusting existing ones we should do this directly using the SFrames as seen in the other Week 2 notebook. For this notebook, however, we will work with the existing features.\nConvert to Numpy Array\nAlthough SFrames offer a number of benefits to users (especially when using Big Data and built-in graphlab functions) in order to understand the details of the implementation of algorithms it's important to work with a library that allows for direct (and optimized) matrix operations. Numpy is a Python solution to work with matrices (or any multi-dimensional \"array\").\nRecall that the predicted value given the weights and the features is just the dot product between the feature and weight vector. Similarly, if we put all of the features row-by-row in a matrix then the predicted value for all the observations can be computed by right multiplying the \"feature matrix\" by the \"weight vector\". \nFirst we need to take the SFrame of our data and convert it into a 2D numpy array (also called a matrix). To do this we use graphlab's built in .to_dataframe() which converts the SFrame into a Pandas (another python library) dataframe. We can then use Panda's .as_matrix() to convert the dataframe into a numpy matrix.",
"import numpy as np # note this allows us to refer to numpy as np instead ",
"Now we will write a function that will accept an SFrame, a list of feature names (e.g. ['sqft_living', 'bedrooms']) and an target feature e.g. ('price') and will return two things:\n* A numpy matrix whose columns are the desired features plus a constant column (this is how we create an 'intercept')\n* A numpy array containing the values of the output\nWith this in mind, complete the following function (where there's an empty line you should write a line of code that does what the comment above indicates)\nPlease note you will need GraphLab Create version at least 1.7.1 in order for .to_numpy() to work!",
"def get_numpy_data(data_sframe, features, output):\n \n data_sframe['constant'] = 1 # this is how you add a constant column to an SFrame\n \n # add the column 'constant' to the front of the features list so that we can extract it along with the others:\n features = ['constant'] + features # this is how you combine two lists\n print features\n \n # select the columns of data_SFrame given by the features list into the SFrame features_sframe (now including constant):\n features_sframe = data_sframe[features]\n \n # the following line will convert the features_SFrame into a numpy matrix:\n feature_matrix = features_sframe.to_numpy()\n \n # assign the column of data_sframe associated with the output to the SArray output_sarray\n output_sarray = data_sframe[output]\n\n # the following will convert the SArray into a numpy array by first converting it to a list\n output_array = output_sarray.to_numpy()\n \n return(feature_matrix, output_array)",
"For testing let's use the 'sqft_living' feature and a constant as our features and price as our output:",
"(example_features, example_output) = get_numpy_data(sales, ['sqft_living'], 'price') # the [] around 'sqft_living' makes it a list\nprint example_features[0,:] # this accesses the first row of the data the ':' indicates 'all columns'\nprint example_output[0] # and the corresponding output",
"Predicting output given regression weights\nSuppose we had the weights [1.0, 1.0] and the features [1.0, 1180.0] and we wanted to compute the predicted output 1.0*1.0 + 1.0*1180.0 = 1181.0 this is the dot product between these two arrays. If they're numpy arrayws we can use np.dot() to compute this:",
"my_weights = np.array([1., 1.]) # the example weights\nmy_features = example_features[0,] # we'll use the first data point\npredicted_value = np.dot(my_features, my_weights)\nprint predicted_value",
"np.dot() also works when dealing with a matrix and a vector. Recall that the predictions from all the observations is just the RIGHT (as in weights on the right) dot product between the features matrix and the weights vector. With this in mind finish the following predict_output function to compute the predictions for an entire matrix of features given the matrix and the weights:",
"def predict_output(feature_matrix, weights):\n # assume feature_matrix is a numpy matrix containing the features as columns and weights is a corresponding numpy array\n # create the predictions vector by using np.dot()\n predictions = np.dot(feature_matrix, weights)\n \n return(predictions)",
"If you want to test your code run the following cell:",
"test_predictions = predict_output(example_features, my_weights)\nprint test_predictions[0] # should be 1181.0\nprint test_predictions[1] # should be 2571.0",
"Computing the Derivative\nWe are now going to move to computing the derivative of the regression cost function. Recall that the cost function is the sum over the data points of the squared difference between an observed output and a predicted output.\nSince the derivative of a sum is the sum of the derivatives we can compute the derivative for a single data point and then sum over data points. We can write the squared difference between the observed output and predicted output for a single point as follows:\n(w[0]*[CONSTANT] + w[1]*[feature_1] + ... + w[i] *[feature_i] + ... + w[k]*[feature_k] - output)^2\nWhere we have k features and a constant. So the derivative with respect to weight w[i] by the chain rule is:\n2*(w[0]*[CONSTANT] + w[1]*[feature_1] + ... + w[i] *[feature_i] + ... + w[k]*[feature_k] - output)* [feature_i]\nThe term inside the paranethesis is just the error (difference between prediction and output). So we can re-write this as:\n2*error*[feature_i]\nThat is, the derivative for the weight for feature i is the sum (over data points) of 2 times the product of the error and the feature itself. In the case of the constant then this is just twice the sum of the errors!\nRecall that twice the sum of the product of two vectors is just twice the dot product of the two vectors. Therefore the derivative for the weight for feature_i is just two times the dot product between the values of feature_i and the current errors. \nWith this in mind complete the following derivative function which computes the derivative of the weight given the value of the feature (over all data points) and the errors (over all data points).",
"def feature_derivative(errors, feature):\n # Assume that errors and feature are both numpy arrays of the same length (number of data points)\n # compute twice the dot product of these vectors as 'derivative' and return the value\n derivative = 2 * np.dot(errors, feature)\n \n return(derivative)",
"To test your feature derivartive run the following:",
"(example_features, example_output) = get_numpy_data(sales, ['sqft_living'], 'price') \nmy_weights = np.array([0., 0.]) # this makes all the predictions 0\ntest_predictions = predict_output(example_features, my_weights) \n# just like SFrames 2 numpy arrays can be elementwise subtracted with '-': \nerrors = test_predictions - example_output # prediction errors in this case is just the -example_output\nfeature = example_features[:,0] # let's compute the derivative with respect to 'constant', the \":\" indicates \"all rows\"\nderivative = feature_derivative(errors, feature)\nprint derivative\nprint -np.sum(example_output)*2 # should be the same as derivative",
"Gradient Descent\nNow we will write a function that performs a gradient descent. The basic premise is simple. Given a starting point we update the current weights by moving in the negative gradient direction. Recall that the gradient is the direction of increase and therefore the negative gradient is the direction of decrease and we're trying to minimize a cost function. \nThe amount by which we move in the negative gradient direction is called the 'step size'. We stop when we are 'sufficiently close' to the optimum. We define this by requiring that the magnitude (length) of the gradient vector to be smaller than a fixed 'tolerance'.\nWith this in mind, complete the following gradient descent function below using your derivative function above. For each step in the gradient descent we update the weight for each feature befofe computing our stopping criteria",
"from math import sqrt # recall that the magnitude/length of a vector [g[0], g[1], g[2]] is sqrt(g[0]^2 + g[1]^2 + g[2]^2)\n\ndef regression_gradient_descent(feature_matrix, output, initial_weights, step_size, tolerance):\n converged = False \n weights = np.array(initial_weights) # make sure it's a numpy array\n while not converged:\n # compute the predictions based on feature_matrix and weights using your predict_output() function\n predictions = predict_output(feature_matrix, weights)\n \n # compute the errors as predictions - output\n errors = predictions - output\n \n gradient_sum_squares = 0 # initialize the gradient sum of squares\n \n # while we haven't reached the tolerance yet, update each feature's weight\n for i in range(len(weights)): # loop over each weight\n # Recall that feature_matrix[:, i] is the feature column associated with weights[i]\n # compute the derivative for weight[i]:\n derivative_i = feature_derivative(errors, feature_matrix[:, i])\n \n # add the squared value of the derivative to the gradient sum of squares (for assessing convergence)\n gradient_sum_squares = derivative_i * derivative_i\n \n # subtract the step size times the derivative from the current weight\n weights[i] = weights[i] - (step_size * derivative_i)\n \n # compute the square-root of the gradient sum of squares to get the gradient matnigude:\n gradient_magnitude = sqrt(gradient_sum_squares)\n \n if gradient_magnitude < tolerance:\n converged = True\n \n return(weights)",
"A few things to note before we run the gradient descent. Since the gradient is a sum over all the data points and involves a product of an error and a feature the gradient itself will be very large since the features are large (squarefeet) and the output is large (prices). So while you might expect \"tolerance\" to be small, small is only relative to the size of the features. \nFor similar reasons the step size will be much smaller than you might expect but this is because the gradient has such large values.\nRunning the Gradient Descent as Simple Regression\nFirst let's split the data into training and test data.",
"train_data,test_data = sales.random_split(.8,seed=0)",
"Although the gradient descent is designed for multiple regression since the constant is now a feature we can use the gradient descent function to estimat the parameters in the simple regression on squarefeet. The folowing cell sets up the feature_matrix, output, initial weights and step size for the first model:",
"# let's test out the gradient descent\nsimple_features = ['sqft_living']\nmy_output = 'price'\n(simple_feature_matrix, output) = get_numpy_data(train_data, simple_features, my_output)\ninitial_weights = np.array([-47000., 1.])\nstep_size = 7e-12\ntolerance = 2.5e7",
"Next run your gradient descent with the above parameters.",
"simple_features_weights = regression_gradient_descent(\n simple_feature_matrix, \n output, \n initial_weights, \n step_size, \n tolerance\n)\nprint simple_features_weights\nprint round(simple_features_weights[1], 1)",
"How do your weights compare to those achieved in week 1 (don't expect them to be exactly the same)? \nQuiz Question: What is the value of the weight for sqft_living -- the second element of ‘simple_weights’ (rounded to 1 decimal place)?\nUse your newly estimated weights and your predict_output() function to compute the predictions on all the TEST data (you will need to create a numpy array of the test feature_matrix and test output first:",
"(test_simple_feature_matrix, test_output) = get_numpy_data(test_data, simple_features, my_output)",
"Now compute your predictions using test_simple_feature_matrix and your weights from above.",
"predictions_test_simple_feature = predict_output(test_simple_feature_matrix, simple_features_weights)",
"Quiz Question: What is the predicted price for the 1st house in the TEST data set for model 1 (round to nearest dollar)?",
"print round(predictions_test_simple_feature[0])",
"Now that you have the predictions on test data, compute the RSS on the test data set. Save this value for comparison later. Recall that RSS is the sum of the squared errors (difference between prediction and output).",
"test_simple_feature_matrix\n\ndiff = predictions_test_simple_feature - test_data['price']\ndiff_squared = diff * diff\nrss_test_simple_feature = diff_squared.sum()\nprint rss_test_simple_feature",
"Running a multiple regression\nNow we will use more than one actual feature. Use the following code to produce the weights for a second model with the following parameters:",
"model_features = ['sqft_living', 'sqft_living15'] # sqft_living15 is the average squarefeet for the nearest 15 neighbors. \nmy_output = 'price'\n(feature_matrix, output) = get_numpy_data(train_data, model_features, my_output)\ninitial_weights = np.array([-100000., 1., 1.])\nstep_size = 4e-12\ntolerance = 1e9",
"Use the above parameters to estimate the model weights. Record these values for your quiz.",
"model_weights = regression_gradient_descent(\n feature_matrix, \n output, \n initial_weights, \n step_size, \n tolerance\n)\nprint model_weights",
"Use your newly estimated weights and the predict_output function to compute the predictions on the TEST data. Don't forget to create a numpy array for these features from the test set first!",
"(test_feature_matrix, test_output) = get_numpy_data(test_data, model_features, my_output)\npredictions_test_feature = predict_output(test_feature_matrix, model_weights)",
"Quiz Question: What is the predicted price for the 1st house in the TEST data set for model 2 (round to nearest dollar)?",
"print round(predictions_test_feature[0])",
"What is the actual price for the 1st house in the test data set?",
"print test_data[0][my_output]",
"Quiz Question: Which estimate was closer to the true price for the 1st house on the Test data set, model 1 or model 2?\nNow use your predictions and the output to compute the RSS for model 2 on TEST data.",
"diff = predictions_test_feature - test_data['price']\ndiff_squared = diff * diff\nrss_test_model_feature = diff_squared.sum()\nprint rss_test_model_feature",
"Quiz Question: Which model (1 or 2) has lowest RSS on all of the TEST data?",
"if rss_test_model_feature < rss_test_simple_feature:\n print \"Model 2\"\nelse:\n print \"Model 1\""
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mne-tools/mne-tools.github.io
|
0.19/_downloads/2be4fb4bf7f4e0825af6c222c396d97a/plot_compute_csd.ipynb
|
bsd-3-clause
|
[
"%matplotlib inline",
"==================================================\nCompute a cross-spectral density (CSD) matrix\n==================================================\nA cross-spectral density (CSD) matrix is similar to a covariance matrix, but in\nthe time-frequency domain. It is the first step towards computing\nsensor-to-sensor coherence or a DICS beamformer.\nThis script demonstrates the three methods that MNE-Python provides to compute\nthe CSD:\n\nUsing short-term Fourier transform: :func:mne.time_frequency.csd_fourier\nUsing a multitaper approach: :func:mne.time_frequency.csd_multitaper\nUsing Morlet wavelets: :func:mne.time_frequency.csd_morlet",
"# Author: Marijn van Vliet <w.m.vanvliet@gmail.com>\n# License: BSD (3-clause)\nfrom matplotlib import pyplot as plt\n\nimport mne\nfrom mne.datasets import sample\nfrom mne.time_frequency import csd_fourier, csd_multitaper, csd_morlet\n\nprint(__doc__)",
"In the following example, the computation of the CSD matrices can be\nperformed using multiple cores. Set n_jobs to a value >1 to select the\nnumber of cores to use.",
"n_jobs = 1",
"Loading the sample dataset.",
"data_path = sample.data_path()\nfname_raw = data_path + '/MEG/sample/sample_audvis_raw.fif'\nfname_event = data_path + '/MEG/sample/sample_audvis_raw-eve.fif'\nraw = mne.io.read_raw_fif(fname_raw)\nevents = mne.read_events(fname_event)",
"By default, CSD matrices are computed using all MEG/EEG channels. When\ninterpreting a CSD matrix with mixed sensor types, be aware that the\nmeasurement units, and thus the scalings, differ across sensors. In this\nexample, for speed and clarity, we select a single channel type:\ngradiometers.",
"picks = mne.pick_types(raw.info, meg='grad')\n\n# Make some epochs, based on events with trigger code 1\nepochs = mne.Epochs(raw, events, event_id=1, tmin=-0.2, tmax=1,\n picks=picks, baseline=(None, 0),\n reject=dict(grad=4000e-13), preload=True)",
"Computing CSD matrices using short-term Fourier transform and (adaptive)\nmultitapers is straightforward:",
"csd_fft = csd_fourier(epochs, fmin=15, fmax=20, n_jobs=n_jobs)\ncsd_mt = csd_multitaper(epochs, fmin=15, fmax=20, adaptive=True, n_jobs=n_jobs)",
"When computing the CSD with Morlet wavelets, you specify the exact\nfrequencies at which to compute it. For each frequency, a corresponding\nwavelet will be constructed and convolved with the signal, resulting in a\ntime-frequency decomposition.\nThe CSD is constructed by computing the correlation between the\ntime-frequency representations between all sensor-to-sensor pairs. The\ntime-frequency decomposition originally has the same sampling rate as the\nsignal, in our case ~600Hz. This means the decomposition is over-specified in\ntime and we may not need to use all samples during our CSD computation, just\nenough to get a reliable correlation statistic. By specifying decim=10,\nwe use every 10th sample, which will greatly speed up the computation and\nwill have a minimal effect on the CSD.",
"frequencies = [16, 17, 18, 19, 20]\ncsd_wav = csd_morlet(epochs, frequencies, decim=10, n_jobs=n_jobs)",
"The resulting :class:mne.time_frequency.CrossSpectralDensity objects have a\nplotting function we can use to compare the results of the different methods.\nWe're plotting the mean CSD across frequencies.",
"csd_fft.mean().plot()\nplt.suptitle('short-term Fourier transform')\n\ncsd_mt.mean().plot()\nplt.suptitle('adaptive multitapers')\n\ncsd_wav.mean().plot()\nplt.suptitle('Morlet wavelet transform')"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
tensorflow/docs-l10n
|
site/ko/lattice/tutorials/canned_estimators.ipynb
|
apache-2.0
|
[
"Copyright 2020 The TensorFlow Authors.",
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"TF Lattice Canned Estimator\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td><a target=\"_blank\" href=\"https://www.tensorflow.org/lattice/tutorials/canned_estimators\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\">TensorFlow.org에서 보기</a></td>\n <td><a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/lattice/tutorials/canned_estimators.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\">Google Colab에서 실행하기</a></td>\n <td><a target=\"_blank\" href=\"https://github.com/tensorflow/docs-l10n/blob/master/site/ko/lattice/tutorials/canned_estimators.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\">GitHub에서소스 보기</a></td>\n <td><a href=\"https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/lattice/tutorials/canned_estimators.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\">노트북 다운로드하기</a></td>\n</table>\n\n개요\n준비된 estimator는 일반적인 사용 사례를 위해 TFL 모델을 훈련하는 빠르고 쉬운 방법입니다. 이 가이드에서는 TFL canned estimator를 만드는 데 필요한 단계를 설명합니다.\n설정\nTF Lattice 패키지 설치하기",
"#@test {\"skip\": true}\n!pip install tensorflow-lattice",
"필수 패키지 가져오기",
"import tensorflow as tf\n\nimport copy\nimport logging\nimport numpy as np\nimport pandas as pd\nimport sys\nimport tensorflow_lattice as tfl\nfrom tensorflow import feature_column as fc\nlogging.disable(sys.maxsize)",
"UCI Statlog(Heart) 데이터세트 다운로드하기",
"csv_file = tf.keras.utils.get_file(\n 'heart.csv', 'http://storage.googleapis.com/download.tensorflow.org/data/heart.csv')\ndf = pd.read_csv(csv_file)\ntarget = df.pop('target')\ntrain_size = int(len(df) * 0.8)\ntrain_x = df[:train_size]\ntrain_y = target[:train_size]\ntest_x = df[train_size:]\ntest_y = target[train_size:]\ndf.head()",
"이 가이드에서 훈련에 사용되는 기본값 설정하기",
"LEARNING_RATE = 0.01\nBATCH_SIZE = 128\nNUM_EPOCHS = 500\nPREFITTING_NUM_EPOCHS = 10",
"특성 열\n다른 TF estimator와 마찬가지로 데이터는 일반적으로 input_fn을 통해 estimator로 전달되어야 하며 FeatureColumns를 사용하여 구문 분석됩니다.",
"# Feature columns.\n# - age\n# - sex\n# - cp chest pain type (4 values)\n# - trestbps resting blood pressure\n# - chol serum cholestoral in mg/dl\n# - fbs fasting blood sugar > 120 mg/dl\n# - restecg resting electrocardiographic results (values 0,1,2)\n# - thalach maximum heart rate achieved\n# - exang exercise induced angina\n# - oldpeak ST depression induced by exercise relative to rest\n# - slope the slope of the peak exercise ST segment\n# - ca number of major vessels (0-3) colored by flourosopy\n# - thal 3 = normal; 6 = fixed defect; 7 = reversable defect\nfeature_columns = [\n fc.numeric_column('age', default_value=-1),\n fc.categorical_column_with_vocabulary_list('sex', [0, 1]),\n fc.numeric_column('cp'),\n fc.numeric_column('trestbps', default_value=-1),\n fc.numeric_column('chol'),\n fc.categorical_column_with_vocabulary_list('fbs', [0, 1]),\n fc.categorical_column_with_vocabulary_list('restecg', [0, 1, 2]),\n fc.numeric_column('thalach'),\n fc.categorical_column_with_vocabulary_list('exang', [0, 1]),\n fc.numeric_column('oldpeak'),\n fc.categorical_column_with_vocabulary_list('slope', [0, 1, 2]),\n fc.numeric_column('ca'),\n fc.categorical_column_with_vocabulary_list(\n 'thal', ['normal', 'fixed', 'reversible']),\n]",
"준비된 TFL estimator는 특성 열의 유형을 사용하여 사용할 보정 레이어 유형을 결정합니다. 숫자 특성 열에는 tfl.layers.PWLCalibration를, 범주형 특성 열에는 tfl.layers.CategoricalCalibration 레이어가 사용됩니다.\n범주형 특성 열은 임베딩 특성 열로 래핑되지 않고 estimator에 직접 공급됩니다.\ninput_fn 만들기\n다른 estimator와 마찬가지로 input_fn을 사용하여 훈련 및 평가를 위해 모델에 데이터를 공급할 수 있습니다. TFL estimator는 특성의 분위수를 자동으로 계산하고 이를 PWL 보정 레이어의 입력 키포인트로 사용할 수 있습니다. 이를 위해서는 훈련 input_fn과 유사하지만 단일 epoch 또는 데이터의 하위 샘플이 있는 feature_analysis_input_fn를 전달해야 합니다.",
"train_input_fn = tf.compat.v1.estimator.inputs.pandas_input_fn(\n x=train_x,\n y=train_y,\n shuffle=False,\n batch_size=BATCH_SIZE,\n num_epochs=NUM_EPOCHS,\n num_threads=1)\n\n# feature_analysis_input_fn is used to collect statistics about the input.\nfeature_analysis_input_fn = tf.compat.v1.estimator.inputs.pandas_input_fn(\n x=train_x,\n y=train_y,\n shuffle=False,\n batch_size=BATCH_SIZE,\n # Note that we only need one pass over the data.\n num_epochs=1,\n num_threads=1)\n\ntest_input_fn = tf.compat.v1.estimator.inputs.pandas_input_fn(\n x=test_x,\n y=test_y,\n shuffle=False,\n batch_size=BATCH_SIZE,\n num_epochs=1,\n num_threads=1)\n\n# Serving input fn is used to create saved models.\nserving_input_fn = (\n tf.estimator.export.build_parsing_serving_input_receiver_fn(\n feature_spec=fc.make_parse_example_spec(feature_columns)))",
"특성 구성\n특성 보정 및 특성별 구성은 tfl.configs.FeatureConfig를 사용하여 설정됩니다. 특성 구성에는 단조 제약 조건, 특성별 정규화(tfl.configs.RegularizerConfig 참조) 및 격자 모델에 대한 격자 크기가 포함됩니다.\n입력 특성에 대한 구성이 정의되지 않은 경우 tfl.config.FeatureConfig의 기본 구성이 사용됩니다.",
"# Feature configs are used to specify how each feature is calibrated and used.\nfeature_configs = [\n tfl.configs.FeatureConfig(\n name='age',\n lattice_size=3,\n # By default, input keypoints of pwl are quantiles of the feature.\n pwl_calibration_num_keypoints=5,\n monotonicity='increasing',\n pwl_calibration_clip_max=100,\n # Per feature regularization.\n regularizer_configs=[\n tfl.configs.RegularizerConfig(name='calib_wrinkle', l2=0.1),\n ],\n ),\n tfl.configs.FeatureConfig(\n name='cp',\n pwl_calibration_num_keypoints=4,\n # Keypoints can be uniformly spaced.\n pwl_calibration_input_keypoints='uniform',\n monotonicity='increasing',\n ),\n tfl.configs.FeatureConfig(\n name='chol',\n # Explicit input keypoint initialization.\n pwl_calibration_input_keypoints=[126.0, 210.0, 247.0, 286.0, 564.0],\n monotonicity='increasing',\n # Calibration can be forced to span the full output range by clamping.\n pwl_calibration_clamp_min=True,\n pwl_calibration_clamp_max=True,\n # Per feature regularization.\n regularizer_configs=[\n tfl.configs.RegularizerConfig(name='calib_hessian', l2=1e-4),\n ],\n ),\n tfl.configs.FeatureConfig(\n name='fbs',\n # Partial monotonicity: output(0) <= output(1)\n monotonicity=[(0, 1)],\n ),\n tfl.configs.FeatureConfig(\n name='trestbps',\n pwl_calibration_num_keypoints=5,\n monotonicity='decreasing',\n ),\n tfl.configs.FeatureConfig(\n name='thalach',\n pwl_calibration_num_keypoints=5,\n monotonicity='decreasing',\n ),\n tfl.configs.FeatureConfig(\n name='restecg',\n # Partial monotonicity: output(0) <= output(1), output(0) <= output(2)\n monotonicity=[(0, 1), (0, 2)],\n ),\n tfl.configs.FeatureConfig(\n name='exang',\n # Partial monotonicity: output(0) <= output(1)\n monotonicity=[(0, 1)],\n ),\n tfl.configs.FeatureConfig(\n name='oldpeak',\n pwl_calibration_num_keypoints=5,\n monotonicity='increasing',\n ),\n tfl.configs.FeatureConfig(\n name='slope',\n # Partial monotonicity: output(0) <= output(1), output(1) <= output(2)\n monotonicity=[(0, 1), (1, 2)],\n ),\n tfl.configs.FeatureConfig(\n name='ca',\n pwl_calibration_num_keypoints=4,\n monotonicity='increasing',\n ),\n tfl.configs.FeatureConfig(\n name='thal',\n # Partial monotonicity:\n # output(normal) <= output(fixed)\n # output(normal) <= output(reversible) \n monotonicity=[('normal', 'fixed'), ('normal', 'reversible')],\n ),\n]",
"보정된 선형 모델\n준비된 TFL estimator를 구성하려면 tfl.configs에서 모델 구성을 갖추세요. 보정된 선형 모델은 tfl.configs.CalibratedLinearConfig를 사용하여 구성됩니다. 입력 특성에 부분 선형 및 범주형 보정을 적용한 다음 선형 조합 및 선택적 출력 부분 선형 보정을 적용합니다. 출력 보정을 사용하거나 출력 경계가 지정된 경우 선형 레이어는 보정된 입력에 가중치 평균을 적용합니다.\n이 예제에서는 처음 5개 특성에 대해 보정된 선형 모델을 만듭니다. tfl.visualization을 사용하여 보정 플롯으로 모델 그래프를 플롯합니다.",
"# Model config defines the model structure for the estimator.\nmodel_config = tfl.configs.CalibratedLinearConfig(\n feature_configs=feature_configs,\n use_bias=True,\n output_calibration=True,\n regularizer_configs=[\n # Regularizer for the output calibrator.\n tfl.configs.RegularizerConfig(name='output_calib_hessian', l2=1e-4),\n ])\n# A CannedClassifier is constructed from the given model config.\nestimator = tfl.estimators.CannedClassifier(\n feature_columns=feature_columns[:5],\n model_config=model_config,\n feature_analysis_input_fn=feature_analysis_input_fn,\n optimizer=tf.keras.optimizers.Adam(LEARNING_RATE),\n config=tf.estimator.RunConfig(tf_random_seed=42))\nestimator.train(input_fn=train_input_fn)\nresults = estimator.evaluate(input_fn=test_input_fn)\nprint('Calibrated linear test AUC: {}'.format(results['auc']))\nsaved_model_path = estimator.export_saved_model(estimator.model_dir,\n serving_input_fn)\nmodel_graph = tfl.estimators.get_model_graph(saved_model_path)\ntfl.visualization.draw_model_graph(model_graph)",
"보정된 격자 모델\n보정된 격자 모델은 tfl.configs.CalibratedLatticeConfig를 사용하여 구성됩니다. 보정된 격자 모델은 입력 특성에 구간별 선형 및 범주형 보정을 적용한 다음 격자 모델 및 선택적 출력 구간별 선형 보정을 적용합니다.\n이 예제에서는 처음 5개의 특성에 대해 보정된 격자 모델을 만듭니다.",
"# This is calibrated lattice model: Inputs are calibrated, then combined\n# non-linearly using a lattice layer.\nmodel_config = tfl.configs.CalibratedLatticeConfig(\n feature_configs=feature_configs,\n regularizer_configs=[\n # Torsion regularizer applied to the lattice to make it more linear.\n tfl.configs.RegularizerConfig(name='torsion', l2=1e-4),\n # Globally defined calibration regularizer is applied to all features.\n tfl.configs.RegularizerConfig(name='calib_hessian', l2=1e-4),\n ])\n# A CannedClassifier is constructed from the given model config.\nestimator = tfl.estimators.CannedClassifier(\n feature_columns=feature_columns[:5],\n model_config=model_config,\n feature_analysis_input_fn=feature_analysis_input_fn,\n optimizer=tf.keras.optimizers.Adam(LEARNING_RATE),\n config=tf.estimator.RunConfig(tf_random_seed=42))\nestimator.train(input_fn=train_input_fn)\nresults = estimator.evaluate(input_fn=test_input_fn)\nprint('Calibrated lattice test AUC: {}'.format(results['auc']))\nsaved_model_path = estimator.export_saved_model(estimator.model_dir,\n serving_input_fn)\nmodel_graph = tfl.estimators.get_model_graph(saved_model_path)\ntfl.visualization.draw_model_graph(model_graph)",
"보정된 격자 앙상블\n특성 수가 많으면 앙상블 모델을 사용할 수 있습니다. 이 모델은 특성의 하위 집합에 대해 여러 개의 작은 격자를 만들고, 하나의 거대한 격자를 만드는 대신 출력을 평균화합니다. 앙상블 격자 모델은 tfl.configs.CalibratedLatticeEnsembleConfig를 사용하여 구성됩니다. 보정된 격자 앙상블 모델은 입력 특성에 구간별 선형 및 범주형 보정을 적용한 다음 격자 모델 앙상블과 선택적 출력 구간별 선형 보정을 적용합니다.\n무작위 격자 앙상블\n다음 모델 구성은 각 격자에 대해 무작위의 특성 하위 집합을 사용합니다.",
"# This is random lattice ensemble model with separate calibration:\n# model output is the average output of separately calibrated lattices.\nmodel_config = tfl.configs.CalibratedLatticeEnsembleConfig(\n feature_configs=feature_configs,\n num_lattices=5,\n lattice_rank=3)\n# A CannedClassifier is constructed from the given model config.\nestimator = tfl.estimators.CannedClassifier(\n feature_columns=feature_columns,\n model_config=model_config,\n feature_analysis_input_fn=feature_analysis_input_fn,\n optimizer=tf.keras.optimizers.Adam(LEARNING_RATE),\n config=tf.estimator.RunConfig(tf_random_seed=42))\nestimator.train(input_fn=train_input_fn)\nresults = estimator.evaluate(input_fn=test_input_fn)\nprint('Random ensemble test AUC: {}'.format(results['auc']))\nsaved_model_path = estimator.export_saved_model(estimator.model_dir,\n serving_input_fn)\nmodel_graph = tfl.estimators.get_model_graph(saved_model_path)\ntfl.visualization.draw_model_graph(model_graph, calibrator_dpi=15)",
"RTL 레이어 무작위 격자 앙상블\n다음 모델 구성은 각 격자에 대해 무작위의 특성 하위 집합을 사용하는 tfl.layers.RTL 레이어를 사용합니다. tfl.layers.RTL은 단조 제약 조건만 지원하며 모든 특성에 대해 동일한 격자 크기를 가져야 하고 특성별 정규화가 없어야 합니다. tfl.layers.RTL 레이어를 사용하면 별도의 tfl.layers.Lattice 인스턴스를 사용하는 것보다 훨씬 더 큰 앙상블로 확장할 수 있습니다.",
"# Make sure our feature configs have the same lattice size, no per-feature\n# regularization, and only monotonicity constraints.\nrtl_layer_feature_configs = copy.deepcopy(feature_configs)\nfor feature_config in rtl_layer_feature_configs:\n feature_config.lattice_size = 2\n feature_config.unimodality = 'none'\n feature_config.reflects_trust_in = None\n feature_config.dominates = None\n feature_config.regularizer_configs = None\n# This is RTL layer ensemble model with separate calibration:\n# model output is the average output of separately calibrated lattices.\nmodel_config = tfl.configs.CalibratedLatticeEnsembleConfig(\n lattices='rtl_layer',\n feature_configs=rtl_layer_feature_configs,\n num_lattices=5,\n lattice_rank=3)\n# A CannedClassifier is constructed from the given model config.\nestimator = tfl.estimators.CannedClassifier(\n feature_columns=feature_columns,\n model_config=model_config,\n feature_analysis_input_fn=feature_analysis_input_fn,\n optimizer=tf.keras.optimizers.Adam(LEARNING_RATE),\n config=tf.estimator.RunConfig(tf_random_seed=42))\nestimator.train(input_fn=train_input_fn)\nresults = estimator.evaluate(input_fn=test_input_fn)\nprint('Random ensemble test AUC: {}'.format(results['auc']))\nsaved_model_path = estimator.export_saved_model(estimator.model_dir,\n serving_input_fn)\nmodel_graph = tfl.estimators.get_model_graph(saved_model_path)\ntfl.visualization.draw_model_graph(model_graph, calibrator_dpi=15)",
"Crystals 격자 앙상블\nTFL은 또한 Crystals라고 하는 휴리스틱 특성 배열 알고리즘을 제공합니다. Crystals 알고리즘은 먼저 쌍별 특성 상호 작용을 예측하는 사전 적합 모델을 훈련합니다. 그런 다음 비 선형 상호 작용이 더 많은 특성이 동일한 격자에 있도록 최종 앙상블을 정렬합니다.\nCrystals 모델의 경우 위에서 설명한 대로 사전 적합 모델을 훈련하는 데 사용되는 prefitting_input_fn도 제공해야 합니다. 사전 적합 모델은 완전하게 훈련될 필요가 없기에 몇 번의 epoch면 충분합니다.",
"prefitting_input_fn = tf.compat.v1.estimator.inputs.pandas_input_fn(\n x=train_x,\n y=train_y,\n shuffle=False,\n batch_size=BATCH_SIZE,\n num_epochs=PREFITTING_NUM_EPOCHS,\n num_threads=1)",
"그런 다음 모델 구성에서 lattice='crystals' 를 설정하여 Crystal 모델을 만들 수 있습니다.",
"# This is Crystals ensemble model with separate calibration: model output is\n# the average output of separately calibrated lattices.\nmodel_config = tfl.configs.CalibratedLatticeEnsembleConfig(\n feature_configs=feature_configs,\n lattices='crystals',\n num_lattices=5,\n lattice_rank=3)\n# A CannedClassifier is constructed from the given model config.\nestimator = tfl.estimators.CannedClassifier(\n feature_columns=feature_columns,\n model_config=model_config,\n feature_analysis_input_fn=feature_analysis_input_fn,\n # prefitting_input_fn is required to train the prefitting model.\n prefitting_input_fn=prefitting_input_fn,\n optimizer=tf.keras.optimizers.Adam(LEARNING_RATE),\n prefitting_optimizer=tf.keras.optimizers.Adam(LEARNING_RATE),\n config=tf.estimator.RunConfig(tf_random_seed=42))\nestimator.train(input_fn=train_input_fn)\nresults = estimator.evaluate(input_fn=test_input_fn)\nprint('Crystals ensemble test AUC: {}'.format(results['auc']))\nsaved_model_path = estimator.export_saved_model(estimator.model_dir,\n serving_input_fn)\nmodel_graph = tfl.estimators.get_model_graph(saved_model_path)\ntfl.visualization.draw_model_graph(model_graph, calibrator_dpi=15)",
"tfl.visualization 모듈을 사용하여 더 자세한 정보로 특성 calibrator를 플롯할 수 있습니다.",
"_ = tfl.visualization.plot_feature_calibrator(model_graph, \"age\")\n_ = tfl.visualization.plot_feature_calibrator(model_graph, \"restecg\")"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
Kaggle/learntools
|
notebooks/feature_engineering_new/raw/ex5.ipynb
|
apache-2.0
|
[
"Introduction\nIn this exercise, you'll work through several applications of PCA to the Ames dataset.\nRun this cell to set everything up!",
"# Setup feedback system\nfrom learntools.core import binder\nbinder.bind(globals())\nfrom learntools.feature_engineering_new.ex5 import *\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns\nfrom sklearn.decomposition import PCA\nfrom sklearn.feature_selection import mutual_info_regression\nfrom sklearn.model_selection import cross_val_score\nfrom xgboost import XGBRegressor\n\n# Set Matplotlib defaults\nplt.style.use(\"seaborn-whitegrid\")\nplt.rc(\"figure\", autolayout=True)\nplt.rc(\n \"axes\",\n labelweight=\"bold\",\n labelsize=\"large\",\n titleweight=\"bold\",\n titlesize=14,\n titlepad=10,\n)\n\n\ndef apply_pca(X, standardize=True):\n # Standardize\n if standardize:\n X = (X - X.mean(axis=0)) / X.std(axis=0)\n # Create principal components\n pca = PCA()\n X_pca = pca.fit_transform(X)\n # Convert to dataframe\n component_names = [f\"PC{i+1}\" for i in range(X_pca.shape[1])]\n X_pca = pd.DataFrame(X_pca, columns=component_names)\n # Create loadings\n loadings = pd.DataFrame(\n pca.components_.T, # transpose the matrix of loadings\n columns=component_names, # so the columns are the principal components\n index=X.columns, # and the rows are the original features\n )\n return pca, X_pca, loadings\n\n\ndef plot_variance(pca, width=8, dpi=100):\n # Create figure\n fig, axs = plt.subplots(1, 2)\n n = pca.n_components_\n grid = np.arange(1, n + 1)\n # Explained variance\n evr = pca.explained_variance_ratio_\n axs[0].bar(grid, evr)\n axs[0].set(\n xlabel=\"Component\", title=\"% Explained Variance\", ylim=(0.0, 1.0)\n )\n # Cumulative Variance\n cv = np.cumsum(evr)\n axs[1].plot(np.r_[0, grid], np.r_[0, cv], \"o-\")\n axs[1].set(\n xlabel=\"Component\", title=\"% Cumulative Variance\", ylim=(0.0, 1.0)\n )\n # Set up figure\n fig.set(figwidth=8, dpi=100)\n return axs\n\n\ndef make_mi_scores(X, y):\n X = X.copy()\n for colname in X.select_dtypes([\"object\", \"category\"]):\n X[colname], _ = X[colname].factorize()\n # All discrete features should now have integer dtypes\n discrete_features = [pd.api.types.is_integer_dtype(t) for t in X.dtypes]\n mi_scores = mutual_info_regression(X, y, discrete_features=discrete_features, random_state=0)\n mi_scores = pd.Series(mi_scores, name=\"MI Scores\", index=X.columns)\n mi_scores = mi_scores.sort_values(ascending=False)\n return mi_scores\n\n\ndef score_dataset(X, y, model=XGBRegressor()):\n # Label encoding for categoricals\n for colname in X.select_dtypes([\"category\", \"object\"]):\n X[colname], _ = X[colname].factorize()\n # Metric for Housing competition is RMSLE (Root Mean Squared Log Error)\n score = cross_val_score(\n model, X, y, cv=5, scoring=\"neg_mean_squared_log_error\",\n )\n score = -1 * score.mean()\n score = np.sqrt(score)\n return score\n\n\ndf = pd.read_csv(\"../input/fe-course-data/ames.csv\")",
"Let's choose a few features that are highly correlated with our target, SalePrice.",
"features = [\n \"GarageArea\",\n \"YearRemodAdd\",\n \"TotalBsmtSF\",\n \"GrLivArea\",\n]\n\nprint(\"Correlation with SalePrice:\\n\")\nprint(df[features].corrwith(df.SalePrice))",
"We'll rely on PCA to untangle the correlational structure of these features and suggest relationships that might be usefully modeled with new features.\nRun this cell to apply PCA and extract the loadings.",
"X = df.copy()\ny = X.pop(\"SalePrice\")\nX = X.loc[:, features]\n\n# `apply_pca`, defined above, reproduces the code from the tutorial\npca, X_pca, loadings = apply_pca(X)\nprint(loadings)",
"1) Interpret Component Loadings\nLook at the loadings for components PC1 and PC3. Can you think of a description of what kind of contrast each component has captured? After you've thought about it, run the next cell for a solution.",
"# View the solution (Run this cell to receive credit!)\nq_1.check()",
"Your goal in this question is to use the results of PCA to discover one or more new features that improve the performance of your model. One option is to create features inspired by the loadings, like we did in the tutorial. Another option is to use the components themselves as features (that is, add one or more columns of X_pca to X).\n2) Create New Features\nAdd one or more new features to the dataset X. For a correct solution, get a validation score below 0.140 RMSLE. (If you get stuck, feel free to use the hint below!)",
"X = df.copy()\ny = X.pop(\"SalePrice\")\n\n# YOUR CODE HERE: Add new features to X.\n# ____\n\nscore = score_dataset(X, y)\nprint(f\"Your score: {score:.5f} RMSLE\")\n\n\n# Check your answer\nq_2.check()\n\n# Lines below will give you a hint or solution code\n#_COMMENT_IF(PROD)_\nq_2.hint()\n#_COMMENT_IF(PROD)_\nq_2.solution()\n\n#%%RM_IF(PROD)%%\nX = df.copy()\ny = X.pop(\"SalePrice\")\n\nX[\"Feature1\"] = X.GrLivArea - X.TotalBsmtSF\n\nscore = score_dataset(X, y)\nprint(f\"Your score: {score:.5f} RMSLE\")\n\nq_2.assert_check_failed()\n\n#%%RM_IF(PROD)%%\n# Solution 1: Inspired by loadings\nX = df.copy()\ny = X.pop(\"SalePrice\")\n\nX[\"Feature1\"] = X.GrLivArea + X.TotalBsmtSF\nX[\"Feature2\"] = X.YearRemodAdd * X.TotalBsmtSF\n\nscore = score_dataset(X, y)\nprint(f\"Your score: {score:.5f} RMSLE\")\n\n\n# Solution 2: Uses components\nX = df.copy()\ny = X.pop(\"SalePrice\")\n\nX = X.join(X_pca)\nscore = score_dataset(X, y)\nprint(f\"Your score: {score:.5f} RMSLE\")\n\nq_2.assert_check_passed()",
"The next question explores a way you can use PCA to detect outliers in the dataset (meaning, data points that are unusually extreme in some way). Outliers can have a detrimental effect on model performance, so it's good to be aware of them in case you need to take corrective action. PCA in particular can show you anomalous variation which might not be apparent from the original features: neither small houses nor houses with large basements are unusual, but it is unusual for small houses to have large basements. That's the kind of thing a principal component can show you.\nRun the next cell to show distribution plots for each of the principal components you created above.",
"sns.catplot(\n y=\"value\",\n col=\"variable\",\n data=X_pca.melt(),\n kind='boxen',\n sharey=False,\n col_wrap=2,\n);",
"As you can see, in each of the components there are several points lying at the extreme ends of the distributions -- outliers, that is.\nNow run the next cell to see those houses that sit at the extremes of a component:",
"# You can change PC1 to PC2, PC3, or PC4\ncomponent = \"PC1\"\n\nidx = X_pca[component].sort_values(ascending=False).index\ndf.loc[idx, [\"SalePrice\", \"Neighborhood\", \"SaleCondition\"] + features]",
"3) Outlier Detection\nDo you notice any patterns in the extreme values? Does it seem like the outliers are coming from some special subset of the data?\nAfter you've thought about your answer, run the next cell for the solution and some discussion.",
"# View the solution (Run this cell to receive credit!)\nq_3.check()",
"Keep Going\nApply target encoding to give a boost to categorical features."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
hypergravity/cham_hates_python
|
notebook/cham_hates_python_04_scientific_computing_optimization.ipynb
|
mit
|
[
"<img src=\"https://www.python.org/static/img/python-logo.png\">\nWelcome to my lessons\n\nBo Zhang (NAOC, bozhang@nao.cas.cn) will have a few lessons on python.\n\nThese are very useful knowledge, skills and code styles when you use python to process astronomical data.\nAll materials can be found on my github page.\njupyter notebook (formerly named ipython notebook) is recommeded to use\n\n\nThese lectures are organized as below:\n1. install python\n2. basic syntax\n3. numerical computing\n4. scientific computing\n5. plotting\n6. astronomical data processing\n7. high performance computing\n8. version control\nnumpy\nDocs: http://docs.scipy.org/doc/numpy/user/index.html\nscipy\nDocs: http://docs.scipy.org/doc/scipy/reference/index.html\nscipy.optimize.minimize\nDocs: http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html\noptimization / minimization",
"%pylab inline\nnp.random.seed(0)\np = [3.2, 5.6, 9.2]\nx = np.arange(-8., 5., 0.1)\ny = np.polyval(p, x) + np.random.randn(x.shape[0])*1.\n\nplt.plot(x, y);\n\n# STEP 1 - define your model\ndef my_model(p, x):\n return np.polyval(p, x)\n\n# STEP 2 - define your cost function\ndef my_costfun(p, x, y):\n return np.sum((my_model(p, x) - y)**2)\n\n# STEP 3 - minimize cost function\nfrom scipy.optimize import minimize\nresult = minimize(my_costfun, np.array([2., 3., 5.]), args=(x,y) )\n\nprint result\n\nprint 'RESULT:\\n', result\nprint ''\nprint 'RELATIVE ERROR:\\n', (result.x - p)/p*100., '%'\nprint ''\nprint 'Hessian ERROR:' #err = sqrt(diag(inv(Hessian)))\nhess_err = np.sqrt(np.diag(result['hess_inv']))\nprint hess_err",
"MCMC (emcee)\n\nMCMC is a convenient tool for drawing a sample from a given probability distribution.\nTherefore, is mostly used to estimate parameters in Bayesian way.\nemcee: http://dan.iel.fm/emcee/current/#",
"from emcee import EnsembleSampler",
"a simple example - draw sample from uniformly distribution",
"def lnprob(theta):\n theta = np.array(theta)\n if np.all(theta>-3.) and np.all(theta<3.):\n return 0\n return -np.inf\n\nnwalkers = 10\nndim = 3\np0 = [np.random.rand(ndim) for i in range(nwalkers)]\nsampler = EnsembleSampler(nwalkers, ndim, lnprob)\npos = sampler.run_mcmc(p0, 2000)\n\nnp.corrcoef(sampler.flatchain[0:2000, 0], sampler.flatchain[2000:4000, 0])\n\nfig = plt.figure(figsize=(12,10))\nax = fig.add_subplot(311)\nax.plot(sampler.chain[:,:,0].T, '-', color='k', alpha=0.3)\nax = fig.add_subplot(312)\nax.plot(sampler.chain[:,:,1].T, '-', color='k', alpha=0.3)\nax = fig.add_subplot(313)\nax.plot(sampler.chain[:,:,2].T, '-', color='k', alpha=0.3);\n\nimport corner\nfig = corner.corner(sampler.flatchain, labels=[\"p0\", \"p1\", \"p2\"],\n truths=[0., 0., 0.])\n# fig.savefig(\"triangle.png\")",
"how about Gaussian distribution?\n\n\n1-D Gauassian\n$p(x|\\mu, \\sigma) \\propto \n\\exp{(-\\frac{(x-\\mu)^2}{2\\sigma^2})}$\n\n\nN-D Gauassian\n$p(\\overrightarrow{x}|\\overrightarrow{\\mu}, \\Sigma) \\propto \\exp{(-\\frac{1}{2}(\\overrightarrow{x}-\\overrightarrow{\\mu})^T\\Sigma (\\overrightarrow{x}-\\overrightarrow{\\mu}))}$\nwhere $\\Sigma$ is the covariance matrix",
"def lnprob(x, mu, ivar):\n# if np.all(np.abs(x)<100.):\n x = x.reshape(-1, 1)\n mu = mu.reshape(-1, 1)\n return -np.dot(np.dot((x-mu).T, ivar), x-mu)\n# else:\n# return -np.inf\n\nmu = np.array([0.1, 0.2, 0.5])\ncov = np.array([[1.0, 0.0, 0.0],\n [0.0, 10, 9],\n [0.0, 9, 10]])\nivar = np.linalg.inv(cov)\nprint 'ivar: \\n', ivar\nprint 'det(cov): \\n', np.linalg.det(cov)\nprint 'det(ivar): \\n', np.linalg.det(ivar)\n\nnwalkers = 10\nndim = 3\np0 = [np.random.rand(ndim) for i in range(nwalkers)]\nsampler = EnsembleSampler(nwalkers, ndim, lnprob, args=(mu, ivar), threads=10)\npos,prob,state = sampler.run_mcmc(p0, 2000)\n\np0\n\nfig = plt.figure(figsize=(12,10))\nax = fig.add_subplot(311)\nax.plot(sampler.chain[:,:,0].T, '-', color='k', alpha=0.3)\nax = fig.add_subplot(312)\nax.plot(sampler.chain[:,:,1].T, '-', color='k', alpha=0.3)\nax = fig.add_subplot(313)\nax.plot(sampler.chain[:,:,2].T, '-', color='k', alpha=0.3);\n\nfig = corner.corner(sampler.flatchain, labels=[\"mu1\", \"mu2\", \"mu3\"],\n truths=mu)\n\nprint mu\nprint ivar",
"how to use MCMC to estimate model parameters?\nsuppose you choose a Gaussian likelihood:\n$L(\\theta|x_i,model) \\propto \\exp{(-\\frac{(x_i-x_{i, model})^2}{2\\sigma^2})} $\n$ \\log{(L(\\theta|x_i,model))} \\propto -\\frac{(x_i-x_{i, model})^2}{2\\sigma^2} = -\\frac{1}{2}{\\chi^2}$",
"def lnprior(theta):\n if np.all(np.abs(theta)<10000.):\n return 0\n else:\n return -np.inf\n\ndef lnlike(theta, x, y):\n y_model = np.polyval(theta, x)\n return -np.sum((y_model-y)**2)\n\ndef lnprob(theta, x, y):\n return lnprior(theta)+lnlike(theta, x, y)\n\nnwalkers = 10\nndim = 3\np0 = [np.random.rand(ndim) for i in range(nwalkers)]\nsampler = EnsembleSampler(nwalkers, ndim, lnprob, args=(x, y), threads=10)\npos,prob,state = sampler.run_mcmc(p0, 500)\n\nnp.corrcoef(sampler.flatchain[0:500, 0], sampler.flatchain[500:1000, 0])\n\nfig = plt.figure(figsize=(12,10))\nax = fig.add_subplot(311)\nax.plot(sampler.chain[:,:,0].T, '-', color='k', alpha=0.3)\nax = fig.add_subplot(312)\nax.plot(sampler.chain[:,:,1].T, '-', color='k', alpha=0.3)\nax = fig.add_subplot(313)\nax.plot(sampler.chain[:,:,2].T, '-', color='k', alpha=0.3);\n\nfig = corner.corner(sampler.flatchain, labels=[\"p0\", \"p1\", \"p2\"],\n truths=p)\n\nsampler.reset()\npos,prob,state = sampler.run_mcmc(pos, 2000)\n\nnp.corrcoef(sampler.flatchain[0:2000, 0], sampler.flatchain[4000:6000, 0])\n\nfig = plt.figure(figsize=(12,10))\nax = fig.add_subplot(311)\nax.plot(sampler.chain[:,:,0].T, '-', color='k', alpha=0.3)\nax = fig.add_subplot(312)\nax.plot(sampler.chain[:,:,1].T, '-', color='k', alpha=0.3)\nax = fig.add_subplot(313)\nax.plot(sampler.chain[:,:,2].T, '-', color='k', alpha=0.3);\n\nfig = corner.corner(sampler.flatchain, labels=[\"p0\", \"p1\", \"p2\"],\n truths=p)\nfig = corner.corner(sampler.flatchain, labels=[\"p0\", \"p1\", \"p2\"],\n truths=result.x)",
"comparison with the results from optimization",
"# truth\np\n\n# MCMC results\nnp.percentile(sampler.flatchain, [15., 50., 85.], axis=0)\n\nprint result.x - hess_err\nprint result.x\nprint result.x + hess_err"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
gfeiden/Notebook
|
Projects/mlt_calib/resampling_tests.ipynb
|
mit
|
[
"Resampling Posterior Distributions\nTests to explore how sensitive resulting modal parameters are to details regarding kernel density estimates (KDEs). We'll look at the case of a straight KDE on the resulting posterior distribution along with cases where we bootstrap resample the posterior distribution, weighted by the posterior probability, prior to calculating the KDE.",
"%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nkde_pdf = np.genfromtxt('data/run08_kde_props.txt') # KDE of full PDF\nkde_pbr = np.genfromtxt('data/run08_kde_props_tmp.txt') # KDE of bootstrap resample on final 75 iterations\nkde_fbr = np.genfromtxt('data/run08_kde_props_tmp2.txt') # KDE of bootstrap resample on full PDF",
"It will first be instructive to see whether there is a difference between parameters estimated from the two different KDEs calculated posteriors that have been bootstrapped resampled.",
"fig, ax = plt.subplots(3, 3, figsize=(12., 12.))\n\nfor i in range(9):\n row = i/3\n col = i%3\n axis = ax[row, col]\n \n axis.plot([min(kde_pbr[:, i]), max(kde_pbr[:, i])], [min(kde_pbr[:, i]), max(kde_pbr[:, i])], \n '-', lw=3, c='#b22222', alpha=0.8)\n axis.plot(kde_pbr[:, i], kde_fbr[:, i], 'o', markersize=5.0, c='#555555', alpha=0.6)",
"Panel 3 in the top right shows estimates for the helium abundance, which was not constrained in this run, and my be safely ignored for comparing the validity of the two methods. Both methods yields similar results with some difference in the inferred age and mixing length. However, they do not appear to significntly affect the results. It is therefore seems most reasonable to use the KDEs computed using bootstrap resampled posteriors from the full MCMC simulations (probability weighted).\nHow does this compare to results where we compute the KDE from the resulting posterior distributions of the MCMC simluation, without weighting results by their probability (a second time)?",
"fig, ax = plt.subplots(3, 3, figsize=(12., 12.))\n\nfor i in range(9):\n row = i/3\n col = i%3\n axis = ax[row, col]\n \n axis.plot([min(kde_fbr[:, i]), max(kde_fbr[:, i])], [min(kde_fbr[:, i]), max(kde_fbr[:, i])], \n '-', lw=3, c='#b22222', alpha=0.8)\n axis.plot(kde_fbr[:, i], kde_pdf[:, i], 'o', markersize=5.0, c='#555555', alpha=0.6)\nfig.tight_layout()",
"Results in this case are quite striking. Ages, mixing lengths, and metallicities all appear quite different between the two modal estimates from their respective KDEs. With metallicities, we find that the KDE applied directly to the posterior distribution function from the MCMC simluation produces metallicities that are up to 0.2 dex higher than when we resample the posterior. We also find that ages tend to be older when using the raw posterior distributions. Similarly, there is a propensity for the raw posteriors to produce higher mixing length parameters as compared to the resampled posterior. \n\nHow do these differences affect the resulting relationships that we derive for the mixing length parameter as a funciton of stellar properties? Let's look at the same two final sets of inferred parameters as the previous figure.",
"fig, ax = plt.subplots(2, 3, figsize=(12., 8.))\n\n# Full Resampled KDE\nax[0, 0].plot(10**kde_fbr[:, 6], kde_fbr[:, 5], 'o', markersize=6., c='#555555', alpha=0.6)\nax[0, 1].plot(kde_fbr[:, 1], kde_fbr[:, 5], 'o', markersize=6., c='#555555', alpha=0.6)\nax[0, 2].plot(kde_fbr[:, 0], kde_fbr[:, 5], 'o', markersize=6., c='#555555', alpha=0.6)\n\n# Raw KDE\nax[1, 0].plot(10**kde_pdf[:, 6], kde_pdf[:, 5], 'o', markersize=6., c='#555555', alpha=0.6)\nax[1, 1].plot(kde_pdf[:, 1], kde_pdf[:, 5], 'o', markersize=6., c='#555555', alpha=0.6)\nax[1, 2].plot(kde_pdf[:, 0], kde_pdf[:, 5], 'o', markersize=6., c='#555555', alpha=0.6)\n\nfig.tight_layout()",
"Herein lies a problem. How we compute the modal value can alter the results. To see what effect this has on inferred correlations, let's compare Spearman, Pearson, and Kendall correlation tests.",
"import scipy.stats as stats",
"First, Spearman $r$ rank-order correlation coefficient. For resampled distribution:",
"stats.spearmanr(10**kde_fbr[:, 6], kde_fbr[:, 5]), \\\n stats.spearmanr(kde_fbr[:, 1], kde_fbr[:, 5]), \\\n stats.spearmanr(kde_fbr[:, 0], kde_fbr[:, 5])\n\nstats.spearmanr(10**kde_pdf[:, 6], kde_pdf[:, 5]), \\\n stats.spearmanr(kde_pdf[:, 1], kde_pdf[:, 5]), \\\n stats.spearmanr(kde_pdf[:, 0], kde_pdf[:, 5])",
"Now for Pearson $\\rho$ correlation coefficients.",
"stats.pearsonr(10**kde_fbr[:, 6], kde_fbr[:, 5]), \\\n stats.pearsonr(kde_fbr[:, 1], kde_fbr[:, 5]), \\\n stats.pearsonr(kde_fbr[:, 0], kde_fbr[:, 5])\n\nstats.pearsonr(10**kde_pdf[:, 6], kde_pdf[:, 5]), \\\n stats.pearsonr(kde_pdf[:, 1], kde_pdf[:, 5]), \\\n stats.pearsonr(kde_pdf[:, 0], kde_pdf[:, 5])",
"And finally, Kendall $\\tau$ correlation coefficients.",
"stats.kendalltau(10**kde_fbr[:, 6], kde_fbr[:, 5]), \\\n stats.kendalltau(kde_fbr[:, 1], kde_fbr[:, 5]), \\\n stats.kendalltau(kde_fbr[:, 0], kde_fbr[:, 5])\n\nstats.kendalltau(10**kde_pdf[:, 6], kde_pdf[:, 5]), \\\n stats.kendalltau(kde_pdf[:, 1], kde_pdf[:, 5]), \\\n stats.kendalltau(kde_pdf[:, 0], kde_pdf[:, 5])",
"The results can largely be inferred by visiual inspection of the two sets of data. In the case of bootstrap resampled inferences, the two dominant correlations are between temperature and mass, while there is not clear correlation with metallicity. For this data, the Pearson $\\rho$ correlation coefficient is not necessarily a reasonable test, given that there is not necessarily a linear relationship between the different parameters. However, both Kendall $\\tau$ and Spearman $r$ correlation coefficients, which do not assume linearity, show what could be significant trends with temperature and mass. These trends are far more significant when we use a bootstrapped resampled KDE as opposed to the raw MCMC simluation results. \nThis is quite concerning, since one wishes to have a single unique modal value that is relatively independent of the method used for estimating it. Is there something behind the differences? I postulate that the differences may be due to the adopted metallicity uncertainties and whether one wishes to place more confidence in the observed values. As a test, I recomputed the bootstrap resampled estimates using a larger uncertainty on the metallicity (0.2 dex instead of 0.05 dex) to see if that produces a significant differences in the results.",
"kde_fbr2 = np.genfromtxt('data/run08_kde_props_tmp3.txt')",
"Comparing with the KDE computed from the raw posterior distributions,",
"fig, ax = plt.subplots(3, 3, figsize=(12., 12.))\n\nfor i in range(9):\n row = i/3\n col = i%3\n axis = ax[row, col]\n \n axis.plot([min(kde_fbr2[:, i]), max(kde_fbr2[:, i])], [min(kde_fbr2[:, i]), max(kde_fbr2[:, i])], \n '-', lw=3, c='#b22222', alpha=0.8)\n axis.plot(kde_fbr2[:, i], kde_pdf[:, i], 'o', markersize=5.0, c='#555555', alpha=0.6)\n \nfig.tight_layout()",
"Here we find that the results, while still producing different estimates for ages and mixing lengths, largely reproduces the metallicities inferred from the raw KDE maximum.\nWhat do the resulting comparisons with stellar properties look like?",
"fig, ax = plt.subplots(1, 3, figsize=(12., 4.))\n\n# Full Resampled KDE, wider [Fe/H] uncertainty (weaker prior)\nax[0].plot(10**kde_fbr2[:, 6], kde_fbr2[:, 5], 'o', markersize=6., c='#555555', alpha=0.6)\nax[1].plot(kde_fbr2[:, 1], kde_fbr2[:, 5], 'o', markersize=6., c='#555555', alpha=0.6)\nax[2].plot(kde_fbr2[:, 0], kde_fbr2[:, 5], 'o', markersize=6., c='#555555', alpha=0.6)\n\nfig.tight_layout()",
"And the Spearman $r$ rank-correlation coefficients from the above comparisons,",
"stats.spearmanr(10**kde_fbr2[:, 6], kde_fbr2[:, 5]), \\\n stats.spearmanr(kde_fbr2[:, 1], kde_fbr2[:, 5]), \\\n stats.spearmanr(kde_fbr2[:, 0], kde_fbr2[:, 5])",
"We recover the metallicity correlation. In fact, the recovered rank-correlation coefficients are fully consistent with modal values computed from the raw posterior distributions. This result highlights the sensitivity of the resulting correlation between metallicity and mixing length parameter with our confidence in the observed metallicities (weak or strong prior).\nCritically, the strength of the metallicity prior does not alter the inferred correlations between mass and $T_{\\rm eff}$ with $\\alpha_{\\rm MLT}$. These two trends appear robust against our prior assumptions regarding the metallicity measurement uncertianty.\n\nTri-linear analysis\nTo compare with other works in the literature, a tri-linear analysis should be performed so that we can simultaneously extract how $\\alpha$ varies as a function of multiple parameters.",
"import statsmodels.api as sm",
"First we need to reorganize the data so that we have an array with only the desired quantities.",
"fit_data_all = kde_fbr2 # include all points\nlogg_all = np.log10(6.67e-8*fit_data_all[:, 0]*1.988e33/(fit_data_all[:, 26]*6.955e10)**2)\n\n# fit with all points\nall_data = np.column_stack((logg_all, fit_data_all[:, 1]))\nall_data = np.column_stack((all_data, fit_data_all[:, 6]))\nall_data = sm.tools.tools.add_constant(all_data, prepend=True)\n\n# remove noted outliers (high and low alphas)\nfit_data_low = np.array([star for star in kde_fbr2 if 0.55 <= star[5] <= 2.5])\nlogg_low = np.log10(6.67e-8*fit_data_low[:, 0]*1.988e33/(fit_data_low[:, 26]*6.955e10)**2)\n\n# fit to lower sequence\nlow_data = np.column_stack((logg_low, fit_data_low[:, 1]))\nlow_data = np.column_stack((low_data, fit_data_low[:, 6]))\nlow_data = sm.tools.tools.add_constant(low_data, prepend=True)",
"Perform a trilinear regression to the data.",
"trifit_all = sm.regression.linear_model.GLS(fit_data_all[:, 5], all_data, sigma=fit_data_all[:, 14]).fit()\ntrifit_low = sm.regression.linear_model.GLS(fit_data_low[:, 5], low_data, sigma=fit_data_low[:, 14]).fit()\n\nprint(trifit_all.summary()) # All data points included",
"When fitting data for all stars, we find probable cause to reject the null hypothesis for perceived correlations between the effective temperature, log(g), and the convective mixing length parameters. However, we find that we are unable to reject the null hypothesis when testing whether metallicities and mixing length parameters are correlated. It should be noted that a linear model does not provide a good description of the data, as indicated by the poor fit quality ($R = 0.66$). Qualitative assessment of figures above suggests this would have been the case, as mixing length parameters for the lowest mass stars form two sub-populations: one at higher $\\alpha$ and the other at low $\\alpha$.",
"print(trifit_low.summary()) # \"outliers\" removed",
"A linear model performs well for fitting the subset of the data where points affected by grid boundaries have been removed ($R = 0.84$). Assuming a significance test of $p < 0.01$, we would reject the null hypothesis for all three parameters, finding plausible evidence that the correlations shown above are not the result of randomness.",
"fig, ax = plt.subplots(4, 2, figsize=(12., 16.))\n\nfor row in ax:\n for axis in row:\n axis.tick_params(which='major', axis='both', length=14., labelsize=14.)\n\n#=================\n# All data\n#\nb0, b1, b2, b3 = trifit_all.params\n# total residuals\nax[0, 0].set_title('All Data Points', fontsize=16.)\nax[0, 0].set_xlabel('Mass (Msun)', fontsize=14.)\nax[0, 0].plot([0.0, 1.0], [0.0, 0.0], dashes=(20., 5.), lw=2, c='#b22222', alpha=0.8)\nax[0, 0].plot(fit_data_all[:, 0], trifit_all.resid, 'o', c='#555555', alpha=0.6)\n\n# variations with b1 (mass)\nloggs = np.arange(4.4, 5.2, 0.05)\ndepend = fit_data_all[:, 5] - b0 - b2*fit_data_all[:, 1] - b3*fit_data_all[:, 6]\nax[1, 0].set_xlabel('log(g)', fontsize=14.)\nax[1, 0].plot(loggs, loggs*b1, dashes=(20., 5.), lw=2, c='#b22222', alpha=0.8)\nax[1, 0].plot(logg_all, depend, 'o', c='#555555', alpha=0.6)\n\n# variations with b2 (metallicity)\nmetals = np.arange(-0.6, 0.60, 0.05)\ndepend = fit_data_all[:, 5] - b0 - b1*logg_all - b3*fit_data_all[:, 6]\nax[2, 0].set_xlabel('[M/H] (dex)', fontsize=14.)\nax[2, 0].plot(metals, metals*b2, dashes=(20., 5.), lw=2, c='#b22222', alpha=0.8)\nax[2, 0].plot(fit_data_all[:, 1], depend, 'o', c='#555555', alpha=0.6)\n\n# variations with b3 (logTeff)\nlogT = np.arange(3.4, 3.75, 0.05)\ndepend = fit_data_all[:, 5] - b0 - b1*logg_all - b2*fit_data_all[:, 1]\nax[3, 0].set_xlabel('log(Teff)', fontsize=14.)\nax[3, 0].plot(logT, logT*b3, dashes=(20., 5.), lw=2, c='#b22222', alpha=0.8)\nax[3, 0].plot(fit_data_all[:, 6], depend, 'o', c='#555555', alpha=0.6)\n\n#=================\n# Outliers removed\n#\nb0, b1, b2, b3 = trifit_low.params\n# total residuals\nax[0, 1].set_title('Subset of Data Points', fontsize=16.)\nax[0, 1].set_xlabel('Mass (Msun)', fontsize=14.)\nax[0, 1].plot([0.0, 1.0], [0.0, 0.0], dashes=(20., 5.), lw=2, c='#b22222', alpha=0.8)\nax[0, 1].plot(fit_data_low[:, 0], trifit_low.resid, 'o', c='#555555', alpha=0.6)\n\n# variations with b1 (mass)\nloggs = np.arange(4.3, 5.2, 0.05)\ndepend = fit_data_low[:, 5] - b0 - b2*fit_data_low[:, 1] - b3*fit_data_low[:, 6]\nax[1, 1].set_xlabel('log(g)', fontsize=14.)\nax[1, 1].plot(loggs, loggs*b1, dashes=(20., 5.), lw=2, c='#b22222', alpha=0.8)\nax[1, 1].plot(logg_low, depend, 'o', c='#555555', alpha=0.6)\n\n# variations with b2 (metallicity)\nmetals = np.arange(-0.6, 0.60, 0.05)\ndepend = fit_data_low[:, 5] - b0 - b1*logg_low - b3*fit_data_low[:, 6]\nax[2, 1].set_xlabel('[M/H] (dex)', fontsize=14.)\nax[2, 1].plot(metals, metals*b2, dashes=(20., 5.), lw=2, c='#b22222', alpha=0.8)\nax[2, 1].plot(fit_data_low[:, 1], depend, 'o', c='#555555', alpha=0.6)\n\n# variations with b3 (logTeff)\nlogT = np.arange(3.4, 3.75, 0.05)\ndepend = fit_data_low[:, 5] - b0 - b1*logg_low - b2*fit_data_low[:, 1]\nax[3, 1].set_xlabel('log(Teff)', fontsize=14.)\nax[3, 1].plot(logT, logT*b3, dashes=(20., 5.), lw=2, c='#b22222', alpha=0.8)\nax[3, 1].plot(fit_data_low[:, 6], depend, 'o', c='#555555', alpha=0.6)\n\nfig.tight_layout()",
"On the left, we have residuals and partial resdiuals when all data points are used in the trilinear analysis. On the right, the same for when points affected by the grid boundaries have been removed. The top two panels show total residuals of the fit to mixing length parameters as a function of inferred stellar mass. There are issues at the lowest masses in both instances, either due to the influence of low mass stars with high values for the inferred mixing length parameters, or because there is a genuine change of slope in the relation that is not adequately reproduced by a linear model. This change of slope may be the result of not treating atmospheric physics self-consistently (fixed $\\alpha = 1.5$), which may affect cool stars where convective is occuring in the outer, optically thin layers.\nPartial residuals are shown in the bottom six panels. These isolate the impact of the given independent variables on the dependent variable (here $\\alpha$). Therefore, we see that in each case, there is a direct correlation between the dependent parameter (stellar properties) and $\\alpha$, with the exception of the metallicity on the left-hand side for which we cannot rule out the null hypothesis that the correlation is the result of random scatter. For the rest of the cases, the correlations are readily apparent. \nWe can thus conclude that, for our sample of stars under the model conditions/assumptions present in the Dartmouth models, the mixing length parameters is: (1) directly correlated with log(g) and log(Teff) and (2) plausibly correlated with metallicity. \nWould like to construct a figure like Figure 5 from Ludwig, Freytag, & Steffen (1999). In lieu of that (for the moment), we can compute our value for the mixing length at some of their grid points.",
"logg = 4.44\nlogT = np.log10(4500.)\nFe_H = 0.0\n\nprint(\"Our alpha: {:5.3f}; LFS alpha: 1.7\".format(b0 + b1*logg + b2*Fe_H + b3*logT))\n\nlogg = 4.44\nlogT = np.log10(5770.)\nFe_H = 0.0\n\nprint(\"Our alpha: {:5.3f}; LFS alpha: 1.6\".format(b0 + b1*logg + b2*Fe_H + b3*logT))",
"Key take away is that, whereas LFS predict a decrease in alpha as a function of Teff, we predict an increase. Curiously, we should note that we very nearly recover our solar calibrated value for the Sun from our fit:",
"logg = 4.44\nlogT = np.log10(5778.) # Note: differs from new IAU recommended value (5771.8)\nFe_H = 0.0\n\nprint(\"Fit alpha: {:5.3f}; Solar alpha: 1.884\".format(b0 + b1*logg + b2*Fe_H + b3*logT))",
"There is an approximately 4% difference between the extrapolated solar mixing length and the true solar calibrated value. For comparison, the mixing length trilinear fit presented by Bonaca et al. (2012) yields a solar mixing length parameters 6% lower than their solar calibrated value -- again, quite good agreement given that this did not necessarily need to be the case.\n\nBF noted that our relations appear to be significantly steeper than those predicted by both Bonaca et al. (2012) and 3D RHD models. The latter is certainly true, but how do our relations compare to Bonaca et al. (2012), suitably re-scaled to our solar calibration point? First we need to define the coefficients in their fit.",
"c0, c1, c2, c3 = [-12.77, 0.54, 3.18, 0.52] # const., log(g), log(Teff), [M/H]",
"Now we can compute values for $\\alpha$ using our fit and those from Bonaca et al. (2012).",
"our_alphas = trifit_low.predict(low_data)/1.884\nb12_alphas = (c0 + c1*low_data[:, 1] + c2*low_data[:, 3] + c3*low_data[:, 2])/1.69\n\nour_solar = (b0 + b1*logg + b2*Fe_H + b3*logT)/1.884\nb12_solar = (c0 + c1*logg + c2*logT + c3*Fe_H)/1.69",
"Now perform a direct comparison, but normalized to solar values (see above),",
"fig, ax = plt.subplots(1, 1, figsize=(5., 5.))\n\nax.set_xlim(0.2, 1.3)\nax.set_ylim(0.5, 1.0)\n\nax.plot(our_solar, b12_solar, '*', markersize=10., c='y')\nax.plot([0.0, 2.5], [0.0, 2.5], dashes=(20., 5.), lw=2, c='#b22222')\nax.plot(our_alphas, b12_alphas, 'o', markersize=6.0, c='#555555', alpha=0.6)",
"Of course, we should perhaps only look at the limited range of our sample that overlaps with the calibration range of Bonaca et al. (2012).",
"low_sub = np.array([star for star in low_data if star[1] <= 4.6])\n\nour_alphas = trifit_low.predict(low_sub)\nb12_alphas = (c0 + c1*low_sub[:, 1] + c2*low_sub[:, 3] + c3*low_sub[:, 2])*1.884/1.69\n\nfig, ax = plt.subplots(1, 1, figsize=(5., 5.))\n\nax.set_xlim(0.9, 2.0)\nax.set_ylim(0.9, 2.0)\n\nax.plot(our_solar*1.884, b12_solar*1.884, '*', markersize=10., c='y')\nax.plot([0.0, 2.5], [0.0, 2.5], dashes=(20., 5.), lw=2, c='#b22222')\nax.plot(our_alphas, b12_alphas, 'o', markersize=6.0, c='#555555', alpha=0.6)",
"There is a clear systematic deviation of our predictions from those of Bonaca et al. (2012), with our derived mixing length parameters covering a broader range of values than theirs. The narrow range of parameters could be due to the narrow range of stellar parameters covered by their investigation. What could be behind the noted systematic difference? Avenues for exploration include:\n 1. Helium abundances. Runs above assume Z-scaled helium abundance.\n 2. Differences in treatment of surface boundary conditions.\n 3. Details of our treatment of surface boundary conditions.\nWe'll explore each, in turn.\nHelium abundances\nInstead of using Z-scaled helium mass fractions, we can helium abundances predicted by models in a separate MCMC run (Run 5). This will require resampling data for stars in that Run.",
"kde_run5 = np.genfromtxt('data/run05_kde_props_tmp3.txt')",
"Now, let's compare reults for the mixing lenght parameter from Run 5 (float Y) and Run 8 (Z-scaled Y).",
"fig, ax = plt.subplots(1, 1, figsize=(5., 5.))\n\nax.set_xlim(0.5, 3.0)\nax.set_ylim(0.5, 3.0)\n\nax.plot(kde_fbr[:, 5], kde_run5[:, 5], 'o', c='#555555', alpha=0.6)",
"Prune the sample to select only stars with logg below 4.6.",
"logg_run5 = np.log10(6.67e-8*kde_run5[:, 0]*1.988e33/(kde_run5[:, 26]*6.955e10)**2)\nkde_run5 = np.column_stack((kde_run5, logg_run5))\n\nsub_run5 = np.array([star for star in kde_run5 if star[-1] <= 4.6 and star[30] > -0.5])\n\nb12_run5 = (c0 + c1*sub_run5[:, -1] + c2*sub_run5[:, 6] + c3*sub_run5[:, 1])*1.884/1.69",
"Compare Bonaca et al. results for Run 5 with our inferred values.",
"fig, ax = plt.subplots(1, 1, figsize=(5., 5.))\n\nax.set_xlim(0.7, 2.0)\nax.set_ylim(0.7, 2.0)\n\nax.plot([0.7, 2.0], [0.7, 2.0], dashes=(20., 5.), lw=2, c='#b22222')\nax.plot(sub_run5[:, 5], b12_run5, 'o', c='#555555', alpha=0.9)\nax.plot(our_alphas, b12_alphas, 'o', markersize=6.0, c='#555555', alpha=0.3)",
"Results from this comparison are nearly identical to those from the previous comparison. Data from the previous run are shown as light gray points above.\nBoundary Condition Fit Depth\nHere, we investigate whether our fitting of the surface boundary conditions below the nominal photosphere ($\\tau = 10$) produces different results as the mixing length parameter is changed. Such a dependence would not be too surprising, as outer radiative layers are typically more sensitive to variations in the mixing length parameter. Given that we are not capturing variation in those layers (fixed atmospheric boundary conditions), this may emphasize the role of deeper convection for setting the stellar radii in our models. The latter is more difficult to perturb, so lower values of the mixing length may be requied to provide accurate models. \nLet's now look at how the stellar radius is affected by our choice of boundary conditions for three separate masses that span the range of stellar masses probed in our study.",
"m0800_t010 = np.genfromtxt('../../../evolve/dmestar/trk/gas07/p000/a0/amlt2202/m0800_GAS07_p000_p0_y26_mlt2.202.trk')\nm0800_t001 = np.genfromtxt('../../../evolve/data/mltcal/model/trks/m0800_GAS07_p000_p0_y26_mlt1.911.trk')\nm0800_t001_a = np.genfromtxt('../../../evolve/data/mltcal/model/trks/m0800_GAS07_p000_p0_y26_mlt1.000.trk')\n\nfig, ax = plt.subplots(1, 1, figsize=(12., 4.))\n\nax.set_xlim(0.01, 5.)\nax.set_ylim(0.6, 1.2)\n\nax.semilogx(m0800_t001[:,0]/1.0e9, 10**m0800_t001[:,4], lw=3)\nax.semilogx(m0800_t001_a[:,0]/1.0e9, 10**m0800_t001_a[:,4], '--', lw=3)\nax.semilogx(m0800_t010[:,0]/1.0e9, 10**m0800_t010[:,4], '-.', lw=3)\n\n10**-0.07019 / 10**-0.12029\n\nm0780_gs98 = np.genfromtxt('../../../evolve/models/grid/p000/a0/amlt1884/m0780_GS98_p000_p0_mlt1.884.trk')\nm0780_gs98_a = np.genfromtxt('../../../evolve/models/mlt/m0780_GS98_m008_p0_y27_mlt1.088.trk')\n\nfig, ax = plt.subplots(1, 1, figsize=(12., 4.))\n\nax.set_xlim(0.01, 5.)\nax.set_ylim(0.6, 1.2)\n\nax.semilogx(m0780_gs98[:,0]/1.0e9, 10**m0780_gs98[:,4], '-', lw=3)\nax.semilogx(m0780_gs98_a[:,0]/1.0e9, 10**m0780_gs98_a[:,4], '--', lw=3)\n\n10**-0.10302 / 10**-0.13523"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
gschivley/ghgforcing
|
Example/Example notebook.ipynb
|
mit
|
[
"%matplotlib inline\nfrom __future__ import division\nfrom ghgforcing import CO2, CH4\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns",
"Example uses of ghgforcing\nDefine the time axis along which all emissions and calculations take place, from year 0 to 100. The time-step (tstep) is set to 0.01 years to improve the accuracy of calcuations.",
"tstep = 0.01\ntime = np.linspace(0, 100, 100/tstep+1)",
"Deterministic results\nStart with a pulse emission of CO<sub>2</sub>",
"co2_emission = np.zeros_like(time)\nco2_emission[0] = 1\n\nco2_forcing = CO2(co2_emission, time, kind='RF')",
"The CO2 and CH4 functions return values at 1-year timesteps - 101 values from year 0 to 100. The calculations are done at a default time-step of 0.01 years, but that level of detail seems unnecessary and burdensome for an output.",
"co2_forcing.size",
"Having the outputs on an annual basis makes it really easy to plot.",
"plt.plot(co2_forcing)\nplt.xlabel('Time', size=14)\nplt.ylabel('$W \\ m^{-2}$', size=14)",
"Pulse emissions of fossil CH<sub>4</sub> with and non-fossil CH<sub>4</sub> without climate-carbon feedbacks.",
"ch4_emission = co2_emission.copy()\n\nch4_forcing_with = CH4(ch4_emission, time, kind='RF', cc_fb=True)\nch4_forcing_without = CH4(ch4_emission, time, kind='RF', \n cc_fb=False, decay=False)\n\nplt.plot(ch4_forcing_with, label = 'Fossil CH$_4$ with cc-fb')\nplt.plot(ch4_forcing_without, label = 'Non-fossil CH$_4$ without cc-fb')\nplt.xlabel('Time', size=14)\nplt.ylabel('$W \\ m^{-2}$', size=14)\nplt.legend(fontsize=12)",
"Continuous CO<sub>2</sub> emissions - 1kg/year\nEmissions at each time-step should be equal to the annual emission rate.",
"co2_cont = np.ones_like(time)\nco2_cont_rf = CO2(co2_cont, time)\n\nplt.plot(co2_cont_rf)\nplt.xlabel('Time', size=14)\nplt.ylabel('$W \\ m^{-2}$', size=14)",
"Stochastic (Monte Carlo) results\nJust using the continuous CO<sub>2</sub> emissions scenario.\nMean and +/- 1 sigma\nWhen full_output is False, the functions return a Pandas DataFrame",
"runs = 500\n\nco2_cont_mc = CO2(co2_cont, time, kind='RF',\n runs=runs, full_output=False)\n\nco2_cont_mc.head()\n\nplt.plot(co2_cont_mc['mean'])\nplt.fill_between(x=np.arange(0,101), \n y1=co2_cont_mc['+sigma'],\n y2=co2_cont_mc['-sigma'],\n alpha=0.4)\nplt.xlabel('Time', size=14)\nplt.ylabel('$W \\ m^{-2}$', size=14)",
"Full Monte Carlo in addition to mean and +/- 1 sigma\nWhen full_output is True, the functions return a Pandas DataFrame with the mean and +/- 1 sigma, and a numpy array with the full Monte Carlo results.",
"co2_cont_mc, co2_cont_full = CO2(co2_cont, time, kind='RF',\n runs=runs, full_output=True)",
"The co2_cont_mc results are the same as above",
"co2_cont_mc.head()",
"I'm putting the co2_cont_full into a DataFrame just to make showing some of the results easier.",
"df = pd.DataFrame(co2_cont_full)\n\ndf.loc[:,:200].plot(legend=False, c='b', alpha=0.2)\nplt.xlabel('Time', size=14)\nplt.ylabel('$W \\ m^{-2}$', size=14)"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
jupyter/nbgrader
|
nbgrader/tests/preprocessors/files/test.ipynb
|
bsd-3-clause
|
[
"For this problem set, we'll be using the Jupyter notebook:\n\n\nPart A (2 points)\nWrite a function that returns a list of numbers, such that $x_i=i^2$, for $1\\leq i \\leq n$. Make sure it handles the case where $n<1$ by raising a ValueError.",
"def squares(n):\n \"\"\"Compute the squares of numbers from 1 to n, such that the \n ith element of the returned list equals i^2.\n \n \"\"\"\n ### BEGIN SOLUTION\n if n < 1:\n raise ValueError(\"n must be greater than or equal to 1\")\n return [i ** 2 for i in range(1, n + 1)]\n ### END SOLUTION",
"Your function should print [1, 4, 9, 16, 25, 36, 49, 64, 81, 100] for $n=10$. Check that it does:",
"squares(10)\n\n\"\"\"Check that squares returns the correct output for several inputs\"\"\"\nfrom nose.tools import assert_equal\nassert_equal(squares(1), [1])\nassert_equal(squares(2), [1, 4])\nassert_equal(squares(10), [1, 4, 9, 16, 25, 36, 49, 64, 81, 100])\nassert_equal(squares(11), [1, 4, 9, 16, 25, 36, 49, 64, 81, 100, 121])\n\n\"\"\"Check that squares raises an error for invalid inputs\"\"\"\nfrom nose.tools import assert_raises\nassert_raises(ValueError, squares, 0)\nassert_raises(ValueError, squares, -4)",
"Part B (1 point)\nUsing your squares function, write a function that computes the sum of the squares of the numbers from 1 to $n$. Your function should call the squares function -- it should NOT reimplement its functionality.",
"def sum_of_squares(n):\n \"\"\"Compute the sum of the squares of numbers from 1 to n.\"\"\"\n ### BEGIN SOLUTION\n return sum(squares(n))\n ### END SOLUTION",
"The sum of squares from 1 to 10 should be 385. Verify that this is the answer you get:",
"sum_of_squares(10)\n\n\"\"\"Check that sum_of_squares returns the correct answer for various inputs.\"\"\"\nassert_equal(sum_of_squares(1), 1)\nassert_equal(sum_of_squares(2), 5)\nassert_equal(sum_of_squares(10), 385)\nassert_equal(sum_of_squares(11), 506)\n\n\"\"\"Check that sum_of_squares relies on squares.\"\"\"\norig_squares = squares\ndel squares\ntry:\n assert_raises(NameError, sum_of_squares, 1)\nexcept AssertionError:\n raise AssertionError(\"sum_of_squares does not use squares\")\nfinally:\n squares = orig_squares",
"Part C (1 point)\nUsing LaTeX math notation, write out the equation that is implemented by your sum_of_squares function.\n$\\sum_{i=1}^n i^2$\n\nPart D (2 points)\nFind a usecase for your sum_of_squares function and implement that usecase in the cell below.",
"def pyramidal_number(n):\n \"\"\"Returns the n^th pyramidal number\"\"\"\n return sum_of_squares(n)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
azhurb/deep-learning
|
first-neural-network/Your_first_neural_network.ipynb
|
mit
|
[
"Your first neural network\nIn this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.",
"%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt",
"Load and prepare the data\nA critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!",
"data_path = 'Bike-Sharing-Dataset/hour.csv'\n\nrides = pd.read_csv(data_path)\n\nrides.head()",
"Checking out the data\nThis dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.\nBelow is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower over all ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.",
"rides[:24*10].plot(x='dteday', y='cnt')",
"Dummy variables\nHere we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().",
"dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']\nfor each in dummy_fields:\n dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)\n rides = pd.concat([rides, dummies], axis=1)\n\nfields_to_drop = ['instant', 'dteday', 'season', 'weathersit', \n 'weekday', 'atemp', 'mnth', 'workingday', 'hr']\ndata = rides.drop(fields_to_drop, axis=1)\ndata.head()",
"Scaling target variables\nTo make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.\nThe scaling factors are saved so we can go backwards when we use the network for predictions.",
"quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']\n# Store scalings in a dictionary so we can convert back later\nscaled_features = {}\nfor each in quant_features:\n mean, std = data[each].mean(), data[each].std()\n scaled_features[each] = [mean, std]\n data.loc[:, each] = (data[each] - mean)/std",
"Splitting the data into training, testing, and validation sets\nWe'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.",
"# Save data for approximately the last 21 days \ntest_data = data[-21*24:]\n\n# Now remove the test data from the data set \ndata = data[:-21*24]\n\n# Separate the data into features and targets\ntarget_fields = ['cnt', 'casual', 'registered']\nfeatures, targets = data.drop(target_fields, axis=1), data[target_fields]\ntest_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]",
"We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).",
"# Hold out the last 60 days or so of the remaining data as a validation set\ntrain_features, train_targets = features[:-60*24], targets[:-60*24]\nval_features, val_targets = features[-60*24:], targets[-60*24:]",
"Time to build the network\nBelow you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.\n<img src=\"assets/neural_network.png\" width=300px>\nThe network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression, the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation.\nWe use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation.\n\nHint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.\n\nBelow, you have these tasks:\n1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function.\n2. Implement the forward pass in the train method.\n3. Implement the backpropagation algorithm in the train method, including calculating the output error.\n4. Implement the forward pass in the run method.",
"class NeuralNetwork(object):\n def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):\n # Set number of nodes in input, hidden and output layers.\n self.input_nodes = input_nodes\n self.hidden_nodes = hidden_nodes\n self.output_nodes = output_nodes\n\n # Initialize weights\n self.weights_input_to_hidden = np.random.normal(0.0, self.input_nodes**-0.5, \n (self.input_nodes, self.hidden_nodes))\n\n self.weights_hidden_to_output = np.random.normal(0.0, self.hidden_nodes**-0.5, \n (self.hidden_nodes, self.output_nodes))\n self.lr = learning_rate\n \n #### TODO: Set self.activation_function to your implemented sigmoid function ####\n #\n # Note: in Python, you can define a function with a lambda expression,\n # as shown below.\n self.activation_function = lambda x : 1 / (1 + np.exp(-x)) # Replace 0 with your sigmoid calculation.\n \n ### If the lambda code above is not something you're familiar with,\n # You can uncomment out the following three lines and put your \n # implementation there instead.\n #\n #def sigmoid(x):\n # return 0 # Replace 0 with your sigmoid calculation here\n #self.activation_function = sigmoid\n \n \n def train(self, features, targets):\n ''' Train the network on batch of features and targets. \n \n Arguments\n ---------\n \n features: 2D array, each row is one data record, each column is a feature\n targets: 1D array of target values\n \n '''\n #print('features')\n ##print(features)\n #print('targets')\n #print(targets)\n n_records = features.shape[0]\n delta_weights_i_h = np.zeros(self.weights_input_to_hidden.shape)\n #print('delta_weights_i_h')\n #print(delta_weights_i_h)\n delta_weights_h_o = np.zeros(self.weights_hidden_to_output.shape)\n #print('delta_weights_h_o')\n #print(delta_weights_h_o)\n for X, y in zip(features, targets):\n #### Implement the forward pass here ####\n ### Forward pass ###\n # TODO: Hidden layer - Replace these values with your calculations.\n #print('X')\n #print(X)\n #print('y')\n #print(y)\n hidden_inputs = np.dot(X, self.weights_input_to_hidden) # signals into hidden layer\n #print('hidden_inputs')\n #print(hidden_inputs)\n hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer\n #print('hidden_outputs')\n #print(hidden_outputs)\n # TODO: Output layer - Replace these values with your calculations.\n final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) # signals into final output layer\n #print('final_inputs')\n #print(final_inputs)\n final_outputs = final_inputs#self.activation_function(final_inputs) # signals from final output layer\n #print('final_outputs')\n #print(final_outputs)\n #### Implement the backward pass here ####\n ### Backward pass ###\n\n # TODO: Output error - Replace this value with your calculations.\n error = y - final_outputs # Output layer error is the difference between desired target and actual output.\n #print('error')\n #print(error)\n # TODO: Calculate the hidden layer's contribution to the error\n hidden_error = np.dot(self.weights_hidden_to_output, error)\n #print('hidden_error')\n #print(hidden_error) \n # TODO: Backpropagated error terms - Replace these values with your calculations.\n output_error_term = error * 1\n #print('output_error_term')\n #print(output_error_term)\n #hidden_error_term = np.dot(self.weights_hidden_to_output, output_error_term) * hidden_outputs (1 - hidden_outputs)\n \n hidden_error_term = hidden_error * hidden_outputs * (1 - hidden_outputs)\n #print('hidden_error_term')\n #print(hidden_error_term)\n # Weight step 
(input to hidden)\n delta_weights_i_h += hidden_error_term * X[:, None]\n #print('delta_weights_i_h')\n #print(delta_weights_i_h)\n # Weight step (hidden to output)\n delta_weights_h_o += output_error_term * hidden_outputs[:, None]\n #print('delta_weights_h_o')\n #print(delta_weights_h_o)\n\n \n # TODO: Update the weights - Replace these values with your calculations.\n self.weights_hidden_to_output += self.lr * delta_weights_h_o / n_records # update hidden-to-output weights with gradient descent step\n self.weights_input_to_hidden += self.lr * delta_weights_i_h / n_records # update input-to-hidden weights with gradient descent step\n \n def run(self, features):\n ''' Run a forward pass through the network with input features \n \n Arguments\n ---------\n features: 1D array of feature values\n '''\n \n #### Implement the forward pass here ####\n # TODO: Hidden layer - replace these values with the appropriate calculations.\n hidden_inputs = np.dot(features, self.weights_input_to_hidden) # signals into hidden layer\n hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer\n \n # TODO: Output layer - Replace these values with the appropriate calculations.\n final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) # signals into final output layer\n final_outputs = final_inputs # signals from final output layer \n \n return final_outputs\n\ndef MSE(y, Y):\n return np.mean((y-Y)**2)",
"Unit tests\nRun these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly befor you starting trying to train it. These tests must all be successful to pass the project.",
"import unittest\n\ninputs = np.array([[0.5, -0.2, 0.1]])\ntargets = np.array([[0.4]])\ntest_w_i_h = np.array([[0.1, -0.2],\n [0.4, 0.5],\n [-0.3, 0.2]])\ntest_w_h_o = np.array([[0.3],\n [-0.1]])\n\nclass TestMethods(unittest.TestCase):\n \n ##########\n # Unit tests for data loading\n ##########\n \n def test_data_path(self):\n # Test that file path to dataset has been unaltered\n self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')\n \n def test_data_loaded(self):\n # Test that data frame loaded\n self.assertTrue(isinstance(rides, pd.DataFrame))\n \n ##########\n # Unit tests for network functionality\n ##########\n\n def test_activation(self):\n network = NeuralNetwork(3, 2, 1, 0.5)\n # Test that the activation function is a sigmoid\n self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))\n\n def test_train(self):\n # Test that weights are updated correctly on training\n network = NeuralNetwork(3, 2, 1, 0.5)\n network.weights_input_to_hidden = test_w_i_h.copy()\n network.weights_hidden_to_output = test_w_h_o.copy()\n \n network.train(inputs, targets)\n #print('network.weights_hidden_to_output')\n #print(network.weights_hidden_to_output)\n self.assertTrue(np.allclose(network.weights_hidden_to_output, \n np.array([[ 0.37275328], \n [-0.03172939]])))\n self.assertTrue(np.allclose(network.weights_input_to_hidden,\n np.array([[ 0.10562014, -0.20185996], \n [0.39775194, 0.50074398], \n [-0.29887597, 0.19962801]])))\n\n def test_run(self):\n # Test correctness of run method\n network = NeuralNetwork(3, 2, 1, 0.5)\n network.weights_input_to_hidden = test_w_i_h.copy()\n network.weights_hidden_to_output = test_w_h_o.copy()\n #print('network.run(inputs)')\n #print(network.run(inputs))\n self.assertTrue(np.allclose(network.run(inputs), 0.09998924))\n\nsuite = unittest.TestLoader().loadTestsFromModule(TestMethods())\nunittest.TextTestRunner().run(suite)",
"Training the network\nHere you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.\nYou'll also be using a method know as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.\nChoose the number of iterations\nThis is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, if you use too many iterations, then the model with not generalize well to other data, this is called overfitting. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. As you start overfitting, you'll see the training loss continue to decrease while the validation loss starts to increase.\nChoose the learning rate\nThis scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.\nChoose the number of hidden nodes\nThe more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.",
"import sys\n\n### Set the hyperparameters here ###\niterations = 2000#100\nlearning_rate = 0.08#0.1\nhidden_nodes = 14\noutput_nodes = 1\n\nN_i = train_features.shape[1]\nnetwork = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)\n\nlosses = {'train':[], 'validation':[]}\nfor ii in range(iterations):\n # Go through a random batch of 128 records from the training data set\n batch = np.random.choice(train_features.index, size=128)\n X, y = train_features.ix[batch].values, train_targets.ix[batch]['cnt']\n network.train(X, y)\n \n # Printing out the training progress\n train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values)\n val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values)\n sys.stdout.write(\"\\rProgress: {:2.1f}\".format(100 * ii/float(iterations)) \\\n + \"% ... Training loss: \" + str(train_loss)[:5] \\\n + \" ... Validation loss: \" + str(val_loss)[:5])\n sys.stdout.flush()\n \n losses['train'].append(train_loss)\n losses['validation'].append(val_loss)",
"iterations = 2000\nhidden_nodes = 6\nTraining loss: 0.289 ... Validation loss: 0.459\nhidden_nodes = 12\nProgress: 100.0% ... Training loss: 0.295 ... Validation loss: 0.460\nhidden_nodes = 14\nTraining loss: 0.278 ... Validation loss: 0.447\nhidden_nodes = 16\nTraining loss: 0.284 ... Validation loss: 0.451\nhidden_nodes = 18\nTraining loss: 0.294 ... Validation loss: 0.457\nhidden_nodes = 20\nTraining loss: 0.279 ... Validation loss: 0.445\nhidden_nodes = 28\nTraining loss: 0.295 ... Validation loss: 0.464",
"plt.plot(losses['train'], label='Training loss')\nplt.plot(losses['validation'], label='Validation loss')\nplt.legend()\n_ = plt.ylim()",
"Check out your predictions\nHere, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.",
"fig, ax = plt.subplots(figsize=(8,4))\n\nmean, std = scaled_features['cnt']\npredictions = network.run(test_features).T*std + mean\nax.plot(predictions[0], label='Prediction')\nax.plot((test_targets['cnt']*std + mean).values, label='Data')\nax.set_xlim(right=len(predictions))\nax.legend()\n\ndates = pd.to_datetime(rides.ix[test_data.index]['dteday'])\ndates = dates.apply(lambda d: d.strftime('%b %d'))\nax.set_xticks(np.arange(len(dates))[12::24])\n_ = ax.set_xticklabels(dates[12::24], rotation=45)",
"OPTIONAL: Thinking about your results(this question will not be evaluated in the rubric).\nAnswer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does?\n\nNote: You can edit the text in this cell by double clicking on it. When you want to render the text, press control + enter\n\nYour answer below\nIt seems like the week of holidays has begun. Our features does not contain such information."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
mavillan/SciProg
|
02_numpy/02_actividad.ipynb
|
gpl-3.0
|
[
"<h1 align=\"center\">Scientific Programming in Python</h1>\n<h2 align=\"center\">Topic 2: NumPy and Efficient Numerical Programming</h2>\n\nNotebook created by Martín Villanueva - martin.villanueva@usm.cl - DI UTFSM - April 2017.",
"import numpy as np\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.axes_grid1 import make_axes_locatable\n\ndef image_plot(data, title='FITS image'):\n plt.figure(figsize=(10,10))\n im = plt.imshow(data, cmap=plt.cm.afmhot, interpolation=None)\n plt.title(title)\n #plt.axis('off')\n divider = make_axes_locatable(plt.gca())\n cax = divider.append_axes(\"right\", size=\"5%\", pad=0.05)\n plt.colorbar(im, cax=cax)\n plt.show()\n\n# first we load the data:\ndata = np.load(\"orion.npy\")\n\nimage_plot(data)",
"Paso 1)\nCalcular el RMS de la imagen entregada. \n$$RMS = \\sqrt{\\frac{1}{m\\ n} \\sum_{i=1}^m \\sum_{j=1}^n \\texttt{data[i,j]}^2}$$\nNota: Computarlo de forma vectorizada.\nPaso 2)\nGenere otro arreglo donde los pixeles con intensidades por debajo del RMS son considerados como no usables (con valor =0). Mostrar tal imagen resultante.\nPaso 3)\nCrear la función\npython\ndef apply_filter(data, mask, kernel_filter):\n ...\n return None\nQue reciba el arreglo de datos completo data, el arreglo booleano con los pixeles usables mask (sobre el RMS), y kernel de filtro de 3x3. La función debe convolucionar filter sobre la imagen data, sólo en los pixeles usables. La función no debe retornar nada, pero debe modificar data de forma in place.\nFinalmente mostrar el resultado de convolucionar tal filtro en data (mostrar imágen).\nNota: Debe usar siempre que pueda instrucciones vectorizadas y operaciones inplace.\nImage convolution: https://en.wikipedia.org/wiki/Kernel_(image_processing)#Convolution",
"# Gaussian blur filter: Ocupar este filtro!\nkernel_filter = 1./16. * np.array([[1,2,1], [2,4,2], [1,2,1]])\nprint(kernel_filter)",
"Solución\nPaso 1)",
"rms = np.sqrt( 1./(data.shape[0]*data.shape[1]) * np.sum(data**2) )",
"Paso 2)",
"_data = np.copy(data)\nmask = _data<rms\n_data[mask] = 0.\n\nimage_plot(_data)",
"Paso 3)",
"def apply_filter(data, mask, kernel_filter):\n m,n = data.shape\n _data = data.copy()\n for i in range(m):\n for j in range(n):\n if not mask[i,j]: continue\n data[i,j] = np.sum(_data[i-1:i+2,j-1:j+2]*kernel_filter)\n\napply_filter(data, ~mask, kernel_filter)\n\nimage_plot(data)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
GoogleCloudPlatform/asl-ml-immersion
|
notebooks/feature_engineering/solutions/1_bqml_basic_feat_eng.ipynb
|
apache-2.0
|
[
"Basic Feature Engineering in BQML\nLearning Objectives\n\nCreate SQL statements to evaluate the model\nExtract temporal features\nPerform a feature cross on temporal features\n\nOverview\nIn this lab, we utilize feature engineering to improve the prediction of the fare amount for a taxi ride in New York City. We will use BigQuery ML to build a taxifare prediction model, using feature engineering to improve and create a final model.\nIn this Notebook we set up the environment, create the project dataset, create a feature engineering table, create and evaluate a baseline model, extract temporal features, perform a feature cross on temporal features, and evaluate model performance throughout the process.",
"PROJECT = !gcloud config get-value project\nPROJECT = PROJECT[0]\n\n%env PROJECT=$PROJECT",
"The source dataset\nOur dataset is hosted in BigQuery. The taxi fare data is a publically available dataset, meaning anyone with a GCP account has access. Click here to access the dataset.\nThe Taxi Fare dataset is relatively large at 55 million training rows, but simple to understand, with only six features. The fare_amount is the target, the continuous value we’ll train a model to predict.\nCreate a BigQuery Dataset\nA BigQuery dataset is a container for tables, views, and models built with BigQuery ML. Let's create one called feat_eng if we have not already done so in an earlier lab. We'll do the same for a GCS bucket for our project too.",
"%%bash\n# Create a BigQuery dataset for feat_eng if it doesn't exist\ndatasetexists=$(bq ls -d | grep -w feat_eng)\n\nif [ -n \"$datasetexists\" ]; then\n echo -e \"BigQuery dataset already exists, let's not recreate it.\"\n\nelse\n echo \"Creating BigQuery dataset titled: feat_eng\"\n \n bq --location=US mk --dataset \\\n --description 'Taxi Fare' \\\n $PROJECT:feat_eng\n echo \"Here are your current datasets:\"\n bq ls\nfi ",
"Create the training data table\nSince there is already a publicly available dataset, we can simply create the training data table using this raw input data. Note the WHERE clause in the below query: This clause allows us to TRAIN a portion of the data (e.g. one hundred thousand rows versus one million rows), which keeps your query costs down. If you need a refresher on using MOD() for repeatable splits see this post. \n\nNote: The dataset in the create table code below is the one created previously, e.g. feat_eng. The table name is feateng_training_data. Run the query to create the table.",
"%%bigquery \n\nCREATE OR REPLACE TABLE\n feat_eng.feateng_training_data AS\nSELECT\n (tolls_amount + fare_amount) AS fare_amount,\n passenger_count*1.0 AS passengers,\n pickup_datetime,\n pickup_longitude AS pickuplon,\n pickup_latitude AS pickuplat,\n dropoff_longitude AS dropofflon,\n dropoff_latitude AS dropofflat\nFROM\n `nyc-tlc.yellow.trips`\nWHERE\n MOD(ABS(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING))), 10000) = 1\n AND fare_amount >= 2.5\n AND passenger_count > 0\n AND pickup_longitude > -78\n AND pickup_longitude < -70\n AND dropoff_longitude > -78\n AND dropoff_longitude < -70\n AND pickup_latitude > 37\n AND pickup_latitude < 45\n AND dropoff_latitude > 37\n AND dropoff_latitude < 45",
"Verify table creation\nVerify that you created the dataset.",
"%%bigquery\n\n# LIMIT 0 is a free query; this allows us to check that the table exists.\nSELECT\n*\nFROM\n feat_eng.feateng_training_data\nLIMIT\n 0",
"Baseline Model: Create the baseline model\nNext, you create a linear regression baseline model with no feature engineering. Recall that a model in BigQuery ML represents what an ML system has learned from the training data. A baseline model is a solution to a problem without applying any machine learning techniques. \nWhen creating a BQML model, you must specify the model type (in our case linear regression) and the input label (fare_amount). Note also that we are using the training data table as the data source.\nNow we create the SQL statement to create the baseline model.",
"%%bigquery\n\nCREATE OR REPLACE MODEL\n feat_eng.baseline_model OPTIONS (model_type='linear_reg',\n input_label_cols=['fare_amount']) AS\nSELECT\n fare_amount,\n passengers,\n pickup_datetime,\n pickuplon,\n pickuplat,\n dropofflon,\n dropofflat\nFROM\n feat_eng.feateng_training_data",
"Note, the query takes several minutes to complete. After the first iteration is complete, your model (baseline_model) appears in the navigation panel of the BigQuery web UI. Because the query uses a CREATE MODEL statement to create a model, you do not see query results.\nYou can observe the model as it's being trained by viewing the Model stats tab in the BigQuery web UI. As soon as the first iteration completes, the tab is updated. The stats continue to update as each iteration completes.\nOnce the training is done, visit the BigQuery Cloud Console and look at the model that has been trained. Then, come back to this notebook.\nEvaluate the baseline model\nNote that BigQuery automatically split the data we gave it, and trained on only a part of the data and used the rest for evaluation. After creating your model, you evaluate the performance of the regressor using the ML.EVALUATE function. The ML.EVALUATE function evaluates the predicted values against the actual data.\nNOTE: The results are also displayed in the BigQuery Cloud Console under the Evaluation tab.\nReview the learning and eval statistics for the baseline_model.",
"%%bigquery\n\n# Eval statistics on the held out data.\nSELECT\n *,\n SQRT(loss) AS rmse\nFROM\n ML.TRAINING_INFO(MODEL feat_eng.baseline_model)\n\n%%bigquery\n\nSELECT\n *\nFROM\n ML.EVALUATE(MODEL feat_eng.baseline_model)",
"NOTE: Because you performed a linear regression, the results include the following columns:\n\nmean_absolute_error\nmean_squared_error\nmean_squared_log_error\nmedian_absolute_error\nr2_score\nexplained_variance\n\nResource for an explanation of the Regression Metrics.\nMean squared error (MSE) - Measures the difference between the values our model predicted using the test set and the actual values. You can also think of it as the distance between your regression (best fit) line and the predicted values. \nRoot mean squared error (RMSE) - The primary evaluation metric for this ML problem is the root mean-squared error. RMSE measures the difference between the predictions of a model, and the observed values. A large RMSE is equivalent to a large average error, so smaller values of RMSE are better. One nice property of RMSE is that the error is given in the units being measured, so you can tell very directly how incorrect the model might be on unseen data.\nR2: An important metric in the evaluation results is the R2 score. The R2 score is a statistical measure that determines if the linear regression predictions approximate the actual data. Zero (0) indicates that the model explains none of the variability of the response data around the mean. One (1) indicates that the model explains all the variability of the response data around the mean.\nNext, we write a SQL query to take the SQRT() of the mean squared error as your loss metric for evaluation for the benchmark_model.",
"%%bigquery\n\nSELECT\n SQRT(mean_squared_error) AS rmse\nFROM\n ML.EVALUATE(MODEL feat_eng.baseline_model)",
"Model 1: EXTRACT dayofweek from the pickup_datetime feature.\n\n\nAs you recall, dayofweek is an enum representing the 7 days of the week. This factory allows the enum to be obtained from the int value. The int value follows the ISO-8601 standard, from 1 (Monday) to 7 (Sunday).\n\n\nIf you were to extract the dayofweek from pickup_datetime using BigQuery SQL, the dataype returned would be integer.\n\n\nNext, we create a model titled \"model_1\" from the benchmark model and extract out the DayofWeek.",
"%%bigquery\n\nCREATE OR REPLACE MODEL\n feat_eng.model_1 OPTIONS (model_type='linear_reg',\n input_label_cols=['fare_amount']) AS\nSELECT\n fare_amount,\n passengers,\n pickup_datetime,\n EXTRACT(DAYOFWEEK\n FROM\n pickup_datetime) AS dayofweek,\n pickuplon,\n pickuplat,\n dropofflon,\n dropofflat\nFROM\n feat_eng.feateng_training_data",
"Once the training is done, visit the BigQuery Cloud Console and look at the model that has been trained. Then, come back to this notebook.\nNext, two distinct SQL statements show the TRAINING and EVALUATION metrics of model_1.",
"%%bigquery\n\nSELECT\n *,\n SQRT(loss) AS rmse\nFROM\n ML.TRAINING_INFO(MODEL feat_eng.model_1)\n\n%%bigquery\n\nSELECT\n *\nFROM\n ML.EVALUATE(MODEL feat_eng.model_1)",
"Here we run a SQL query to take the SQRT() of the mean squared error as your loss metric for evaluation for the benchmark_model.",
"%%bigquery\n\nSELECT\n SQRT(mean_squared_error) AS rmse\nFROM\n ML.EVALUATE(MODEL feat_eng.model_1)",
"Model 2: EXTRACT hourofday from the pickup_datetime feature\nAs you recall, pickup_datetime is stored as a TIMESTAMP, where the Timestamp format is retrieved in the standard output format – year-month-day hour:minute:second (e.g. 2016-01-01 23:59:59). Hourofday returns the integer number representing the hour number of the given date.\nHourofday is best thought of as a discrete ordinal variable (and not a categorical feature), as the hours can be ranked (e.g. there is a natural ordering of the values). Hourofday has an added characteristic of being cyclic, since 12am follows 11pm and precedes 1am.\nNext, we create a model titled \"model_2\" and EXTRACT the hourofday from the pickup_datetime feature to improve our model's rmse.",
"%%bigquery\n\nCREATE OR REPLACE MODEL\n feat_eng.model_2 OPTIONS (model_type='linear_reg',\n input_label_cols=['fare_amount']) AS\nSELECT\n fare_amount,\n passengers,\n #pickup_datetime,\n EXTRACT(DAYOFWEEK\n FROM\n pickup_datetime) AS dayofweek,\n EXTRACT(HOUR\n FROM\n pickup_datetime) AS hourofday,\n pickuplon,\n pickuplat,\n dropofflon,\n dropofflat\nFROM\n `feat_eng.feateng_training_data`\n\n%%bigquery\n\nSELECT\n *\nFROM\n ML.EVALUATE(MODEL feat_eng.model_2)\n\n%%bigquery\n\nSELECT\n SQRT(mean_squared_error) AS rmse\nFROM\n ML.EVALUATE(MODEL feat_eng.model_2)",
"Model 3: Feature cross dayofweek and hourofday using CONCAT\nFirst, let’s allow the model to learn traffic patterns by creating a new feature that combines the time of day and day of week (this is called a feature cross. \nNote: BQML by default assumes that numbers are numeric features, and strings are categorical features. We need to convert both the dayofweek and hourofday features to strings because the model (Neural Network) will automatically treat any integer as a numerical value rather than a categorical value. Thus, if not cast as a string, the dayofweek feature will be interpreted as numeric values (e.g. 1,2,3,4,5,6,7) and hour ofday will also be interpreted as numeric values (e.g. the day begins at midnight, 00:00, and the last minute of the day begins at 23:59 and ends at 24:00). As such, there is no way to distinguish the \"feature cross\" of hourofday and dayofweek \"numerically\". Casting the dayofweek and hourofday as strings ensures that each element will be treated like a label and will get its own coefficient associated with it.\nCreate the SQL statement to feature cross the dayofweek and hourofday using the CONCAT function. Name the model \"model_3\"",
"%%bigquery\n\nCREATE OR REPLACE MODEL\n feat_eng.model_3 OPTIONS (model_type='linear_reg',\n input_label_cols=['fare_amount']) AS\nSELECT\n fare_amount,\n passengers,\n #pickup_datetime,\n #EXTRACT(DAYOFWEEK FROM pickup_datetime) AS dayofweek,\n #EXTRACT(HOUR FROM pickup_datetime) AS hourofday,\n CONCAT(CAST(EXTRACT(DAYOFWEEK\n FROM\n pickup_datetime) AS STRING), CAST(EXTRACT(HOUR\n FROM\n pickup_datetime) AS STRING)) AS hourofday,\n pickuplon,\n pickuplat,\n dropofflon,\n dropofflat\nFROM\n `feat_eng.feateng_training_data`\n\n%%bigquery\n\nSELECT\n *\nFROM\n ML.EVALUATE(MODEL feat_eng.model_3)\n\n%%bigquery\n\nSELECT\n SQRT(mean_squared_error) AS rmse\nFROM\n ML.EVALUATE(MODEL feat_eng.model_3)",
"Optional: Create a RMSE summary table to evaluate model performance.\n| Model | Taxi Fare | Description |\n|----------------|-----------|----------------------------------------------|\n| baseline_model | 8.62 | Baseline model - no feature engineering |\n| model_1 | 9.43 | EXTRACT dayofweek from the pickup_datetime |\n| model_2 | 8.40 | EXTRACT hourofday from the pickup_datetime |\n| model_3 | 8.32 | FEATURE CROSS hourofday and dayofweek |\nCopyright 2021 Google Inc.\nLicensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at\nhttp://www.apache.org/licenses/LICENSE-2.0\nUnless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
pkimes/benchmark-fdr
|
datasets/microbiome/explore_schubert_sb_results.ipynb
|
mit
|
[
"The purpose of this notebook is to investigate the Schubert results. Specifically, it looks like 100% of tests are being rejected by qvalue at an alpha cutoff of 0.1. That seems fishy, let's investigate!",
"import pandas as pd\nimport numpy as np\n\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n%matplotlib inline\n\nsns.set_style('white')\n\nfpvals = 'schubert-sb-table.txt'\nfotu = 'data/cdi_schubert_results/RDP/cdi_schubert.otu_table.100.denovo.rdp_assigned'\nfmeta = 'data/cdi_schubert_results/cdi_schubert.metadata.txt'\n\npvals = pd.read_csv(fpvals, sep=' ')\npvals.index = pvals['otu']\ndf = pd.read_csv(fotu, sep='\\t', index_col=0).T\nabundf = df.divide(df.sum(axis=1), axis=0)\nmeta = pd.read_csv(fmeta, sep='\\t', index_col=0)\n\npvals.head()\n\nplt.hist(pvals['unadjusted'])\n\ndf.head()\n\nmeta.head()\n\npvals.columns\n\npvals.sort_values(by='qvalue', ascending=False)[['pval', 'qvalue', 'ubiquity']].head(15)",
"It looks like OTUs with an uncorrected pvalue of 0.9 get smushed down to 0.08 with qvalue - this seems fishy!\nLet's plot a few of these bugs and see if we can understand what's happening with the covariate...",
"# Tidyfy the OTU table\ndf.index.name = 'sample'\ntidydf = df.reset_index().melt(id_vars='sample', var_name='otu', value_name='abun')\n# Add disease state\ntidydf = tidydf.join(meta['DiseaseState'], on='sample')\ntidydf.head()\n\notus = pvals.sort_values(by='qvalue', ascending=False).index[0:12].tolist()\n\nfig, ax = plt.subplots(3, 4, figsize=(14,12))\nax = ax.flatten()\n\nfor i in range(len(ax)):\n o = otus[i]\n sns.stripplot(\n data=tidydf.query('otu == @o'), \n x='DiseaseState', y='abun', \n ax=ax[i],\n jitter=True)\n \n\n# Tidyfy the realtive abundance OTU table\nabundf.index.name = 'sample'\ntidyabundf = abundf.reset_index().melt(id_vars='sample', var_name='otu', value_name='abun')\n# Add disease state\ntidyabundf = tidyabundf.join(meta['DiseaseState'], on='sample')\n\nfig, ax = plt.subplots(3, 4, figsize=(14,12))\nax = ax.flatten()\n\nfor i in range(len(ax)):\n o = otus[i]\n sns.stripplot(\n data=tidyabundf.query('otu == @o'), \n x='DiseaseState', y='abun', \n ax=ax[i],\n jitter=True)\n",
"So these are, for the most part, singletons, maybe?\nLet's check that I calculated the ubiquity correctly here..",
"kept_otus = pvals['otu'].tolist()\ndf_fromR = df.loc[:, kept_otus]\nkeep_dis = ['H', 'CDI', 'nonCDI']\ndf_fromR = df_fromR.loc[meta.query('DiseaseState == @keep_dis').index, :]\ndf_fromR.shape\n\n# Hm, okay - maybe my problems are coming from\ndf_fromR.shape, df_fromR.dropna().shape\n\n# Recalculate ubiquity with python\n\nfig, ax = plt.subplots(2, 2, figsize=(10, 8))\nax = ax.flatten()\n\nax[0].hist((df_fromR.dropna() > 0).sum() / df_fromR.dropna().shape[0])\nax[0].set_title('Ubiquity calculated from python')\nax[1].hist(pvals['ubiquity'])\nax[1].set_title('Ubiquity calculated from R')\n\nax[2].hist(np.log10((df_fromR.dropna() > 0).sum() / df_fromR.dropna().shape[0]))\nax[2].set_title('Ubiquity calculated from python')\nax[3].hist(np.log10(pvals['ubiquity']))\nax[3].set_title('Ubiquity calculated from R')",
"Hm, ok - so it looks like they're the same. I didn't mess up the ubiquity calculation...\nIt looks like the least significant bugs (which get moved down to below q = 0.1) are in the lower covariate group - they have ubiquity = 0.02 - 0.03. I want to look back at the covariate boxplots and see if this makes sense, I guess...?",
"np.log10(0.02)",
"Uncorrected pvalue vs. corrected pvalue?",
"pvals.columns\n\npcols = [u'bonf', u'bh',\n u'qvalue', u'ihw-a10', \n u'bl-df03', u'lfdr']\n\nfig, ax = plt.subplots(2, 3, figsize=(14, 8))\nax = ax.flatten()\n\ni = 0\nfor c in pcols:\n ax[i].scatter(pvals['unadjusted'], pvals[c])\n ax[i].set_title(c)\n\n if i > 2:\n ax[i].set_xlabel('unadjusted')\n if i in [0, 3]:\n ax[i].set_ylabel('corrected')\n \n i += 1\n",
"Y-axis is the corrected qvalue (specified in subplot title), x-axis is the original pvalue\nHow does qvalue compare to the other more methods?\nSpecifically, the methods that have more reasonable results...",
"fig, ax = plt.subplots(1, 4, figsize=(14, 4))\n\npcols = ['unadjusted', 'ihw-a10', 'bl-df03', 'lfdr']\n\ni = 0\nfor c in pcols:\n ax[i].scatter(pvals['qvalue'], pvals[c])\n ax[i].set_title(c)\n ax[i].axhline(0.1)\n ax[i].set_xlabel('qvalue')\n i += 1\n\n\npvals.columns",
"How does effect size correlate with pvalue and adjusted pvals?",
"pcols = ['qvalue', 'unadjusted', 'ihw-a10', 'bl-df03', 'lfdr']\n\nfig, ax = plt.subplots(1, len(pcols), figsize=(15, 4))\n\ni = 0\nfor c in pcols:\n ax[i].scatter(pvals['effect_size'], pvals[c])\n ax[i].set_title(c)\n ax[i].set_xlabel('effect size')\n i += 1\n",
"And covariate?",
"fig, ax = plt.subplots(1, len(pcols), figsize=(15, 4))\n\ni = 0\nfor c in pcols:\n ax[i].scatter(pvals['ubiquity'], -np.log10(pvals[c]))\n ax[i].set_title(c)\n ax[i].set_xlabel('ubiquity')\n #ax[i]\n i += 1\nprint('Ubiquity vs log10(pvalue)')\n\nimport numpy as np\n\nfig, ax = plt.subplots(1, len(pcols), figsize=(15, 4))\n\ni = 0\nfor c in pcols:\n ax[i].scatter(pvals['ubiquity'].rank(), pvals[c])\n ax[i].set_title(c)\n ax[i].set_xlabel('ubiquity')\n #ax[i]\n i += 1\nprint('Rank ubiquity vs pvalue')"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
dariox2/CADL
|
session-4/session-4.ipynb
|
apache-2.0
|
[
"Session 4: Visualizing Representations\nAssignment: Deep Dream and Style Net\n<p class='lead'>\nCreative Applications of Deep Learning with Google's Tensorflow \nParag K. Mital \nKadenze, Inc.\n</p>\n\nOverview\nIn this homework, we'll first walk through visualizing the gradients of a trained convolutional network. Recall from the last session that we had trained a variational convolutional autoencoder. We also trained a deep convolutional network. In both of these networks, we learned only a few tools for understanding how the model performs. These included measuring the loss of the network and visualizing the W weight matrices and/or convolutional filters of the network.\nDuring the lecture we saw how to visualize the gradients of Inception, Google's state of the art network for object recognition. This resulted in a much more powerful technique for understanding how a network's activations transform or accentuate the representations in the input space. We'll explore this more in Part 1.\nWe also explored how to use the gradients of a particular layer or neuron within a network with respect to its input for performing \"gradient ascent\". This resulted in Deep Dream. We'll explore this more in Parts 2-4.\nWe also saw how the gradients at different layers of a convolutional network could be optimized for another image, resulting in the separation of content and style losses, depending on the chosen layers. This allowed us to synthesize new images that shared another image's content and/or style, even if they came from separate images. We'll explore this more in Part 5.\nFinally, you'll packaged all the GIFs you create throughout this notebook and upload them to Kadenze.\n<a name=\"learning-goals\"></a>\nLearning Goals\n\nLearn how to inspect deep networks by visualizing their gradients\nLearn how to \"deep dream\" with different objective functions and regularization techniques\nLearn how to \"stylize\" an image using content and style losses from different images\n\nTable of Contents\n<!-- MarkdownTOC autolink=true autoanchor=true bracket=round -->\n\n\nPart 1 - Pretrained Networks\nGraph Definition\nPreprocess/Deprocessing\nTensorboard\nA Note on 1x1 Convolutions\nNetwork Labels\nUsing Context Managers\nPart 2 - Visualizing Gradients\nPart 3 - Basic Deep Dream\nPart 4 - Deep Dream Extensions\nUsing the Softmax Layer\nFractal\nGuided Hallucinations\nFurther Explorations\nPart 5 - Style Net\nNetwork\nContent Features\nStyle Features\nRemapping the Input\nContent Loss\nStyle Loss\nTotal Variation Loss\nTraining\nAssignment Submission\n\n<!-- /MarkdownTOC -->",
"# First check the Python version\nimport sys\nif sys.version_info < (3,4):\n print('You are running an older version of Python!\\n\\n',\n 'You should consider updating to Python 3.4.0 or',\n 'higher as the libraries built for this course',\n 'have only been tested in Python 3.4 and higher.\\n')\n print('Try installing the Python 3.5 version of anaconda'\n 'and then restart `jupyter notebook`:\\n',\n 'https://www.continuum.io/downloads\\n\\n')\n\n# Now get necessary libraries\ntry:\n import os\n import numpy as np\n import matplotlib.pyplot as plt\n from skimage.transform import resize\n from skimage import data\n from scipy.misc import imresize\n from scipy.ndimage.filters import gaussian_filter\n import IPython.display as ipyd\n import tensorflow as tf\n from libs import utils, gif, datasets, dataset_utils, vae, dft, vgg16, nb_utils\nexcept ImportError:\n print(\"Make sure you have started notebook in the same directory\",\n \"as the provided zip file which includes the 'libs' folder\",\n \"and the file 'utils.py' inside of it. You will NOT be able\",\n \"to complete this assignment unless you restart jupyter\",\n \"notebook inside the directory created by extracting\",\n \"the zip file or cloning the github repo. If you are still\")\n\n# We'll tell matplotlib to inline any drawn figures like so:\n%matplotlib inline\nplt.style.use('ggplot')\n\n# Bit of formatting because I don't like the default inline code style:\nfrom IPython.core.display import HTML\nHTML(\"\"\"<style> .rendered_html code { \n padding: 2px 4px;\n color: #c7254e;\n background-color: #f9f2f4;\n border-radius: 4px;\n} </style>\"\"\")",
"<a name=\"part-1---pretrained-networks\"></a>\nPart 1 - Pretrained Networks\nIn the libs module, you'll see that I've included a few modules for loading some state of the art networks. These include:\n\nInception v3\nThis network has been trained on ImageNet and its finaly output layer is a softmax layer denoting 1 of 1000 possible objects (+ 8 for unknown categories). This network is about only 50MB!\n\n\nInception v5\nThis network has been trained on ImageNet and its finaly output layer is a softmax layer denoting 1 of 1000 possible objects (+ 8 for unknown categories). This network is about only 50MB! It presents a few extensions to v5 which are not documented anywhere that I've found, as of yet...\n\n\nVisual Group Geometry @ Oxford's 16 layer\nThis network has been trained on ImageNet and its finaly output layer is a softmax layer denoting 1 of 1000 possible objects. This model is nearly half a gigabyte, about 10x larger in size than the inception network. The trade off is that it is very fast.\n\n\nVisual Group Geometry @ Oxford's Face Recognition\nThis network has been trained on the VGG Face Dataset and its final output layer is a softmax layer denoting 1 of 2622 different possible people.\n\n\nIllustration2Vec\nThis network has been trained on illustrations and manga and its final output layer is 4096 features.\n\n\nIllustration2Vec Tag\nPlease do not use this network if you are under the age of 18 (seriously!)\nThis network has been trained on manga and its final output layer is one of 1539 labels.\n\n\n\nWhen we use a pre-trained network, we load a network's definition and its weights which have already been trained. The network's definition includes a set of operations such as convolutions, and adding biases, but all of their values, i.e. the weights, have already been trained. \n<a name=\"graph-definition\"></a>\nGraph Definition\nIn the libs folder, you will see a few new modules for loading the above pre-trained networks. Each module is structured similarly to help you understand how they are loaded and include example code for using them. Each module includes a preprocess function for using before sending the image to the network. And when using deep dream techniques, we'll be using the deprocess function to undo the preprocess function's manipulations.\nLet's take a look at loading one of these. Every network except for i2v includes a key 'labels' denoting what labels the network has been trained on. If you are under the age of 18, please do not use the i2v_tag model, as its labels are unsuitable for minors.\nLet's load the libaries for the different pre-trained networks:",
"from libs import vgg16, inception, i2v",
"Now we can load a pre-trained network's graph and any labels. Explore the different networks in your own time.\n<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>",
"# Stick w/ Inception for now, and then after you see how\n# the next few sections work w/ this network, come back\n# and explore the other networks.\n\nnet = inception.get_inception_model(version='v5')\n# net = inception.get_inception_model(version='v3')\n# net = vgg16.get_vgg_model()\n# net = vgg16.get_vgg_face_model()\n# net = i2v.get_i2v_model()\n# net = i2v.get_i2v_tag_model()",
"Each network returns a dictionary with the following keys defined. Every network has a key for \"labels\" except for \"i2v\", since this is a feature only network, e.g. an unsupervised network, and does not have labels.",
"print(net.keys())",
"<a name=\"preprocessdeprocessing\"></a>\nPreprocess/Deprocessing\nEach network has a preprocessing/deprocessing function which we'll use before sending the input to the network. This preprocessing function is slightly different for each network. Recall from the previous sessions what preprocess we had done before sending an image to a network. We would often normalize the input by subtracting the mean and dividing by the standard deviation. We'd also crop/resize the input to a standard size. We'll need to do this for each network except for the Inception network, which is a true convolutional network and does not require us to do this (will be explained in more depth later). \nWhenever we preprocess the image, and want to visualize the result of adding back the gradient to the input image (when we use deep dream), we'll need to use the deprocess function stored in the dictionary. Let's explore how these work. We'll confirm this is performing the inverse operation, let's try to preprocess the image, then I'll have you try to deprocess it.",
"# First, let's get an image:\nog = plt.imread('clinton.png')[..., :3]\nplt.imshow(og)\nprint(og.min(), og.max())",
"Let's now try preprocessing this image. The function for preprocessing is inside the module we used to load it. For instance, for vgg16, we can find the preprocess function as vgg16.preprocess, or for inception, inception.preprocess, or for i2v, i2v.preprocess. Or, we can just use the key preprocess in our dictionary net, as this is just convenience for us to access the corresponding preprocess function.",
"# Now call the preprocess function. This will preprocess our\n# image ready for being input to the network, except for changes\n# to the dimensions. I.e., we will still need to convert this\n# to a 4-dimensional Tensor once we input it to the network.\n# We'll see how that works later.\nimg = net['preprocess'](og)\nprint(img.min(), img.max())",
"Let's undo the preprocessing. Recall that the net dictionary has the key deprocess which is the function we need to use on our processed image, img.\n<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>",
"deprocessed = ...\nplt.imshow(deprocessed)\nplt.show()",
"<a name=\"tensorboard\"></a>\nTensorboard\nI've added a utility module called nb_utils which includes a function show_graph. This will use Tensorboard to draw the computational graph defined by the various Tensorflow functions. I didn't go over this during the lecture because there just wasn't enough time! But explore it in your own time if it interests you, as it is a really unique tool which allows you to monitor your network's training progress via a web interface. It even lets you monitor specific variables or processes within the network, e.g. the reconstruction of an autoencoder, without having to print to the console as we've been doing. We'll just be using it to draw the pretrained network's graphs using the utility function I've given you.\nBe sure to interact with the graph and click on the various modules.\nFor instance, if you've loaded the inception v5 network, locate the \"input\" to the network. This is where we feed the image, the input placeholder (typically what we've been denoting as X in our own networks). From there, it goes to the \"conv2d0\" variable scope (i.e. this uses the code: with tf.variable_scope(\"conv2d0\") to create a set of operations with the prefix \"conv2d0/\". If you expand this scope, you'll see another scope, \"pre_relu\". This is created using another tf.variable_scope(\"pre_relu\"), so that any new variables will have the prefix \"conv2d0/pre_relu\". Finally, inside here, you'll see the convolution operation (tf.nn.conv2d) and the 4d weight tensor, \"w\" (e.g. created using tf.get_variable), used for convolution (and so has the name, \"conv2d0/pre_relu/w\". Just after the convolution is the addition of the bias, b. And finally after exiting the \"pre_relu\" scope, you should be able to see the \"conv2d0\" operation which applies the relu nonlinearity. In summary, that region of the graph can be created in Tensorflow like so:\npython\ninput = tf.placeholder(...)\nwith tf.variable_scope('conv2d0'):\n with tf.variable_scope('pre_relu'):\n w = tf.get_variable(...)\n h = tf.nn.conv2d(input, h, ...)\n b = tf.get_variable(...)\n h = tf.nn.bias_add(h, b)\n h = tf.nn.relu(h)",
"nb_utils.show_graph(net['graph_def'])",
"If you open up the \"mixed3a\" node above (double click on it), you'll see the first \"inception\" module. This network encompasses a few advanced concepts that we did not have time to discuss during the lecture, including residual connections, feature concatenation, parallel convolution streams, 1x1 convolutions, and including negative labels in the softmax layer. I'll expand on the 1x1 convolutions here, but please feel free to skip ahead if this isn't of interest to you.\n<a name=\"a-note-on-1x1-convolutions\"></a>\nA Note on 1x1 Convolutions\nThe 1x1 convolutions are setting the ksize parameter of the kernels to 1. This is effectively allowing you to change the number of dimensions. Remember that you need a 4-d tensor as input to a convolution. Let's say its dimensions are $\\text{N}\\ x\\ \\text{W}\\ x\\ \\text{H}\\ x\\ \\text{C}_I$, where $\\text{C}_I$ represents the number of channels the image has. Let's say it is an RGB image, then $\\text{C}_I$ would be 3. Or later in the network, if we have already convolved it, it might be 64 channels instead. Regardless, when you convolve it w/ a $\\text{K}_H\\ x\\ \\text{K}_W\\ x\\ \\text{C}_I\\ x\\ \\text{C}_O$ filter, where $\\text{K}_H$ is 1 and $\\text{K}_W$ is also 1, then the filters size is: $1\\ x\\ 1\\ x\\ \\text{C}_I$ and this is perfomed for each output channel $\\text{C}_O$. What this is doing is filtering the information only in the channels dimension, not the spatial dimensions. The output of this convolution will be a $\\text{N}\\ x\\ \\text{W}\\ x\\ \\text{H}\\ x\\ \\text{C}_O$ output tensor. The only thing that changes in the output is the number of output filters.\nThe 1x1 convolution operation is essentially reducing the amount of information in the channels dimensions before performing a much more expensive operation, e.g. a 3x3 or 5x5 convolution. Effectively, it is a very clever trick for dimensionality reduction used in many state of the art convolutional networks. Another way to look at it is that it is preseving the spatial information, but at each location, there is a fully connected network taking all the information from every input channel, $\\text{C}_I$, and reducing it down to $\\text{C}_O$ channels (or could easily also be up, but that is not the typical use case for this). So it's not really a convolution, but we can use the convolution operation to perform it at every location in our image.\nIf you are interested in reading more about this architecture, I highly encourage you to read Network in Network, Christian Szegedy's work on the Inception network, Highway Networks, Residual Networks, and Ladder Networks.\nIn this course, we'll stick to focusing on the applications of these, while trying to delve as much into the code as possible.\n<a name=\"network-labels\"></a>\nNetwork Labels\nLet's now look at the labels:",
"net['labels']\n\nlabel_i = 851\nprint(net['labels'][label_i])",
"<a name=\"using-context-managers\"></a>\nUsing Context Managers\nUp until now, we've mostly used a single tf.Session within a notebook and didn't give it much thought. Now that we're using some bigger models, we're going to have to be more careful. Using a big model and being careless with our session can result in a lot of unexpected behavior, program crashes, and out of memory errors. The VGG network and the I2V networks are quite large. So we'll need to start being more careful with our sessions using context managers.\nLet's see how this works w/ VGG:\n<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>",
"# Load the VGG network. Scroll back up to where we loaded the inception\n# network if you are unsure. It is inside the \"vgg16\" module...\nnet = ..\n\nassert(net['labels'][0] == (0, 'n01440764 tench, Tinca tinca'))\n\n# Let's explicity use the CPU, since we don't gain anything using the GPU\n# when doing Deep Dream (it's only a single image, benefits come w/ many images).\ndevice = '/cpu:0'\n\n# We'll now explicitly create a graph\ng = tf.Graph()\n\n# And here is a context manager. We use the python \"with\" notation to create a context\n# and create a session that only exists within this indent, as soon as we leave it,\n# the session is automatically closed! We also tel the session which graph to use.\n# We can pass a second context after the comma,\n# which we'll use to be explicit about using the CPU instead of a GPU.\nwith tf.Session(graph=g) as sess, g.device(device):\n \n # Now load the graph_def, which defines operations and their values into `g`\n tf.import_graph_def(net['graph_def'], name='net')\n\n# Now we can get all the operations that belong to the graph `g`:\nnames = [op.name for op in g.get_operations()]\nprint(names)",
"<a name=\"part-2---visualizing-gradients\"></a>\nPart 2 - Visualizing Gradients\nNow that we know how to load a network and extract layers from it, let's grab only the pooling layers:\n<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>",
"# First find all the pooling layers in the network. You can\n# use list comprehension to iterate over all the \"names\" we just\n# created, finding whichever ones have the name \"pool\" in them.\n# Then be sure to append a \":0\" to the names\nfeatures = ...\n\n# Let's print them\nprint(features)\n\n# This is what we want to have at the end. You could just copy this list\n# if you are stuck!\nassert(features == ['net/pool1:0', 'net/pool2:0', 'net/pool3:0', 'net/pool4:0', 'net/pool5:0'])",
"Let's also grab the input layer:\n<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>",
"# Use the function 'get_tensor_by_name' and the 'names' array to help you\n# get the first tensor in the network. Remember you have to add \":0\" to the\n# name to get the output of an operation which is the tensor.\nx = ...\n\nassert(x.name == 'net/images:0')",
"We'll now try to find the gradient activation that maximizes a layer with respect to the input layer x.",
"def plot_gradient(img, x, feature, g, device='/cpu:0'):\n \"\"\"Let's visualize the network's gradient activation\n when backpropagated to the original input image. This\n is effectively telling us which pixels contribute to the\n predicted layer, class, or given neuron with the layer\"\"\"\n \n # We'll be explicit about the graph and the device\n # by using a context manager:\n with tf.Session(graph=g) as sess, g.device(device):\n saliency = tf.gradients(tf.reduce_mean(feature), x)\n this_res = sess.run(saliency[0], feed_dict={x: img})\n grad = this_res[0] / np.max(np.abs(this_res))\n return grad",
"Let's try this w/ an image now. We're going to use the plot_gradient function to help us. This is going to take our input image, run it through the network up to a layer, find the gradient of the mean of that layer's activation with respect to the input image, then backprop that gradient back to the input layer. We'll then visualize the gradient by normalizing it's values using the utils.normalize function.",
"og = plt.imread('clinton.png')[..., :3]\nimg = net['preprocess'](og)[np.newaxis]\n\nfig, axs = plt.subplots(1, len(features), figsize=(20, 10))\n\nfor i in range(len(features)):\n axs[i].set_title(features[i])\n grad = plot_gradient(img, x, g.get_tensor_by_name(features[i]), g)\n axs[i].imshow(utils.normalize(grad))",
"<a name=\"part-3---basic-deep-dream\"></a>\nPart 3 - Basic Deep Dream\nIn the lecture we saw how Deep Dreaming takes the backpropagated gradient activations and simply adds it to the image, running the same process again and again in a loop. We also saw many tricks one can add to this idea, such as infinitely zooming into the image by cropping and scaling, adding jitter by randomly moving the image around, or adding constraints on the total activations.\nHave a look here for inspiration:\nhttps://research.googleblog.com/2015/06/inceptionism-going-deeper-into-neural.html \nhttps://photos.google.com/share/AF1QipPX0SCl7OzWilt9LnuQliattX4OUCj_8EP65_cTVnBmS1jnYgsGQAieQUc1VQWdgQ?key=aVBxWjhwSzg2RjJWLWRuVFBBZEN1d205bUdEMnhB \nhttps://mtyka.github.io/deepdream/2016/02/05/bilateral-class-vis.html\nLet's stick the necessary bits in a function and try exploring how deep dream amplifies the representations of the chosen layers:",
"def dream(img, gradient, step, net, x, n_iterations=50, plot_step=10):\n # Copy the input image as we'll add the gradient to it in a loop\n img_copy = img.copy()\n\n fig, axs = plt.subplots(1, n_iterations // plot_step, figsize=(20, 10))\n\n with tf.Session(graph=g) as sess, g.device(device):\n for it_i in range(n_iterations):\n\n # This will calculate the gradient of the layer we chose with respect to the input image.\n this_res = sess.run(gradient[0], feed_dict={x: img_copy})[0]\n\n # Let's normalize it by the maximum activation\n this_res /= (np.max(np.abs(this_res) + 1e-8))\n \n # Or alternatively, we can normalize by standard deviation\n # this_res /= (np.std(this_res) + 1e-8)\n \n # Or we could use the `utils.normalize function:\n # this_res = utils.normalize(this_res)\n \n # Experiment with all of the above options. They will drastically\n # effect the resulting dream, and really depend on the network\n # you use, and the way the network handles normalization of the\n # input image, and the step size you choose! Lots to explore!\n\n # Then add the gradient back to the input image\n # Think about what this gradient represents?\n # It says what direction we should move our input\n # in order to meet our objective stored in \"gradient\"\n img_copy += this_res * step\n\n # Plot the image\n if (it_i + 1) % plot_step == 0:\n m = net['deprocess'](img_copy[0])\n axs[it_i // plot_step].imshow(m)\n\n# We'll run it for 3 iterations\nn_iterations = 3\n\n# Think of this as our learning rate. This is how much of\n# the gradient we'll add back to the input image\nstep = 1.0\n\n# Every 1 iterations, we'll plot the current deep dream\nplot_step = 1",
"Let's now try running Deep Dream for every feature, each of our 5 pooling layers. We'll need to get the layer corresponding to our feature. Then find the gradient of this layer's mean activation with respect to our input, x. Then pass these to our dream function. This can take awhile (about 10 minutes using the CPU on my Macbook Pro).",
"for feature_i in range(len(features)):\n with tf.Session(graph=g) as sess, g.device(device):\n # Get a feature layer\n layer = g.get_tensor_by_name(features[feature_i])\n\n # Find the gradient of this layer's mean activation\n # with respect to the input image\n gradient = tf.gradients(tf.reduce_mean(layer), x)\n \n # Dream w/ our image\n dream(img, gradient, step, net, x, n_iterations=n_iterations, plot_step=plot_step)",
"Instead of using an image, we can use an image of noise and see how it \"hallucinates\" the representations that the layer most responds to:",
"noise = net['preprocess'](\n np.random.rand(256, 256, 3) * 0.1 + 0.45)[np.newaxis]",
"We'll do the same thing as before, now w/ our noise image:\n<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>",
"for feature_i in range(len(features)):\n with tf.Session(graph=g) as sess, g.device(device):\n # Get a feature layer\n layer = ...\n\n # Find the gradient of this layer's mean activation\n # with respect to the input image\n gradient = ...\n \n # Dream w/ the noise image. Complete this!\n dream(...)",
"<a name=\"part-4---deep-dream-extensions\"></a>\nPart 4 - Deep Dream Extensions\nAs we saw in the lecture, we can also use the final softmax layer of a network to use during deep dream. This allows us to be explicit about the object we want hallucinated in an image.\n<a name=\"using-the-softmax-layer\"></a>\nUsing the Softmax Layer\nLet's get another image to play with, preprocess it, and then make it 4-dimensional.\n<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>",
"# Load your own image here\nog = ...\nplt.imshow(og)\n\n# Preprocess the image and make sure it is 4-dimensional by adding a new axis to the 0th dimension:\nimg = ...\n\nassert(img.ndim == 4)\n\n# Let's get the softmax layer\nprint(names[-2])\nlayer = g.get_tensor_by_name(names[-2] + \":0\")\n\n# And find its shape\nwith tf.Session(graph=g) as sess, g.device(device):\n layer_shape = tf.shape(layer).eval(feed_dict={x:img})\n\n# We can find out how many neurons it has by feeding it an image and\n# calculating the shape. The number of output channels is the last dimension.\nn_els = layer_shape[-1]\n\n# Let's pick a label. First let's print out every label and then find one we like:\nprint(net['labels'])",
"<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>",
"# Pick a neuron. Or pick a random one. This should be 0-n_els\nneuron_i = ...\n\nprint(net['labels'][neuron_i])\nassert(neuron_i >= 0 and neuron_i < n_els)\n\n# And we'll create an activation of this layer which is very close to 0\nlayer_vec = np.ones(layer_shape) / 100.0\n\n# Except for the randomly chosen neuron which will be very close to 1\nlayer_vec[..., neuron_i] = 0.99",
"Let's decide on some parameters of our deep dream. We'll need to decide how many iterations to run for. And we'll plot the result every few iterations, also saving it so that we can produce a GIF. And at every iteration, we need to decide how much to ascend our gradient.\n<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>",
"# Explore different parameters for this section.\nn_iterations = 51\n\nplot_step = 5\n\n# If you use a different network, you will definitely need to experiment\n# with the step size, as each network normalizes the input image differently.\nstep = 0.2",
"Now let's dream. We're going to define a context manager to create a session and use our existing graph, and make sure we use the CPU device, as there is no gain in using GPU, and we have much more CPU memory than GPU memory.",
"imgs = []\nwith tf.Session(graph=g) as sess, g.device(device):\n gradient = tf.gradients(tf.reduce_max(layer), x)\n\n # Copy the input image as we'll add the gradient to it in a loop\n img_copy = img.copy()\n\n with tf.Session(graph=g) as sess, g.device(device):\n for it_i in range(n_iterations):\n\n # This will calculate the gradient of the layer we chose with respect to the input image.\n this_res = sess.run(gradient[0], feed_dict={\n x: img_copy, layer: layer_vec})[0]\n \n # Let's normalize it by the maximum activation\n this_res /= (np.max(np.abs(this_res) + 1e-8))\n \n # Or alternatively, we can normalize by standard deviation\n # this_res /= (np.std(this_res) + 1e-8)\n\n # Then add the gradient back to the input image\n # Think about what this gradient represents?\n # It says what direction we should move our input\n # in order to meet our objective stored in \"gradient\"\n img_copy += this_res * step\n\n # Plot the image\n if (it_i + 1) % plot_step == 0:\n m = net['deprocess'](img_copy[0])\n\n plt.figure(figsize=(5, 5))\n plt.grid('off')\n plt.imshow(m)\n plt.show()\n \n imgs.append(m)\n \n\n# Save the gif\ngif.build_gif(imgs, saveto='softmax.gif')\n\nipyd.Image(url='softmax.gif?i={}'.format(\n np.random.rand()), height=300, width=300)",
"<a name=\"fractal\"></a>\nFractal\nDuring the lecture we also saw a simple trick for creating an infinite fractal: crop the image and then resize it. This can produce some lovely aesthetics and really show some strong object hallucinations if left long enough and with the right parameters for step size/normalization/regularization. Feel free to experiment with the code below, adding your own regularizations as shown in the lecture to produce different results!\n<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>",
"n_iterations = 101\nplot_step = 10\nstep = 0.1\ncrop = 1\nimgs = []\n\nn_imgs, height, width, *ch = img.shape\n\nwith tf.Session(graph=g) as sess, g.device(device):\n # Explore changing the gradient here from max to mean\n # or even try using different concepts we learned about\n # when creating style net, such as using a total variational\n # loss on `x`.\n gradient = tf.gradients(tf.reduce_max(layer), x)\n\n # Copy the input image as we'll add the gradient to it in a loop\n img_copy = img.copy()\n\n with tf.Session(graph=g) as sess, g.device(device):\n for it_i in range(n_iterations):\n\n # This will calculate the gradient of the layer\n # we chose with respect to the input image.\n this_res = sess.run(gradient[0], feed_dict={\n x: img_copy, layer: layer_vec})[0]\n \n # This is just one way we could normalize the\n # gradient. It helps to look at the range of your image's\n # values, e.g. if it is 0 - 1, or -115 to +115,\n # and then consider the best way to normalize the gradient.\n # For some networks, it might not even be necessary\n # to perform this normalization, especially if you\n # leave the dream to run for enough iterations.\n # this_res = this_res / (np.std(this_res) + 1e-10)\n this_res = this_res / (np.max(np.abs(this_res)) + 1e-10)\n\n # Then add the gradient back to the input image\n # Think about what this gradient represents?\n # It says what direction we should move our input\n # in order to meet our objective stored in \"gradient\"\n img_copy += this_res * step\n \n # Optionally, we could apply any number of regularization\n # techniques... Try exploring different ways of regularizing\n # gradient. ascent process. If you are adventurous, you can\n # also explore changing the gradient above using a\n # total variational loss, as we used in the style net\n # implementation during the lecture. I leave that to you\n # as an exercise!\n\n # Crop a 1 pixel border from height and width\n img_copy = img_copy[:, crop:-crop, crop:-crop, :]\n\n # Resize (Note: in the lecture, we used scipy's resize which\n # could not resize images outside of 0-1 range, and so we had\n # to store the image ranges. This is a much simpler resize\n # method that allows us to `preserve_range`.)\n img_copy = resize(img_copy[0], (height, width), order=3,\n clip=False, preserve_range=True\n )[np.newaxis].astype(np.float32)\n\n # Plot the image\n if (it_i + 1) % plot_step == 0:\n m = net['deprocess'](img_copy[0])\n\n plt.grid('off')\n plt.imshow(m)\n plt.show()\n \n imgs.append(m)\n\n# Create a GIF\ngif.build_gif(imgs, saveto='fractal.gif')\n\nipyd.Image(url='fractal.gif?i=2', height=300, width=300)",
"<a name=\"guided-hallucinations\"></a>\nGuided Hallucinations\nInstead of following the gradient of an arbitrary mean or max of a particular layer's activation, or a particular object that we want to synthesize, we can also try to guide our image to look like another image. One way to try this is to take one image, the guide, and find the features at a particular layer or layers. Then, we take our synthesis image and find the gradient which makes it's own layers activations look like the guide image.\n<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>",
"# Replace these with your own images!\nguide_og = plt.imread(...)[..., :3]\ndream_og = plt.imread(...)[..., :3]\n\nassert(guide_og.ndim == 3 and guide_og.shape[-1] == 3)\nassert(dream_og.ndim == 3 and dream_og.shape[-1] == 3)",
"Preprocess both images:",
"guide_img = net['preprocess'](guide_og)[np.newaxis]\ndream_img = net['preprocess'](dream_og)[np.newaxis]\n\nfig, axs = plt.subplots(1, 2, figsize=(7, 4))\naxs[0].imshow(guide_og)\naxs[1].imshow(dream_og)",
"Like w/ Style Net, we are going to measure how similar the features in the guide image are to the dream images. In order to do that, we'll calculate the dot product. Experiment with other measures such as l1 or l2 loss to see how this impacts the resulting Dream!\n<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>",
"x = g.get_tensor_by_name(names[0] + \":0\")\n\n# Experiment with the weighting\nfeature_loss_weight = 1.0\n\nwith tf.Session(graph=g) as sess, g.device(device):\n feature_loss = tf.Variable(0.0)\n \n # Explore different layers/subsets of layers. This is just an example.\n for feature_i in features[3:5]:\n\n # Get the activation of the feature\n layer = g.get_tensor_by_name(feature_i)\n \n # Do the same for our guide image\n guide_layer = sess.run(layer, feed_dict={x: guide_img})\n \n # Now we need to measure how similar they are!\n # We'll use the dot product, which requires us to first reshape both\n # features to a 2D vector. But you should experiment with other ways\n # of measuring similarity such as l1 or l2 loss.\n \n # Reshape each layer to 2D vector\n layer = tf.reshape(layer, [-1, 1])\n guide_layer = guide_layer.reshape(-1, 1)\n \n # Now calculate their dot product\n correlation = tf.matmul(guide_layer.T, layer)\n \n # And weight the loss by a factor so we can control its influence\n feature_loss += feature_loss_weight * correlation",
"We'll now use another measure that we saw when developing Style Net during the lecture. This measure the pixel to pixel difference of neighboring pixels. What we're doing when we try to optimize a gradient that makes the mean differences small is saying, we want the difference to be low. This allows us to smooth our image in the same way that we did using the Gaussian to blur the image. \n<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>",
"n_img, height, width, ch = dream_img.shape\n\n# We'll weight the overall contribution of the total variational loss\n# Experiment with this weighting\ntv_loss_weight = 1.0\n\nwith tf.Session(graph=g) as sess, g.device(device):\n # Penalize variations in neighboring pixels, enforcing smoothness\n dx = tf.square(x[:, :height - 1, :width - 1, :] - x[:, :height - 1, 1:, :])\n dy = tf.square(x[:, :height - 1, :width - 1, :] - x[:, 1:, :width - 1, :])\n \n # We will calculate their difference raised to a power to push smaller\n # differences closer to 0 and larger differences higher.\n # Experiment w/ the power you raise this to to see how it effects the result\n tv_loss = tv_loss_weight * tf.reduce_mean(tf.pow(dx + dy, 1.2))",
"Now we train just like before, except we'll need to combine our two loss terms, feature_loss and tv_loss by simply adding them! The one thing we have to keep in mind is that we want to minimize the tv_loss while maximizing the feature_loss. That means we'll need to use the negative tv_loss and the positive feature_loss. As an experiment, try just optimizing the tv_loss and removing the feature_loss from the tf.gradients call. What happens?\n<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>",
"# Experiment with the step size!\nstep = 0.1\n\nimgs = []\n\nwith tf.Session(graph=g) as sess, g.device(device):\n # Experiment with just optimizing the tv_loss or negative tv_loss to understand what it is doing!\n gradient = tf.gradients(-tv_loss + feature_loss, x)\n\n # Copy the input image as we'll add the gradient to it in a loop\n img_copy = dream_img.copy()\n\n with tf.Session(graph=g) as sess, g.device(device):\n sess.run(tf.initialize_all_variables())\n \n for it_i in range(n_iterations):\n\n # This will calculate the gradient of the layer we chose with respect to the input image.\n this_res = sess.run(gradient[0], feed_dict={x: img_copy})[0]\n \n # Let's normalize it by the maximum activation\n this_res /= (np.max(np.abs(this_res) + 1e-8))\n \n # Or alternatively, we can normalize by standard deviation\n # this_res /= (np.std(this_res) + 1e-8)\n\n # Then add the gradient back to the input image\n # Think about what this gradient represents?\n # It says what direction we should move our input\n # in order to meet our objective stored in \"gradient\"\n img_copy += this_res * step\n\n # Plot the image\n if (it_i + 1) % plot_step == 0:\n m = net['deprocess'](img_copy[0])\n\n plt.figure(figsize=(5, 5))\n plt.grid('off')\n plt.imshow(m)\n plt.show()\n \n imgs.append(m)\n\ngif.build_gif(imgs, saveto='guided.gif') \n\nipyd.Image(url='guided.gif?i=0', height=300, width=300)",
"<a name=\"further-explorations\"></a>\nFurther Explorations\nIn the libs module, I've included a deepdream module which has two functions for performing Deep Dream and the Guided Deep Dream. Feel free to explore these to create your own deep dreams.\n<a name=\"part-5---style-net\"></a>\nPart 5 - Style Net\nWe'll now work on creating our own style net implementation. We've seen all the steps for how to do this during the lecture, and you can always refer to the Lecture Transcript if you need to. I want to you to explore using different networks and different layers in creating your content and style losses. This is completely unexplored territory so it can be frustrating to find things that work. Think of this as your empty canvas! If you are really stuck, you will find a stylenet implementation under the libs module that you can use instead.\nHave a look here for inspiration:\nhttps://mtyka.github.io/code/2015/10/02/experiments-with-style-transfer.html \nhttp://kylemcdonald.net/stylestudies/\n<a name=\"network\"></a>\nNetwork\nLet's reset the graph and load up a network. I'll include code here for loading up any of our pretrained networks so you can explore each of them!\n<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>",
"sess.close()\ntf.reset_default_graph()\n\n# Stick w/ VGG for now, and then after you see how\n# the next few sections work w/ this network, come back\n# and explore the other networks.\n\nnet = vgg16.get_vgg_model()\n# net = vgg16.get_vgg_face_model()\n# net = inception.get_inception_model(version='v5')\n# net = inception.get_inception_model(version='v3')\n# net = i2v.get_i2v_model()\n# net = i2v.get_i2v_tag_model()\n\n# Let's explicity use the CPU, since we don't gain anything using the GPU\n# when doing Deep Dream (it's only a single image, benefits come w/ many images).\ndevice = '/cpu:0'\n\n# We'll now explicitly create a graph\ng = tf.Graph()",
"Let's now import the graph definition into our newly created Graph using a context manager and specifying that we want to use the CPU.",
"# And here is a context manager. We use the python \"with\" notation to create a context\n# and create a session that only exists within this indent, as soon as we leave it,\n# the session is automatically closed! We also tel the session which graph to use.\n# We can pass a second context after the comma,\n# which we'll use to be explicit about using the CPU instead of a GPU.\nwith tf.Session(graph=g) as sess, g.device(device):\n \n # Now load the graph_def, which defines operations and their values into `g`\n tf.import_graph_def(net['graph_def'], name='net')",
"Let's then grab the names of every operation in our network:",
"names = [op.name for op in g.get_operations()]",
"Now we need an image for our content image and another one for our style image.",
"content_og = plt.imread('arles.png')[..., :3]\nstyle_og = plt.imread('clinton.png')[..., :3]\n\nfig, axs = plt.subplots(1, 2)\naxs[0].imshow(content_og)\naxs[0].set_title('Content Image')\naxs[0].grid('off')\naxs[1].imshow(style_og)\naxs[1].set_title('Style Image')\naxs[1].grid('off')\n\n# We'll save these with a specific name to include in your submission\nplt.imsave(arr=content_og, fname='content.png')\nplt.imsave(arr=style_og, fname='style.png')\n\ncontent_img = net['preprocess'](content_og)[np.newaxis]\nstyle_img = net['preprocess'](style_og)[np.newaxis]",
"Let's see what the network classifies these images as just for fun:\n<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>",
"# Grab the tensor defining the input to the network\nx = ...\n\n# And grab the tensor defining the softmax layer of the network\nsoftmax = ...\n\nfor img in [content_img, style_img]:\n with tf.Session(graph=g) as sess, g.device('/cpu:0'):\n # Remember from the lecture that we have to set the dropout\n # \"keep probability\" to 1.0.\n res = softmax.eval(feed_dict={x: img,\n 'net/dropout_1/random_uniform:0': [[1.0]],\n 'net/dropout/random_uniform:0': [[1.0]]})[0]\n print([(res[idx], net['labels'][idx])\n for idx in res.argsort()[-5:][::-1]])",
"<a name=\"content-features\"></a>\nContent Features\nWe're going to need to find the layer or layers we want to use to help us define our \"content loss\". Recall from the lecture when we used VGG, we used the 4th convolutional layer.",
"print(names)",
"Pick a layer for using for the content features. If you aren't using VGG remember to get rid of the dropout stuff!\n<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>",
"# Experiment w/ different layers here. You'll need to change this if you \n# use another network!\ncontent_layer = 'net/conv3_2/conv3_2:0'\n\nwith tf.Session(graph=g) as sess, g.device('/cpu:0'):\n content_features = g.get_tensor_by_name(content_layer).eval(\n session=sess,\n feed_dict={x: content_img,\n 'net/dropout_1/random_uniform:0': [[1.0]],\n 'net/dropout/random_uniform:0': [[1.0]]})",
"<a name=\"style-features\"></a>\nStyle Features\nLet's do the same thing now for the style features. We'll use more than 1 layer though so we'll append all the features in a list. If you aren't using VGG remember to get rid of the dropout stuff!\n<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>",
"# Experiment with different layers and layer subsets. You'll need to change these\n# if you use a different network!\nstyle_layers = ['net/conv1_1/conv1_1:0',\n 'net/conv2_1/conv2_1:0',\n 'net/conv3_1/conv3_1:0',\n 'net/conv4_1/conv4_1:0',\n 'net/conv5_1/conv5_1:0']\nstyle_activations = []\n\nwith tf.Session(graph=g) as sess, g.device('/cpu:0'):\n for style_i in style_layers:\n style_activation_i = g.get_tensor_by_name(style_i).eval(\n feed_dict={x: style_img,\n 'net/dropout_1/random_uniform:0': [[1.0]],\n 'net/dropout/random_uniform:0': [[1.0]]})\n style_activations.append(style_activation_i)",
"Now we find the gram matrix which we'll use to optimize our features.",
"style_features = []\nfor style_activation_i in style_activations:\n s_i = np.reshape(style_activation_i, [-1, style_activation_i.shape[-1]])\n gram_matrix = np.matmul(s_i.T, s_i) / s_i.size\n style_features.append(gram_matrix.astype(np.float32))",
"<a name=\"remapping-the-input\"></a>\nRemapping the Input\nWe're almost done building our network. We just have to change the input to the network to become \"trainable\". Instead of a placeholder, we'll have a tf.Variable, which allows it to be trained. We could set this to the content image, another image entirely, or an image of noise. Experiment with all three options!\n<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>",
"tf.reset_default_graph()\ng = tf.Graph()\n\n# Get the network again\nnet = vgg16.get_vgg_model()\n\n# Load up a session which we'll use to import the graph into.\nwith tf.Session(graph=g) as sess, g.device('/cpu:0'):\n # We can set the `net_input` to our content image\n # or perhaps another image\n # or an image of noise\n# net_input = tf.Variable(content_img / 255.0)\n net_input = tf.get_variable(\n name='input',\n shape=content_img.shape,\n dtype=tf.float32,\n initializer=tf.random_normal_initializer(\n mean=np.mean(content_img), stddev=np.std(content_img)))\n \n # Now we load the network again, but this time replacing our placeholder\n # with the trainable tf.Variable\n tf.import_graph_def(\n net['graph_def'],\n name='net',\n input_map={'images:0': net_input})",
"<a name=\"content-loss\"></a>\nContent Loss\nIn the lecture we saw that we'll simply find the l2 loss between our content layer features.",
"with tf.Session(graph=g) as sess, g.device('/cpu:0'):\n content_loss = tf.nn.l2_loss((g.get_tensor_by_name(content_layer) -\n content_features) /\n content_features.size)",
"<a name=\"style-loss\"></a>\nStyle Loss\nInstead of straight l2 loss on the raw feature activations, we're going to calculate the gram matrix and find the loss between these. Intuitively, this is finding what is common across all convolution filters, and trying to enforce the commonality between the synthesis and style image's gram matrix.",
"with tf.Session(graph=g) as sess, g.device('/cpu:0'):\n style_loss = np.float32(0.0)\n for style_layer_i, style_gram_i in zip(style_layers, style_features):\n layer_i = g.get_tensor_by_name(style_layer_i)\n layer_shape = layer_i.get_shape().as_list()\n layer_size = layer_shape[1] * layer_shape[2] * layer_shape[3]\n layer_flat = tf.reshape(layer_i, [-1, layer_shape[3]])\n gram_matrix = tf.matmul(tf.transpose(layer_flat), layer_flat) / layer_size\n style_loss = tf.add(style_loss, tf.nn.l2_loss((gram_matrix - style_gram_i) / np.float32(style_gram_i.size)))",
"<a name=\"total-variation-loss\"></a>\nTotal Variation Loss\nAnd just like w/ guided hallucinations, we'll try to enforce some smoothness using a total variation loss.",
"def total_variation_loss(x):\n h, w = x.get_shape().as_list()[1], x.get_shape().as_list()[1]\n dx = tf.square(x[:, :h-1, :w-1, :] - x[:, :h-1, 1:, :])\n dy = tf.square(x[:, :h-1, :w-1, :] - x[:, 1:, :w-1, :])\n return tf.reduce_sum(tf.pow(dx + dy, 1.25))\n\nwith tf.Session(graph=g) as sess, g.device('/cpu:0'):\n tv_loss = total_variation_loss(net_input)",
"<a name=\"training\"></a>\nTraining\nWe're almost ready to train! Let's just combine our three loss measures and stick it in an optimizer.\n<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>",
"with tf.Session(graph=g) as sess, g.device('/cpu:0'):\n # Experiment w/ the weighting of these! They produce WILDLY different\n # results.\n loss = 5.0 * content_loss + 1.0 * style_loss + 0.001 * tv_loss\n optimizer = tf.train.AdamOptimizer(0.05).minimize(loss)",
"And now iterate! Feel free to play with the number of iterations or how often you save an image. If you use a different network to VGG, then you will not need to feed in the dropout parameters like I've done here.",
"imgs = []\nn_iterations = 100\n\nwith tf.Session(graph=g) as sess, g.device('/cpu:0'):\n sess.run(tf.initialize_all_variables())\n\n # map input to noise\n og_img = net_input.eval()\n \n for it_i in range(n_iterations):\n _, this_loss, synth = sess.run([optimizer, loss, net_input], feed_dict={\n 'net/dropout_1/random_uniform:0': np.ones(\n g.get_tensor_by_name(\n 'net/dropout_1/random_uniform:0'\n ).get_shape().as_list()),\n 'net/dropout/random_uniform:0': np.ones(\n g.get_tensor_by_name(\n 'net/dropout/random_uniform:0'\n ).get_shape().as_list())\n })\n print(\"%d: %f, (%f - %f)\" %\n (it_i, this_loss, np.min(synth), np.max(synth)))\n if it_i % 5 == 0:\n m = vgg16.deprocess(synth[0])\n imgs.append(m)\n plt.imshow(m)\n plt.show()\n gif.build_gif(imgs, saveto='stylenet.gif')\n\nipyd.Image(url='stylenet.gif?i=0', height=300, width=300)",
"<a name=\"assignment-submission\"></a>\nAssignment Submission\nAfter you've completed the notebook, create a zip file of the current directory using the code below. This code will make sure you have included this completed ipython notebook and the following files named exactly as:\n<pre>\n session-4/\n session-4.ipynb\n softmax.gif\n fractal.gif\n guided.gif\n content.png\n style.png\n stylenet.gif\n</pre>\n\nYou'll then submit this zip file for your third assignment on Kadenze for \"Assignment 4: Deep Dream and Style Net\"! Remember to complete the rest of the assignment, gallery commenting on your peers work, to receive full credit! If you have any questions, remember to reach out on the forums and connect with your peers or with me.\nTo get assessed, you'll need to be a premium student! This will allow you to build an online portfolio of all of your work and receive grades. If you aren't already enrolled as a student, register now at http://www.kadenze.com/ and join the #CADL community to see what your peers are doing! https://www.kadenze.com/courses/creative-applications-of-deep-learning-with-tensorflow/info\nAlso, if you share any of the GIFs on Facebook/Twitter/Instagram/etc..., be sure to use the #CADL hashtag so that other students can find your work!",
"utils.build_submission('session-4.zip',\n ('softmax.gif',\n 'fractal.gif',\n 'guided.gif',\n 'content.png',\n 'style.png',\n 'stylenet.gif',\n 'session-4.ipynb'))"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
eds-uga/csci1360-fa16
|
lectures/L6.ipynb
|
mit
|
[
"Lecture 6: Advanced Data Structures\nCSCI 1360: Foundations for Informatics and Analytics\nOverview and Objectives\nWe've covered list, tuples, sets, and dictionaries. These are the foundational data structures in Python. In this lecture, we'll go over some more advanced topics that are related to these datasets. By the end of this lecture, you should be able to\n\nCompare and contrast generators and comprehensions, and how to construct them\nExplain the benefits of generators, especially in the case of huge datasets\nLoop over multiple lists simultaneously with zip() and index them with enumerate()\nDive into advanced iterations using the itertools module\n\nPart 1: List Comprehensions\nHere's some good news: if we get right down to it, having done loops and lists already, there's nothing new here.\nHere's the bad news: it's a different, and possibly less-easy-to-understand, but much more concise way of creating lists. We'll go over it bit by bit.\nLet's look at an example from a previous lecture: creating a list of squares.",
"squares = []\nfor element in range(10):\n squares.append(element ** 2)\nprint(squares)",
"Let's break it down.\nfor element in range(10):\n\n\nIt's a standard \"for\" loop header.\n\n\nThe thing we're iterating over is at the end: range(10), or a list[-like thing] of numbers [0, 10) by 1s.\n\n\nIn each loop, the current element from range(10) is stored in element.\n\n\nsquares.append(element ** 2)\n\n\nInside the loop, we append a new item to our list squares\n\n\nThe item is computed by taking the current its, element, and computing its square\n\n\nWe'll see these same pieces show up again, just in a slightly different order.",
"squares = [element ** 2 for element in range(10)]\nprint(squares)",
"There it is: a list comprehension. Let's break it down.\n\n\nNotice, first, that the entire expression is surrounded by the square brackets [ ] of a list. This is for the exact reason you'd think: we're building a list!\n\n\nThe \"for\" loop is completely intact, too; the entire header appears just as before.\n\n\nThe biggest wrinkle is the loop body. It appears right after the opening bracket, before the loop header. The rationale for this is that it's easy to see from the start of the line that\n\nWe're building a list (revealed by the opening square bracket), and\nThe list is built by successfully squaring a variable element\n\nLet's say we have some dictionary of word counts:",
"word_counts = {\n 'the': 10,\n 'race': 2,\n 'is': 3,\n 'on': 5\n}",
"and we want to generate a list of sentences:",
"sentences = ['\"{}\" appears {} times.'.format(word, count) for word, count in word_counts.items()]\nprint(sentences)",
"Start with the loop header--you see it on the far right: for word, count in word_counts.items()\nThen look at the loop body: '\"{}\" appears {} times.'.format(word, count)\nAll wrapped in square brackets [ ]\nAssigned to the variable sentences\n\nHere's another example: going from a dictionary of word counts to a list of squared counts.",
"squared_counts = [value ** 2 for key, value in word_counts.items()]\nprint(squared_counts)",
"We used the items() method on the dictionary again, which gives us a list of tuples\n\n\nSince we know the items in the list are tuples of two elements, we use unpacking\n\n\nWe provide our element-by-element construction of the list with our statement value ** 2, squaring the value\n\n\nPart 2: Generators\nGenerators are cool twists on lists (see what I did there). They've been around since Python 2 but took on a whole new life in Python 3.\nThat said, if you ever get confused about generators, just think of them as lists. This can potentially get you in trouble with weird errors, but 90% of the time it'll work every time.\nLet's start with an example you're probably already quite familiar with: range()",
"x = range(10)",
"As we know, this will create a list[-like thing] with the numbers 0 through 9, inclusive, and assign it to the variable x.\nNow you'll see why I've been using the \"list[-like thing]\" notation: it's not really a list!",
"print(x)\nprint(type(x))",
"To get a list, we've been casting the generator to a list:",
"list(x)",
"and we get a vanilla Python list.\nSo range() gives us a generator! Great! ...what does that mean, exactly?\nFor most practical purposes, generators and lists are indistinguishable. However, there are some key differences to be aware of:\n\n\nGenerators are \"lazy\". This means when you call range(10), not all 10 numbers are immediately computed; in fact, none of them are. They're computed on-the-fly in the loop itself! This really comes in handy if, say, you wanted to loop through 1 trillion numbers, or call range(1000000000000). With vanilla lists, this would immediately create 1 trillion numbers in memory and store them, taking up a whole lot of space. With generators, only 1 number is ever computed at a given loop iteration. Huge memory savings!\n\n\nGenerators only work once. This is where you can get into trouble. Let's say you're trying to identify the two largest numbers in a generator of numbers. You'd loop through once and identify the largest number, then use that as a point of comparison to loop through again to find the second-largest number (you could do it with just one loop, but for the sake of discussion let's assume you did it this way). With a list, this would work just fine. Not with a generator, though. You'd need to explicitly recreate the generator.\n\n\nHow do we build generators? Aside from range(), that is.\nRemember list comprehensions? Just replace the brackets of a list comprehension [ ] with parentheses ( ).",
"x = [i for i in range(10)] # Brackets -> list\nprint(x)\n\nx = (i for i in range(10)) # Parentheses -> generator\nprint(x)",
"Also--where have we seen parentheses before? TUPLES! You can think of a generator as a sort of tuple. After all, like a tuple, a generator is immutable (cannot be changed once created). Be careful with this, though: all generators are very like tuples, but not all tuples are like generators.\nIn sum, use lists if:\n\nyou're working with a relatively small amount of elements\nyou want to add to / edit / remove from the elements\nyou need direct access to arbitrary elements, e.g. some_list[431]\n\nOn the other hand, use generators if:\n\nyou're working with a giant collection of elements\nyou'll only loop through the elements once or twice\nwhen looping through elements, you're fine going in sequential order\n\nPart 3: Other looping mechanisms\nThere are a few other advanced looping mechanisms in Python that are a little complex, but can make your life a lot easier when used correctly (especially if you're a convert from something like C++ or Java).\nzip()\nzip() is a small method that packs a big punch. It \"zips\" multiple lists together into something of one big mega-list for the sole purpose of being able to iterate through them all simultaneously.\nWe've already seen something like this before: the items() method in dictionaries. Dictionaries are more or less two lists stacked right up against each other: one list holds the keys, and the corresponding elements of the other list holds the values for each key. items() lets us loop through both simultaneously, giving us the corresponding elements from each list, one at a time:",
"d = {\n 'uga': 'University of Georgia',\n 'gt': 'Georgia Tech',\n 'upitt': 'University of Pittsburgh',\n 'cmu': 'Carnegie Mellon University'\n}\nfor key, value in d.items():\n print(\"'{}' stands for '{}'.\".format(key, value))",
"zip() does pretty much the same thing, but on steroids: rather than just \"zipping\" together two lists, it can zip together as many as you want.\nHere's an example: first names, last names, and favorite programming languages.",
"first_names = ['Shannon', 'Jen', 'Natasha', 'Benjamin']\nlast_names = ['Quinn', 'Benoit', 'Romanov', 'Button']\nfave_langs = ['Python', 'Java', 'Assembly', 'Go']",
"I want to loop through these three lists simultaneously, so I can print out the person's first name, last name, and their favorite language on the same line. Since I know they're the same length, I could just do a range(len(fname)), but this is arguably more elegant:",
"for fname, lname, lang in zip(first_names, last_names, fave_langs):\n print(\"{} {}'s favorite language is {}.\".format(fname, lname, lang))",
"enumerate()\nOf course, there are always those situations where it's really, really nice to have an index variable in the loop. Let's take a look at that previous example:",
"for fname, lname, lang in zip(first_names, last_names, fave_langs):\n print(\"{} {}'s favorite language is {}.\".format(fname, lname, lang))",
"This is great if all I want to do is loop through the lists simultaneously. But what if the ordering of the elements matters? For example, I want to prefix each sentence with the line number. How can I track what index I'm on in a loop if I don't use range()?\nenumerate() handles this. By wrapping the object we loop over inside enumerate(), on each loop iteration we not only get the next object of interest, but also the index of that object. To wit:",
"x = ['a', 'list', 'of', 'strings']\nfor index, element in enumerate(x):\n print(\"Found '{}' at index {}.\".format(element, index))",
"This comes in handy anytime you need to loop through a list or generator, but also need to know what index you're on.\nbreak and continue\n\nWith for loops, you specify how many times to run the loop.\nWith while loops, you iterate until some condition is met.\n\nFor the vast majority of cases, this works well. But sometimes you need just a little more control for extenuating circumstances.\nTake the example of a web server:\n\nListens for incoming requests\nServes those requests (e.g. returns web pages)\nGoes back to listening for more requests\n\nThis is essentially implemented using a purposefully-infinite loop:",
"while True:\n # Listen for incoming requests\n \n # Handle the request",
"How do you get out of this infinite loop? With a break statement.",
"import numpy as np\n\ndef handle_request():\n return np.random.randint(100)\n\nloops = 0\nwhile True:\n loops += 1\n req = handle_request()\n break\n \nprint(\"Exiting program after {} loop.\".format(loops))",
"Just break. That will snap whatever loop you're currently in and immediately dump you out just after it.\nSame thing with for loops:",
"for i in range(100000): # Loop 100,000 times!\n break\nprint(i)",
"Similar to break is continue, though you use this when you essentially want to \"skip\" certain iterations.\ncontinue will also halt the current iteration, but instead of ending the loop entirely, it basically skips you on to the next iteration of the loop without executing any code that may be below it.",
"for i in range(100):\n continue\n print(\"This will never be printed.\")\nprint(i)",
"Notice how the print statement inside the loop is never executed, but our loop counter i is still incremented through the very end.\nPart 4: itertools\nAside: Welcome to the big wide world of Beyond Core Python.\nTechnically we're still in base Python, but in order to use more advanced iterating tools, we have to actually pull in an external package--itertools.\nJustin Duke has an excellent web tutorial, which I'll reproduce in part here. Let's say we have a couple of lists we want to operate on:",
"letters = ['a', 'b', 'c', 'd', 'e', 'f']\nbooleans = [1, 0, 1, 0, 0, 1]\nnumbers = [23, 20, 44, 32, 7, 12]\ndecimals = [0.1, 0.7, 0.4, 0.4, 0.5]",
"Now: I want you to string all four of these lists together, end-to-end, into one long list. How would you do it?\nHere's a way simpler way, though it requires pulling in an external package. You can do this with the keyword import:",
"import itertools",
"Now, we have access to all the functions available in the itertools package--to use them, just type the package name, a dot \".\", and the function you want to call.\nIn this example, we want to use the itertools.chain() function:",
"monster = itertools.chain(letters, booleans, numbers, decimals)\nprint(monster)",
"Err, what's an itertools.chain object?\nDon't panic--any thoughts as to what kind of object this might be?\nIt's an iterable, and we know how to handle those!",
"for item in monster:\n print(item, end = \" \")",
"And there they are--all four lists, joined at the hip.\nAnother phenomenal function is combinations.\nIf you've ever taken a combinatorics class, or are at all interested in the idea of finding all the possible combinations of a certain collection of things, this is your function.\nA common task in data science is finding combinations of configuration values that work well together--e.g., plotting your data in two dimensions. Which two dimensions will give the nicest plot?\nHere's a list of numbers. How many possible pairings are there?",
"items = ['one', 'two', 'three', 'four', 'five', 'six']\n\ncombos = itertools.combinations(items, 2) # The \"2\" means pairs.\nfor combo in combos:\n print(combo)",
"It doesn't have to be pairs; we can also try to find all the triplets of items.",
"combos = itertools.combinations(items, 3) # Now it's a 3.\nfor combo in combos:\n print(combo)",
"Review Questions\nSome questions to discuss and consider:\n1: I want a list of all possible combinations of (x, y) values for the range [0, 9]. Show how this can be done with a list comprehension using two for-loops.\n2: Without consulting Google University, consider how generators might work under the hood. How do you think they're implemented?\n3: Go back to the example with three lists (first names, last names, and programming languages). Show how you could use enumerate to prepend a line number (the current index of the lists) to the sentence printed for each person, e.g.: \"17: Joe Schmo's favorite language is C++.\"\n4: Consider that you have a list of lists--representing a matrix--and you want to convert each \"row\" of the matrix to a tuple, where the first element is an integer row index, and the second element is the row itself (the original list). Assuming the list-of-lists matrix already exists, how could you add the list of indices to it?\n5: How would you implement itertools.combinations on your own, just using loops?\nCourse Administrivia\n\"Cell was changed and shouldn't have\" errors on your assignments. If you're getting these errors, it's because you put your code in the wrong cell. Make sure you edit only the cells that say # YOUR CODE HERE or YOUR ANSWER HERE. Also, be sure to delete or comment out the line that says raise NotImplementedError().\nIf you need to re-fetch an assignment, you have to delete the entire directory of the old version. For example, in the case where errors are found in the assignment and a new version needs to be pushed, you'll have to delete your current version as well as the folder it's in--so, everything--in order to re-fetch a new version.\nFirst review session on Thursday! Come with questions. I'll have some exercises for everyone to work on as well--exercises that could very likely end up on the midterm or final in some form.\nHow is A1 going? A2 will be out on Thursday as well.\nAdditional Resources\n\nMatthes, Eric. Python Crash Course. 2016. ISBN-13: 978-1593276034\nGrus, Joel. Data Science from Scratch. 2015. ISBN-13: 978-1491901427\nDuke, Justin. *A Gentle Introduction to itertools. http://jmduke.com/posts/a-gentle-introduction-to-itertools/"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
rasbt/algorithms_in_ipython_notebooks
|
ipython_nbs/sorting/bubble_sort.ipynb
|
gpl-3.0
|
[
"%load_ext watermark\n%watermark -a 'Sebastian Raschka' -u -d -v",
"Bubble Sort\nQuick note about Bubble sort\nI don't want to get into the details about sorting algorithms here, but there is a great report\n\"Sorting in the Presence of Branch Prediction and Caches - Fast Sorting on Modern Computers\" written by Paul Biggar and David Gregg, where they describe and analyze elementary sorting algorithms in very nice detail (see chapter 4). \nAnd for a quick reference, this website has a nice animation of this algorithm.\nA long story short: The \"worst-case\" complexity of the Bubble sort algorithm (i.e., \"Big-O\")\n $\\Rightarrow \\pmb O(n^2)$\n<br>\n<br>\nBubble sort implemented in (C)Python",
"def python_bubblesort(a_list):\n \"\"\" Bubblesort in Python for list objects (sorts in place).\"\"\"\n length = len(a_list)\n for i in range(length):\n for j in range(1, length):\n if a_list[j] < a_list[j-1]:\n a_list[j-1], a_list[j] = a_list[j], a_list[j-1]\n return a_list",
"<br>\nBelow is a improved version that quits early if no further swap is needed.",
"def python_bubblesort_improved(a_list):\n \"\"\" Bubblesort in Python for list objects (sorts in place).\"\"\"\n length = len(a_list)\n swapped = 1\n for i in range(length):\n if swapped: \n swapped = 0\n for ele in range(length-i-1):\n if a_list[ele] > a_list[ele + 1]:\n temp = a_list[ele + 1]\n a_list[ele + 1] = a_list[ele]\n a_list[ele] = temp\n swapped = 1\n return a_list",
"Verifying that all implementations work correctly",
"import random\nimport copy\nrandom.seed(4354353)\n\nl = [random.randint(1,1000) for num in range(1, 1000)]\nl_sorted = sorted(l)\nfor f in [python_bubblesort, python_bubblesort_improved]:\n assert(l_sorted == f(copy.copy(l)))\nprint('Bubblesort works correctly')\n",
"Performance comparison",
"# small list\n\nl_small = [random.randint(1,100) for num in range(1, 100)]\nl_small_cp = copy.copy(l_small)\n\n%timeit python_bubblesort(l_small)\n%timeit python_bubblesort_improved(l_small_cp)\n\n# larger list\n\nl_small = [random.randint(1,10000) for num in range(1, 10000)]\nl_small_cp = copy.copy(l_small)\n\n%timeit python_bubblesort(l_small)\n%timeit python_bubblesort_improved(l_small_cp)",
"<br>\n<br>"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
StingraySoftware/notebooks
|
Simulator/Power Spectral Models.ipynb
|
mit
|
[
"Contents\nThis notebook covers the pre-defined spectral models available for light curve simulation. Specifically, the notebook describes the meaning of different parameters that describe these models.\nSetup\nImport relevant stingray libraries.",
"from stingray.simulator import simulator, models",
"Import pyplot from matplotlib for plotting light curves.",
"from matplotlib import pyplot as plt\n%matplotlib inline",
"Power Spectral Models\nCurrently, stingray has two spectral models namely generalized lorenzian function and smooth broken power law function. More models might be added in future, but, as explained in the rest of the section, Astropy models can be used to create most power spectral shapes one might be interested in.\nGeneralized Lorenzian Function\nApart from the frequencies, the lorenzian function needs the following parameters specified.\np: iterable\np[0] = peak centeral frequency\np[1] = FWHM of the peak (gamma)\np[2] = peak value at x=x0\np[3] = power coefficient [n]\n\nSmooth Broken Power Law Model\nApart from the frequencies which need to be passed as a numpy array, smooth broken power law needs the following parameters specified.\np: iterable\np[0] = normalization frequency\np[1] = power law index for f --> zero\np[2] = power law index for f --> infinity\np[3] = break frequency\n\nLight Curve Simulation\nThese models can be imported while simulating lightcurve(s).",
"sim = simulator.Simulator(N=1024, mean=0.5, dt=0.125, rms=0.2)\n\nlc = sim.simulate('generalized_lorentzian', [1.5, .2, 1.2, 1.4])\nplt.plot(lc.counts[1:400])\n\nlc = sim.simulate('smoothbknpo', [.6, 0.9, .2, 4])\nplt.plot(lc.counts[1:400])"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
toddschultz/data-512-a2
|
hcds-a2-biastryipynb.ipynb
|
mit
|
[
"Bias on Wikipedia\nTodd Schultz\nDue: November 2, 2017\nFor this assignment (https://wiki.communitydata.cc/HCDS_(Fall_2017)/Assignments#A2:_Bias_in_data), your job is to analyze what the nature of political articles on Wikipedia - both their existence, and their quality - can tell us about bias in Wikipedia's content.\nImports",
"import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport requests\nimport json\n\n%matplotlib notebook",
"Import data of politicians by country\nImport the data of policitcians by country provided by Oliver Keyes found at https://figshare.com/articles/Untitled_Item/5513449.",
"politicianFile = 'PolbyCountry_data.csv'\npoliticianNames = pd.read_csv(politicianFile)\n\n# rename variables\npoliticianNames.rename(columns = {'page':'article_name'}, inplace = True)\npoliticianNames.rename(columns = {'last_edit':'revision_id'}, inplace = True)\npoliticianNames[0:4]\n\npoliticianNames.shape",
"Import population by country\nImport the population by country provide PRB found at http://www.prb.org/DataFinder/Topic/Rankings.aspx?ind=14. The data is from mid-2015.",
"countryFile = 'Population Mid-2015.csv'\ntempDF = pd.read_csv(countryFile, header=1)\ncountryPop = pd.DataFrame(data={'country': tempDF['Location'], 'population': tempDF['Data']})\n\ncountryPop[0:5]",
"Combined data\nCombine the data frames into a single data frame with the following variables. \nColumn, country, article_name, revision_id, article_quality, population\nMake a placeholder, empty variable for article_quality to be filled in next.",
"# First add placeholder to politicianNames dataframe for article quality\npoliticianNames = politicianNames.assign(article_quality = \"\")\narticle_quality = politicianNames['article_quality']\n\n# Next, join politicianNames with countryPop\npoliticData = politicianNames.merge(countryPop,how = 'inner')\n\n#politicianNames[0:5]\npoliticData[0:5]\n\npoliticData.shape",
"ORES article quality data\nWikimedia API endpoint for a machine learning system called ORES (\"Objective Revision Evaluation Service\") found at https://www.mediawiki.org/wiki/ORES and documentaiton found at https://ores.wikimedia.org/v3/#!/scoring/get_v3_scores_context_revid_model. ORES estimates the quality of an article (at a particular point in time), and assigns a series of probabilities that the article is in one of 6 quality categories. The options are, from best to worst:\nFA - Featured article\nGA - Good article\nB - B-class article\nC - C-class article\nStart - Start-class article\nStub - Stub-class article\nBelow is an example of how to make a request through the ORES system in Python to find out the current quality of the article on Aaron Halfaker (the person who created ORES):",
"# ORES\nendpoint = 'https://ores.wikimedia.org/v3/scores/{project}/{revid}/{model}'\nheaders = {'User-Agent' : 'https://github.com/your_github_username', 'From' : 'your_uw_email@uw.edu'}\n\nfor irevid in range(0, politicData.shape[0]):\n revidstr = str(politicData['revision_id'][irevid])\n #print(revidstr)\n params = {'project' : 'enwiki',\n 'model' : 'wp10',\n 'revid' : revidstr\n }\n \n try:\n api_call = requests.get(endpoint.format(**params))\n response = api_call.json()\n #print(json.dumps(response, indent=4, sort_keys=True))\n \n # Create data frame and add numeric values for the plotting variable\n politicData.loc[irevid,'article_quality'] = response['enwiki']['scores'][revidstr]['wp10']['score']['prediction']\n #print(response['enwiki']['scores'][revidstr]['wp10']['score']['prediction'])\n except:\n print('Error at ' + str(irevid))\n \n if irevid % 500 == 0:\n print(irevid)\n\n# Write out csv file\npoliticData.to_csv('en-wikipedia_bias_2015.csv', index=False)\npoliticData[0:4]\n\npoliticData.shape[0]\n#politicData[-5:]",
"Importing the other data is just a matter of reading CSV files in! (and for the R programmers - we'll have an R example up as soon as the Hub supports the language).",
"## getting the data from the CSV files\nimport csv\n\ndata = []\nwith open('page_data.csv') as csvfile:\n reader = csv.reader(csvfile)\n for row in reader:\n data.append([row[0],row[1],row[2]])\n\nprint(data[782])"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ssunkara1/bqplot
|
examples/Introduction.ipynb
|
apache-2.0
|
[
"bqplot\nThis notebook is meant to guide you through the first stages of using the bqplot visualization library. bqplot is a Grammar of Graphics based interactive visualization library for the Jupyter notebook where every single component of a plot is an interactive iPython widget. What this means is that even after a plot is drawn, you can change almost any aspect of it. This makes the creation of advanced Graphical User Interfaces attainable through just a few simple lines of Python code.",
"# Let's begin by importing some libraries we'll need\nimport numpy as np\nfrom __future__ import print_function # So that this notebook becomes both Python 2 and Python 3 compatible\n\n# And creating some random data\nsize = 100\nnp.random.seed(0)\nx_data = np.arange(size)\ny_data = np.cumsum(np.random.randn(size) * 100.0)",
"Your First Plot\nLet's start by creating a simple Line chart. bqplot has two different APIs, the first one is a matplotlib inspired simple API called pyplot. So let's import that.",
"from bqplot import pyplot as plt",
"Let's plot y_data against x_data, and then show the plot.",
"plt.figure(title='My First Plot')\nplt.plot(x_data, y_data)\nplt.show()",
"Use the buttons above to Pan (or Zoom), Reset or save the Figure.\nUsing bqplot's interactive elements\nNow, let's try creating a new plot. First, we create a brand new Figure. The Figure is the final element of any plot that is eventually displayed. You can think of it as a Canvas on which we put all of our other plots.",
"# Creating a new Figure and setting it's title\nplt.figure(title='My Second Chart')\n\n# Let's assign the scatter plot to a variable\nscatter_plot = plt.scatter(x_data, y_data)\n\n# Let's show the plot\nplt.show()",
"Since both the x and the y attributes of a bqplot chart are interactive widgets, we can change them. So, let's \nchange the y attribute of the chart.",
"scatter_plot.y = np.cumsum(np.random.randn(size) * 100.0)",
"Re-run the above cell a few times, the same plot should update everytime. But, thats not the only thing that can be changed once a plot has been rendered. Let's try changing some of the other attributes.",
"# Say, the color\nscatter_plot.colors = ['Red']\n\n# Or, the marker style\nscatter_plot.marker = 'diamond'",
"It's important to remember that an interactive widget means that the JavaScript and the Python communicate. So, the plot can be changed through a single line of python code, or a piece of python code can be triggered by a change in the plot. Let's go through a simple example. Say we have a function foo:",
"def foo(change):\n print('This is a trait change. Foo was called by the fact that we moved the Scatter')\n print('In fact, the Scatter plot sent us all the new data: ')\n print('To access the data, try modifying the function and printing the data variable')",
"We can call foo everytime any attribute of our scatter is changed. Say, the y values:",
"# First, we hook up our function `foo` to the colors attribute (or Trait) of the scatter plot\nscatter_plot.observe(foo, 'y')",
"To allow the points in the Scatter to be moved interactively, we set the enable_move attribute to True",
"scatter_plot.enable_move = True",
"Go ahead, head over to the chart and move any point in some way. This move (which happens on the JavaScript side should trigger our Python function foo.\nUnderstanding how bqplot uses the Grammar of Graphics paradigm\nbqplot has two different APIs. One is the matplotlib inspired pyplot which we used above (you can think of it as similar to qplot in ggplot2). The other one, the verbose API, is meant to expose every element of a plot individually, so that their attriutes can be controlled in an atomic way. In order to truly use bqplot to build complex and feature-rich GUIs, it pays to understand the underlying theory that is used to create a plot.\nTo understand this verbose API, it helps to revisit what exactly the components of a plot are. The first thing we need is a Scale.\nA Scale is a mapping from (function that converts) data coordinates to figure coordinates. What this means is that, a Scale takes a set of values in any arbitrary unit (say number of people, or $, or litres) and converts it to pixels (or colors for a ColorScale).",
"# First, we import the scales\nfrom bqplot import LinearScale\n\n# Let's create a scale for the x attribute, and a scale for the y attribute\nx_sc = LinearScale()\ny_sc = LinearScale()",
"Now, we need to create the actual Mark that will visually represent the data. Let's pick a Scatter chart to start.",
"from bqplot import Scatter\n\nscatter_chart = Scatter(x=x_data, y=y_data, scales={'x': x_sc, 'y': y_sc})",
"Most of the time, the actual Figure co-ordinates don't really mean anything to us. So, what we need is the visual representation of our Scale, which is called an Axis.",
"from bqplot import Axis\n\nx_ax = Axis(label='X', scale=x_sc)\ny_ax = Axis(label='Y', scale=y_sc, orientation='vertical')",
"And finally, we put it all together on a canvas, which is called a Figure.",
"from bqplot import Figure\n\nfig = Figure(marks=[scatter_chart], title='A Figure', axes=[x_ax, y_ax])\nfig",
"The IPython display machinery displays the last returned value of a cell. If you wish to explicitly display a widget, you can call IPython.display.display.",
"from IPython.display import display\n\ndisplay(fig)",
"Now, that the plot has been generated, we can control every single attribute of it. Let's say we wanted to color the chart based on some other data.",
"# First, we generate some random color data.\ncolor_data = np.random.randint(0, 2, size=100)",
"Now, we define a ColorScale to map the color_data to actual colors",
"from bqplot import ColorScale\n\n# The colors trait controls the actual colors we want to map to. It can also take a min, mid, max list of\n# colors to be interpolated between for continuous data.\ncol_sc = ColorScale(colors=['MediumSeaGreen', 'Red'])\n\nscatter_chart.scales = {'x': x_sc, 'y': y_sc, 'color': col_sc}\n# We pass the color data to the Scatter Chart through it's color attribute\nscatter_chart.color = color_data",
"The grammar of graphics framework allows us to overlay multiple visualizations on a single Figure by having the visualization share the Scales. So, for example, if we had a Bar chart that we would like to plot alongside the Scatter plot, we just pass it the same Scales.",
"from bqplot import Bars\n\nnew_size = 50\nscale = 100.\nx_data_new = np.arange(new_size)\ny_data_new = np.cumsum(np.random.randn(new_size) * scale)\n\n# All we need to do to add a bar chart to the Figure is pass the same scales to the Mark\nbar_chart = Bars(x=x_data_new, y=y_data_new, scales={'x': x_sc, 'y': y_sc})",
"Finally, we add the new Mark to the Figure to update the plot!",
"fig.marks = [scatter_chart, bar_chart]"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
joshnsolomon/phys202-2015-work
|
assignments/assignment11/OptimizationEx01.ipynb
|
mit
|
[
"Optimization Exercise 1\nImports",
"%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport scipy.optimize as opt",
"Hat potential\nThe following potential is often used in Physics and other fields to describe symmetry breaking and is often known as the \"hat potential\":\n$$ V(x) = -a x^2 + b x^4 $$\nWrite a function hat(x,a,b) that returns the value of this function:",
"def hat(x,a,b):\n return -1*a*(x**2) + b*(x**4)\n\nassert hat(0.0, 1.0, 1.0)==0.0\nassert hat(0.0, 1.0, 1.0)==0.0\nassert hat(1.0, 10.0, 1.0)==-9.0",
"Plot this function over the range $x\\in\\left[-3,3\\right]$ with $b=1.0$ and $a=5.0$:",
"a = 5.0\nb = 1.0\n\nx = np.linspace(-3,3,100)\nv = hat(x,a,b)\ngraph = plt.plot(x,v)\n\n\nassert True # leave this to grade the plot",
"Write code that finds the two local minima of this function for $b=1.0$ and $a=5.0$.\n\nUse scipy.optimize.minimize to find the minima. You will have to think carefully about how to get this function to find both minima.\nPrint the x values of the minima.\nPlot the function as a blue line.\nOn the same axes, show the minima as red circles.\nCustomize your visualization to make it beatiful and effective.",
"f = lambda g: hat(g,a,b)\nx1 = float(opt.minimize(f,-2 ).x)\nx2 = float(opt.minimize(f,2 ).x)\nprint(x1)\nprint(x2)\ngraph = plt.plot(x,v)\nplt.plot([x1,x2],[f(x1),f(x2)],'ro')\n\nassert True # leave this for grading the plot",
"To check your numerical results, find the locations of the minima analytically. Show and describe the steps in your derivation using LaTeX equations. Evaluate the location of the minima using the above parameters.\n\\begin{align}\nV(x) = -a x^2 + b x^4 \\\nx(4bx^2-2a) = 0 \\\n4bx^2=2a \\\nx = \\pm \\sqrt{\\frac{a}{2b}} \\\nx = \\pm \\sqrt{\\frac{5}{2(1)}} \\approx \\pm 1.5811388\n\\end{align}"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
zhaojijet/UdacityDeepLearningProject
|
tv-script-generation/dlnd_tv_script_generation.ipynb
|
apache-2.0
|
[
"TV Script Generation\nIn this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.\nGet the Data\nThe data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like \"Moe's Cavern\", \"Flaming Moe's\", \"Uncle Moe's Family Feed-Bag\", etc..",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport helper\n\ndata_dir = './data/simpsons/moes_tavern_lines.txt'\ntext = helper.load_data(data_dir)\n# Ignore notice, since we don't use it for analysing the data\ntext = text[81:]",
"Explore the Data\nPlay around with view_sentence_range to view different parts of the data.",
"view_sentence_range = (0, 10)\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport numpy as np\n\nprint('Dataset Stats')\nprint('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))\nscenes = text.split('\\n\\n')\nprint('Number of scenes: {}'.format(len(scenes)))\nsentence_count_scene = [scene.count('\\n') for scene in scenes]\nprint('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))\n\nsentences = [sentence for scene in scenes for sentence in scene.split('\\n')]\nprint('Number of lines: {}'.format(len(sentences)))\nword_count_sentence = [len(sentence.split()) for sentence in sentences]\nprint('Average number of words in each line: {}'.format(np.average(word_count_sentence)))\n\nprint()\nprint('The sentences {} to {}:'.format(*view_sentence_range))\nprint('\\n'.join(text.split('\\n')[view_sentence_range[0]:view_sentence_range[1]]))",
"Implement Preprocessing Functions\nThe first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:\n- Lookup Table\n- Tokenize Punctuation\nLookup Table\nTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:\n- Dictionary to go from the words to an id, we'll call vocab_to_int\n- Dictionary to go from the id to word, we'll call int_to_vocab\nReturn these dictionaries in the following tuple (vocab_to_int, int_to_vocab)",
"import numpy as np\nimport problem_unittests as tests\nfrom collections import Counter\n\ndef create_lookup_tables(text):\n \"\"\"\n Create lookup tables for vocabulary\n :param text: The text of tv scripts split into words\n :return: A tuple of dicts (vocab_to_int, int_to_vocab)\n \"\"\"\n # TODO: Implement Function\n counts = Counter(text)\n vocab = sorted(counts, key=counts.get, reverse=True)\n vocab_to_int = {word: ii for ii, word in enumerate(vocab)}\n int_to_vocab = {ii: word for word, ii in vocab_to_int.items()}\n return vocab_to_int, int_to_vocab\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_create_lookup_tables(create_lookup_tables)",
"Tokenize Punctuation\nWe'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word \"bye\" and \"bye!\".\nImplement the function token_lookup to return a dict that will be used to tokenize symbols like \"!\" into \"||Exclamation_Mark||\". Create a dictionary for the following symbols where the symbol is the key and value is the token:\n- Period ( . )\n- Comma ( , )\n- Quotation Mark ( \" )\n- Semicolon ( ; )\n- Exclamation mark ( ! )\n- Question mark ( ? )\n- Left Parentheses ( ( )\n- Right Parentheses ( ) )\n- Dash ( -- )\n- Return ( \\n )\nThis dictionary will be used to token the symbols and add the delimiter (space) around it. This separates the symbols as it's own word, making it easier for the neural network to predict on the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token \"dash\", try using something like \"||dash||\".",
"def token_lookup():\n \"\"\"\n Generate a dict to turn punctuation into a token.\n :return: Tokenize dictionary where the key is the punctuation and the value is the token\n \"\"\"\n # TODO: Implement Function\n token_dict = {'.': '||Period||', ',': '||Comma||', '\"': '||Quotation_Mark||',\n ';': '||Semicolon||', '!': '||Exclamation_Mark||', '?': '||Question_Mark||',\n '(': '||Left_Parentheses||', ')': '||Right_Parentheses||', '--': '||Dash||',\n '\\n': '||Return||'}\n return token_dict\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_tokenize(token_lookup)",
"Preprocess all the data and save it\nRunning the code cell below will preprocess all the data and save it to file.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n# Preprocess Training, Validation, and Testing Data\nhelper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)",
"Check Point\nThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport helper\nimport numpy as np\nimport problem_unittests as tests\n\nint_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()",
"Build the Neural Network\nYou'll build the components necessary to build a RNN by implementing the following functions below:\n- get_inputs\n- get_init_cell\n- get_embed\n- build_rnn\n- build_nn\n- get_batches\nCheck the Version of TensorFlow and Access to GPU",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nfrom distutils.version import LooseVersion\nimport warnings\nimport tensorflow as tf\n\n# Check TensorFlow Version\nassert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'\nprint('TensorFlow Version: {}'.format(tf.__version__))\n\n# Check for a GPU\nif not tf.test.gpu_device_name():\n warnings.warn('No GPU found. Please use a GPU to train your neural network.')\nelse:\n print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))",
"Input\nImplement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:\n- Input text placeholder named \"input\" using the TF Placeholder name parameter.\n- Targets placeholder\n- Learning Rate placeholder\nReturn the placeholders in the following the tuple (Input, Targets, LearingRate)",
"def get_inputs():\n \"\"\"\n Create TF Placeholders for input, targets, and learning rate.\n :return: Tuple (input, targets, learning rate)\n \"\"\"\n # TODO: Implement Function\n Input = tf.placeholder(tf.int32, [None, None], name='input')\n Targets = tf.placeholder(tf.int32, [None, None], name='targets')\n LearningRate = tf.placeholder(tf.float32, name='learningrate')\n return Input, Targets, LearningRate\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_inputs(get_inputs)",
"Build RNN Cell and Initialize\nStack one or more BasicLSTMCells in a MultiRNNCell.\n- The Rnn size should be set using rnn_size\n- Initalize Cell State using the MultiRNNCell's zero_state() function\n - Apply the name \"initial_state\" to the initial state using tf.identity()\nReturn the cell and initial state in the following tuple (Cell, InitialState)",
"def get_init_cell(batch_size, rnn_size):\n \"\"\"\n Create an RNN Cell and initialize it.\n :param batch_size: Size of batches\n :param rnn_size: Size of RNNs\n :return: Tuple (cell, initialize state)\n \"\"\"\n # TODO: Implement Function\n rnn = tf.contrib.rnn.BasicLSTMCell(rnn_size)\n Cell = tf.contrib.rnn.MultiRNNCell([rnn])\n \n # Getting an initial state of all zeros\n initial_state = Cell.zero_state(batch_size, tf.float32)\n initialState = tf.identity(initial_state, name='initial_state')\n return Cell, initialState\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_init_cell(get_init_cell)",
"Word Embedding\nApply embedding to input_data using TensorFlow. Return the embedded sequence.",
"def get_embed(input_data, vocab_size, embed_dim):\n \"\"\"\n Create embedding for <input_data>.\n :param input_data: TF placeholder for text input.\n :param vocab_size: Number of words in vocabulary.\n :param embed_dim: Number of embedding dimensions\n :return: Embedded input.\n \"\"\"\n # TODO: Implement Function\n embedding = tf.Variable(tf.truncated_normal((vocab_size, embed_dim), stddev=0.1))\n embed = tf.nn.embedding_lookup(embedding, input_data)\n return embed\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_embed(get_embed)",
"Build RNN\nYou created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.\n- Build the RNN using the tf.nn.dynamic_rnn()\n - Apply the name \"final_state\" to the final state using tf.identity()\nReturn the outputs and final_state state in the following tuple (Outputs, FinalState)",
"def build_rnn(cell, inputs):\n \"\"\"\n Create a RNN using a RNN Cell\n :param cell: RNN Cell\n :param inputs: Input text data\n :return: Tuple (Outputs, Final State)\n \"\"\"\n # TODO: Implement Function\n Outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)\n FinalState = tf.identity(final_state, name='final_state')\n return Outputs, FinalState\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_build_rnn(build_rnn)",
"Build the Neural Network\nApply the functions you implemented above to:\n- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.\n- Build RNN using cell and your build_rnn(cell, inputs) function.\n- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.\nReturn the logits and final state in the following tuple (Logits, FinalState)",
"def build_nn(cell, rnn_size, input_data, vocab_size):\n \"\"\"\n Build part of the neural network\n :param cell: RNN cell\n :param rnn_size: Size of rnns\n :param input_data: Input data\n :param vocab_size: Vocabulary size\n :return: Tuple (Logits, FinalState)\n \"\"\"\n # TODO: Implement Function\n embed = get_embed(input_data, vocab_size, rnn_size)\n Outputs, FinalState = build_rnn(cell, embed)\n \n Logits = tf.contrib.layers.fully_connected(Outputs, vocab_size, activation_fn=None)\n return Logits, FinalState\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_build_nn(build_nn)",
"Batches\nImplement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:\n- The first element is a single batch of input with the shape [batch size, sequence length]\n- The second element is a single batch of targets with the shape [batch size, sequence length]\nIf you can't fill the last batch with enough data, drop the last batch.\nFor exmple, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], 2, 3) would return a Numpy array of the following:\n```\n[\n # First Batch\n [\n # Batch of Input\n [[ 1 2 3], [ 7 8 9]],\n # Batch of targets\n [[ 2 3 4], [ 8 9 10]]\n ],\n# Second Batch\n [\n # Batch of Input\n [[ 4 5 6], [10 11 12]],\n # Batch of targets\n [[ 5 6 7], [11 12 13]]\n ]\n]\n```",
"def get_batches(int_text, batch_size, seq_length):\n \"\"\"\n Return batches of input and target\n :param int_text: Text with the words replaced by their ids\n :param batch_size: The size of batch\n :param seq_length: The length of sequence\n :return: Batches as a Numpy array\n \"\"\"\n # TODO: Implement Function\n text_length = len(int_text)\n batch_length = batch_size*seq_length\n seq_num = text_length//batch_length \n \n xdata = np.array(int_text[: seq_num * batch_size * seq_length])\n ydata = np.array(int_text[1: seq_num * batch_size * seq_length + 1])\n x_batches = np.split(xdata.reshape(batch_size, -1), seq_num, 1)\n y_batches = np.split(ydata.reshape(batch_size, -1), seq_num, 1)\n Batches = np.array(list(zip(x_batches, y_batches)))\n return Batches\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_batches(get_batches)",
"Neural Network Training\nHyperparameters\nTune the following parameters:\n\nSet num_epochs to the number of epochs.\nSet batch_size to the batch size.\nSet rnn_size to the size of the RNNs.\nSet seq_length to the length of sequence.\nSet learning_rate to the learning rate.\nSet show_every_n_batches to the number of batches the neural network should print progress.",
"# Number of Epochs\nnum_epochs = 100\n# Batch Size\nbatch_size = 256\n# RNN Size\nrnn_size = 256\n# Sequence Length\nseq_length = 10\n# Learning Rate\nlearning_rate = 0.01\n# Show stats for every n number of batches\nshow_every_n_batches = 10\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\nsave_dir = './save'",
"Build the Graph\nBuild the graph using the neural network you implemented.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nfrom tensorflow.contrib import seq2seq\n\ntrain_graph = tf.Graph()\nwith train_graph.as_default():\n vocab_size = len(int_to_vocab)\n input_text, targets, lr = get_inputs()\n input_data_shape = tf.shape(input_text)\n cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)\n logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size)\n\n # Probabilities for generating words\n probs = tf.nn.softmax(logits, name='probs')\n\n # Loss function\n cost = seq2seq.sequence_loss(\n logits,\n targets,\n tf.ones([input_data_shape[0], input_data_shape[1]]))\n\n # Optimizer\n optimizer = tf.train.AdamOptimizer(lr)\n\n # Gradient Clipping\n gradients = optimizer.compute_gradients(cost)\n capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients]\n train_op = optimizer.apply_gradients(capped_gradients)",
"Train\nTrain the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forms to see if anyone is having the same problem.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nbatches = get_batches(int_text, batch_size, seq_length)\n\nwith tf.Session(graph=train_graph) as sess:\n sess.run(tf.global_variables_initializer())\n\n for epoch_i in range(num_epochs):\n state = sess.run(initial_state, {input_text: batches[0][0]})\n\n for batch_i, (x, y) in enumerate(batches):\n feed = {\n input_text: x,\n targets: y,\n initial_state: state,\n lr: learning_rate}\n train_loss, state, _ = sess.run([cost, final_state, train_op], feed)\n\n # Show every <show_every_n_batches> batches\n if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:\n print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(\n epoch_i,\n batch_i,\n len(batches),\n train_loss))\n\n # Save Model\n saver = tf.train.Saver()\n saver.save(sess, save_dir)\n print('Model Trained and Saved')",
"Save Parameters\nSave seq_length and save_dir for generating a new TV script.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n# Save parameters for checkpoint\nhelper.save_params((seq_length, save_dir))",
"Checkpoint",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport tensorflow as tf\nimport numpy as np\nimport helper\nimport problem_unittests as tests\n\n_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()\nseq_length, load_dir = helper.load_params()",
"Implement Generate Functions\nGet Tensors\nGet tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:\n- \"input:0\"\n- \"initial_state:0\"\n- \"final_state:0\"\n- \"probs:0\"\nReturn the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)",
"def get_tensors(loaded_graph):\n \"\"\"\n Get input, initial state, final state, and probabilities tensor from <loaded_graph>\n :param loaded_graph: TensorFlow graph loaded from file\n :return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)\n \"\"\"\n # TODO: Implement Function\n InputSensor = loaded_graph.get_tensor_by_name(\"input:0\")\n InitalStateTensor = loaded_graph.get_tensor_by_name(\"initial_state:0\")\n FinalStateTensor = loaded_graph.get_tensor_by_name(\"final_state:0\")\n ProbsTensor = loaded_graph.get_tensor_by_name(\"probs:0\")\n return (InputSensor, InitalStateTensor, FinalStateTensor, ProbsTensor)\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_tensors(get_tensors)",
"Choose Word\nImplement the pick_word() function to select the next word using probabilities.",
"def pick_word(probabilities, int_to_vocab):\n \"\"\"\n Pick the next word in the generated text\n :param probabilities: Probabilites of the next word\n :param int_to_vocab: Dictionary of word ids as the keys and words as the values\n :return: String of the predicted word\n \"\"\"\n # TODO: Implement Function\n #nextword = int_to_vocab[np.argmax(probabilities)]\n nextword = int_to_vocab[np.random.choice(np.arange(len(int_to_vocab)), p=probabilities)]\n return nextword\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_pick_word(pick_word)",
"Generate TV Script\nThis will generate the TV script for you. Set gen_length to the length of TV script you want to generate.",
"gen_length = 200\n# homer_simpson, moe_szyslak, or Barney_Gumble\nprime_word = 'moe_szyslak'\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\nloaded_graph = tf.Graph()\nwith tf.Session(graph=loaded_graph) as sess:\n # Load saved model\n loader = tf.train.import_meta_graph(load_dir + '.meta')\n loader.restore(sess, load_dir)\n\n # Get Tensors from loaded model\n input_text, initial_state, final_state, probs = get_tensors(loaded_graph)\n\n # Sentences generation setup\n gen_sentences = [prime_word + ':']\n prev_state = sess.run(initial_state, {input_text: np.array([[1]])})\n\n # Generate sentences\n for n in range(gen_length):\n # Dynamic Input\n dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]\n dyn_seq_length = len(dyn_input[0])\n\n # Get Prediction\n probabilities, prev_state = sess.run(\n [probs, final_state],\n {input_text: dyn_input, initial_state: prev_state})\n \n pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)\n\n gen_sentences.append(pred_word)\n \n # Remove tokens\n tv_script = ' '.join(gen_sentences)\n for key, token in token_dict.items():\n ending = ' ' if key in ['\\n', '(', '\"'] else ''\n tv_script = tv_script.replace(' ' + token.lower(), key)\n tv_script = tv_script.replace('\\n ', '\\n')\n tv_script = tv_script.replace('( ', '(')\n \n print(tv_script)",
"The TV Script is Nonsensical\nIt's ok if the TV script doesn't make any sense. We trained on less than a megabyte of text. In order to get good results, you'll have to use a smaller vocabulary or get more data. Luckly there's more data! As we mentioned in the begging of this project, this is a subset of another dataset. We didn't have you train on all the data, because that would take too long. However, you are free to train your neural network on all the data. After you complete the project, of course.\nSubmitting This Project\nWhen submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as \"dlnd_tv_script_generation.ipynb\" and save it as a HTML file under \"File\" -> \"Download as\". Include the \"helper.py\" and \"problem_unittests.py\" files in your submission."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ffmmjj/intro_to_data_science_workshop
|
01-Predicao de precos de casas.ipynb
|
apache-2.0
|
[
"Problemas de regressão envolvem a predição de um valor numérico contínuo a partir de um conjunto de característiacs.\nNeste exemplo, vamos construir um modelo para prever preços de casas a partir de características delas, como número de quartos e taxa de crimes na localização da casa.\nLeitura dos dados\nUsaremos o pacote pandas para ler os dados.\nPandas é uma biblioteca de código aberto que permite a leitura de dados a partir de diversos formatos para uma estrutura tabular que pode ser acessada e processada por scripts Python.",
"# Testando se a biblioteca está instalada corretamente e consegue ser importada\nimport pandas as pd",
"Neste exercício, usaremos o dataset [Boston Housinh]((http://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html) para prever preços de casas a partir de características delas e de sua vizinhança.",
"# Carregue o arquivo 'datasets/boston.csv' usando o pandas\n",
"Pandas permite a leitura de nossos dados a partir de diferentes formatos. Veja esse link para uma lista de formatos suportados e as respectivas funções usadas para lê-los.\nO tipo usado pelo pandas para representar essa tabela com nosso dataset carregado é chamada de DataFrame.",
"# Use o método head() para exibir as primeiras cinco linhas do dataset\n",
"O método head() imprime as primeiras cinco linhas por padrão. Ele pode receber opcionalmente um argumento que especifique quantas linhas devem ser exibidas, como boston.head(n=10).",
"# Use o método info() para exibir algumas informações sobre o dataset\n",
"O método info() exibe vários detalhes sobre o dataset, como a sua quantidade de linhas, quais features estão presentes, qual é o tipo de cada feature e se existem valores em branco.",
"# Use o método describe() apra exibir algumas estatísticas do dataset\n",
"O método describe() apenas mostra estatísticas de features numéricas. Se uma feature contém strings, por exemplo, ele não será capaz de mostrar informações sobre ela.\nVisualização de dados\nApós ler os dados em um DataFrame do pandas e ter obtido uma visão geral do dataset, podemos criar gráficos para visualizar o \"formato\" desses dados.\nUsaremos a biblitoeca Matplotlib para criar esses gráficos.\nExemplo\nSuponha que lhe seja dada a seguinte informação sobre quatro datasets:",
"datasets = pd.read_csv('./datasets/anscombe.csv')\n\nfor i in range(1, 5):\n dataset = datasets[datasets.Source == 1]\n print('Dataset {} (X, Y) mean: {}'.format(i, (dataset.x.mean(), dataset.y.mean())))\n\nprint('\\n')\nfor i in range(1, 5):\n dataset = datasets[datasets.Source == 1]\n print('Dataset {} (X, Y) std deviation: {}'.format(i, (dataset.x.std(), dataset.y.std())))\n\nprint('\\n')\nfor i in range(1, 5):\n dataset = datasets[datasets.Source == 1]\n print('Dataset {} correlation between X and Y: {}'.format(i, dataset.x.corr(dataset.y)))",
"Todos eles possuem aproximadamente a mesma média, desvio-padrão e correlação. Quão parecidos esses datasets devem ser?\n\nEsse conjunto de datasets são conhecidos como o Quarteto de Anscombe e eles costumam ser usados para ilustrar como confiar apenas em estatísticas como uma forma de caracterizar conjuntos de dados podem induzir a conclusões incorretas.",
"# Na primeira vez que o matplotlib é importado, pode ser exibido algum tipo\n# de alerta relacionado às fontes do sistema dependendo da sua configuração\nimport matplotlib.pyplot as plt\n# Essa linha permite que os gráficos gerados apareçam diretamente no notebook\n# ao invés de serem abertos em uma janela ou arquivo separado.\n%matplotlib inline\n\n# Extraia os preços das casas e a quantidade média de cômodos em duas variáveis separadas\nprices = \nrooms = \n\n# Crie um scatterplot dessas duas features usando plt.scatter()\n\n# Especifique labels para os eixos X e Y\n\n# Exiba o gráfico\n\n\n# Extraia os preços das casas e o índice de poluição da vizinhança em duas variáveis separadas\nprices = \nnox = \n\n# Crie um scatterplot dessas duas features usando plt.scatter()\n\n# Especifique labels para os eixos X e Y\n\n# Exiba o gráfico\n",
"Previsão de preços\nVimos nos gráficos anteriores que algumas features parecem ter uma relação linear com os preços das casas. Usaremos então a classe LinearRegression to Scikit-Learn para modelar essa relação e conseguir prever preços de casas a partir dessas features.\nO exemplo abaixo constrói um modelo LinearRegression usando o número médio de cômodos para prever o preço da casa:",
"# Primeiro, extraia os preditores (as features que serão usadas para\n# prever o preço das casas) e a saída (os preços das casas) em\n# variáveis separadas.\n\nx = # Extraia os valores da coluna 'rm' aqui\ny = # Extraia os valores da coluna 'medv' aqui\n\nprint('x: {}'.format(x[0:3, :]))\nprint('y: {}'.format(y[0:3]))",
"A chamada values.reshape(-1, 1) é necessária nesse caso porque o scikit-learn espera que os preditores estejam na forma de uma matriz - isto é, em um formato de array bidimensional. Como estamos usando apenas um preditor, o pandas acaba representando isso como um array unidimensional, então precisamos \"reformatá-lo\" em uma \"matriz de uma coluna só\". Esse passo não é necessário se estivermos usando mais de um preditor para treinar um modelo do scikit-learn, como será visto no próximo exemplo.\nAgora que temos o dataset isolado em preditores e saídas, eles precisam ser divididos em dois conjuntos diferentes: um conjunto de treinamento e um conjunto de teste.\nEsse passo é necessário caso você precise estimar o quão bem seu modelo treinado se comportará quando for usado para prever preços de novas casas: é necessário usar o conjunto de treinamento para treinar o modelo e então calcular a sua taxa de erros no conjunto de teste.",
"# Use a função train_test_split() do scikit-learn para dividir os dados em dois conjuntos.\n# http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html\nfrom sklearn.model_selection import train_test_split\n\nRANDOM_STATE = 4321\nxtr, xts, ytr, yts = # Chame a função train_test_split aqui",
"*Se tentarmos estimar a performance do modelo no mesmo conjunto de dados que foi usado para treiná-lo, obteremos uma estimativa enviesada já que o modelo foi treinado para minimizar sua taxa de erro exatamente nos exemplos presentes no conjunto de treinamento. Para estimar corretamente o quão bem o modelo se comportará na prática, ele precisa se testado em um conjunto de dados com o qual ele nunca teve contato.",
"from sklearn.linear_model import LinearRegression\n\nlr = # Treine um modelo LinearRegression aqui usando o conjunto de treinamento\n\nlr.predict(6)\n\n# Calcule os preços previstos pelo modelo treinado\npredicted_prices = lr.predict(x)\n\n# Crie um scatterplot dessas duas propriedades usando plt.scatter()\nplt.scatter(rooms, prices)\n# Crie um line plot exibindo os valores previstos em vermelho\nplt.plot(rooms, predicted_prices, 'r')\n# Crie labels para os eixos X e Y\nplt.xlabel('Number of rooms')\nplt.ylabel('House price')\n# Exiba o gráfico\nplt.show()",
"Podemos agora usar a função mean_squared_error do Scikit-Learn para calcular o erro total médio do modelo nos dados do conjunto de teste.",
"# Use o conjunto de testes para avaliar a performace do modelo\nfrom sklearn.metrics import mean_squared_error\n\n# Calcule o mean_squared_error do modelo aqui\n",
"O erro aqui provavelmente será bem alto. Usaremos então todas as features do dataset como preditores para tentar prever os preços das casas e vamos checar o quanto isso melhora a performance do modelo.",
"X = # Use o método drop() aqui para descartar a coluna 'medv' e manter as demais.\ny = # Extraia o preço das casas aqui a partir da coluna 'medv'.\n\nX.head()\n\nfrom sklearn.model_selection import train_test_split\n\nANOTHER_RANDOM_STATE=1234\n# Divida o dataset em treinamente e teste\nXtr, Xts, ytr, yts = \n\n# Use o conjunto de treinamento para treinar um modelo LinearRegression\nlr = \n\n# Calcule o mean_squared_error do modelo no conjunto de teste aqui\n",
"Quais melhorias você acha que ainda poderiam ser feitas para se obter melhores resultados?\nUma observação final sobre a divisão dos dados\nOs dados usados em machine learning geralmente são divididos em três partes:\n* Conjunto de treinamento: Os dados usados para se treinar o modelo;\n* Conjunto de validação (não discutido nesse workshop): Esse conjunto é usado para selecionar o melhor modelo dentre diferentes algoritmos ou hiperparâmetros. A ideia é que você não use esse conjunto para treinar seu modelo diretamente, mas sim para selecionar qual algoritmo, valores de hiperparâmetros, etc dão o melhor resultado durante a fase de treinamento;\n* Conjunto de teste: Este é um conjunto usado para estimar a performance do modelo após ter-se treinado e selecionado o modelo final que será usado na prática. Esse conjunto é idealmente usado apenas uma vez para avaliar o quão bem o modelo se comporta quando ele trabalha com dados com os quais ele nunca teve contato antes.\nSe quiser mais detalhes, pode checar esse texto(em inglês) para mais informações. Existem outras abordagens bastante populares como Validação cruzada para se fazer seleção e avaliação de modelos mas elas não serão cobertas nesse workshop."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
camillescott/barf
|
barf/Presentation.ipynb
|
mit
|
[
"barf: a drop-in bioinformatics file format validator\n\nCamille Scott\nLab for Data Intensive Biology\nMarch 18, 2016\n\n\nBackground\n\n\nHigh-throughput DNA sequencing creates HUGE volumes of data\nOften times this data is processed through complex pipelines\n\n\n\n\n\n\nMotivation\n\n\nMost bioinformatics software is developed by academic labs; and\nmost academic labs don't have the time or money for formal verification; and\nmost academic labs can't even afford software engineers; \nAND, most users of the software are barely computationally literate\n\n\nThe result?\n\nMotivation: The Story of \"L\"\n\n\"L is a new graduate student with a background in bench biology who has been diving deeper into bioinformatics as a part of her PhD research. “L” is assembling a genome, and her analysis pipeline includes the widely-used program Trimmomatic [1] to remove low-quality sequences. Some days later, when the pipeline has completed, she starts to look more closely at her results, and realizes that one of the sequence files output by Trimmomatic is truncated: the FASTQ formatted file ends part-way through a DNA sequence, and includes no quality score. This does not trigger a failure until a few steps down the pipeline, when another program mysteriously crashes. As it turns out, Trimmomatic occasionally fails due to some unpredictable error which cannot be reproduced, and instead of returning an error code, returns 0 and truncates its output. Had the program behaved more appropriately, “L” would have identified the problem early-on and saved significant time.\"\n\n\nProblem!\n\nThis story is common\nReporting bugs is time consuming, fixing them moreso\nMany bugs are unpredictabe or system-dependent\nBad data gives bad results: junk in, junk out\n\n\nbarf tries to solve this problem by allowing easy drop-in data validation for any bioinformatics program.\n\nAside: why the name?\nOur lab likes silly names, and we discussed this concept a while back. It goes along well with my mRNA annotator, dammit :)\nCase: FASTA Format\n\nThis barf prototype targets FASTA format\nWidely used, poorly defined, often broken\n\n\n\nThe expected format can be defined in BNF form as follows:\n<file> ::= <token> | <token> <file>\n<token> ::= <ignore> | <seq>\n<ignore> ::= <whitespace> | <comment> <newline>\n<seq> ::= <header> <molecule> <newline>\n<header> ::= \">\" <arbitrary text> <newline>\n<molecule> ::= <mol-line> | <mol-line> <molecule>\n<mol-line> ::= <nucl-line> | <prot-line>\n<nucl-line>::= \"^[ACGTURYKMSWBDHVNX-]+$\"\n<prot-line>::= \"^[ABCDEFGHIKLMNOPQRSTUVWYZX*-]+$\"\n\n\nin reality....\n\nIn reality, this format is often toyed with\nMany programs fail on the header, many mangle the sequence with line breaks, many parsers don't follow convention\nThe format itself is trivial to parse; the data is what needs to be checked\n\n\nApproach\n\n\nInstead of focusing on parsing, we focus on a limited model of the data\nThis is a crude type system based on regular expressions\nCan be arbitrary python code",
"import re\nimport string\n\nclass SequenceModel(object):\n\n def __init__(self, alphabet, flags=re.IGNORECASE):\n self.alphabet = alphabet\n self.pattern = re.compile(r'[{alphabet}]*$'.format(alphabet=alphabet),\n flags=flags)\n\n def __str__(self):\n return 'SequenceModel<{0}>'.format(self.alphabet)\n\n def checkValid(self, data):\n if self.pattern.match(data) is None:\n raise AssertionError('{0} failed to match \"{1}\"'.format(self, data))\n\ndnaModel = SequenceModel('ACGT')\ndnanModel = SequenceModel('ACGTN')\niupacModel = SequenceModel('ARNDCQEGHILKMFPSTWXVBZX')\n\nmodels = {'DNA': dnaModel,\n 'DNA+N': dnanModel,\n 'IUPAC': iupacModel}",
"Gives a simple framework for defined what the different fields in the data should look like\nThe parsing is done with third-party libraries: we assume the parsers make a best-effort to consume that data\nIn a way, we validate both the parser and the program\n\n\nWhat about \"L\"?\n\n\nOnly validating data elements is not enough: we need to validate the data is a whole\nIntroduce a collection: keep track of what inputs and outputs\nWe want $OUTPUT \\subseteq INPUT$ where $INPUT$ and $OUTPUT$ are sets of some record (in thise case, FASTA)\n\n\nBloom Filters\n\n\nThis data is BIG! Hundreds of millions of elements!\nExact counting not an option\nInstead, use a bloom filter to represent the set\n\n\nThis way, we can assert that each element in the output is an element of the input.\n\nImplementation\n\n\nThe invocation format is based on GNU time\nPass the target program and arguments to barf; pipe input to barf; output on standard out\nbarf manages the subprocess in the background: validates input, sends it to a FIFO for the program to consume",
"# a no-op\n!cat test.500.fasta | ./barf --sequence-model DNA cat > test.out.fa\n!head test.out.fa\n\n# a bad sequence\n!cat badfasta.fa | ./barf --sequence-model DNA cat > test.out.fa\n\n# we don't check biological meaning\n!cat badfasta.fa | ./barf --sequence-model IUPAC cat > test.out.fa\n\n# adding in a new sequence\n!cat test.500.fasta | ./barf --sequence-model DNA ./fraudster.py > /dev/null",
"Conclusions\n\n\nThis is a simple prototype of a way to approach the problem\nLab is hoping to expand this to a general tool for the community\nNeeds more formats, and better performance"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
tkphd/pycalphad
|
examples/ViscosityModel.ipynb
|
mit
|
[
"Custom Models in pycalphad: Viscosity\nViscosity Model Background\nWe are going to take a CALPHAD-based property model from the literature and use it to predict the viscosity of Al-Cu-Zr liquids.\nFor a binary alloy liquid under small undercooling, Gąsior suggested an entropy model of the form\n$$\\eta = (\\sum_i x_i \\eta_i ) (1 - 2\\frac{S_{ex}}{R})$$\nwhere $\\eta_i$ is the viscosity of the element $i$, $x_i$ is the mole fraction, $S_{ex}$ is the excess entropy, and $R$ is the gas constant.\nFor more details on this model, see \n\n\nM.E. Trybula, T. Gancarz, W. Gąsior, Density, surface tension and viscosity of liquid binary Al-Zn and ternary Al-Li-Zn alloys, Fluid Phase Equilibria 421 (2016) 39-48, doi:10.1016/j.fluid.2016.03.013.\n\n\nWładysław Gąsior, Viscosity modeling of binary alloys: Comparative studies, Calphad 44 (2014) 119-128, doi:10.1016/j.calphad.2013.10.007.\n\n\nChenyang Zhou, Cuiping Guo, Changrong Li, Zhenmin Du, Thermodynamic assessment of the phase equilibria and prediction of glass-forming ability of the Al–Cu–Zr system, Journal of Non-Crystalline Solids 461 (2017) 47-60, doi:10.1016/j.jnoncrysol.2016.09.031.",
"from pycalphad import Database",
"TDB Parameters\nWe can calculate the excess entropy of the liquid using the Al-Cu-Zr thermodynamic database from Zhou et al.\nWe add three new parameters to describe the viscosity (in Pa-s) of the pure elements Al, Cu, and Zr:\n$ Viscosity test parameters\n PARAMETER ETA(LIQUID,AL;0) 2.98150E+02 +0.000281*EXP(12300/(8.3145*T)); 6.00000E+03 \n N REF:0 !\n PARAMETER ETA(LIQUID,CU;0) 2.98150E+02 +0.000657*EXP(21500/(8.3145*T)); 6.00000E+03 \n N REF:0 !\n PARAMETER ETA(LIQUID,ZR;0) 2.98150E+02 +4.74E-3 - 4.97E-6*(T-2128) ; 6.00000E+03 \n N REF:0 !\nGreat! However, if we try to load the database now, we will get an error. This is because ETA parameters are not supported by default in pycalphad, so we need to tell pycalphad's TDB parser that \"ETA\" should be on the list of supported parameter types.",
"dbf = Database('alcuzr-viscosity.tdb')",
"Adding the ETA parameter to the TDB parser",
"import pycalphad.io.tdb_keywords\npycalphad.io.tdb_keywords.TDB_PARAM_TYPES.append('ETA')",
"Now the database will load:",
"dbf = Database('alcuzr-viscosity.tdb')",
"Writing the Custom Viscosity Model\nNow that we have our ETA parameters in the database, we need to write a Model class to tell pycalphad how to compute viscosity. All custom models are subclasses of the pycalphad Model class.\nWhen the ViscosityModel is constructed, the build_phase method is run and we need to construct the viscosity model after doing all the other initialization using a new method build_viscosity. The implementation of build_viscosity needs to do four things:\n1. Query the Database for all the ETA parameters\n2. Compute their weighted sum\n3. Compute the excess entropy of the liquid\n4. Plug all the values into the Gąsior equation and return the result\nSince the build_phase method sets the attribute viscosity to the ViscosityModel, we can access the property using viscosity as the output in pycalphad caluclations.",
"from tinydb import where\nimport sympy\nfrom pycalphad import Model, variables as v\n\nclass ViscosityModel(Model):\n def build_phase(self, dbe):\n super(ViscosityModel, self).build_phase(dbe)\n self.viscosity = self.build_viscosity(dbe)\n\n def build_viscosity(self, dbe):\n if self.phase_name != 'LIQUID':\n raise ValueError('Viscosity is only defined for LIQUID phase')\n phase = dbe.phases[self.phase_name]\n param_search = dbe.search\n # STEP 1\n eta_param_query = (\n (where('phase_name') == phase.name) & \\\n (where('parameter_type') == 'ETA') & \\\n (where('constituent_array').test(self._array_validity))\n )\n # STEP 2\n eta = self.redlich_kister_sum(phase, param_search, eta_param_query)\n # STEP 3\n excess_energy = self.GM - self.models['ref'] - self.models['idmix']\n #liquid_mod = Model(dbe, self.components, self.phase_name)\n ## we only want the excess contributions to the entropy\n #del liquid_mod.models['ref']\n #del liquid_mod.models['idmix']\n excess_entropy = -excess_energy.diff(v.T)\n ks = 2\n # STEP 4\n result = eta * (1 - ks * excess_entropy / v.R)\n self.eta = eta\n return result",
"Performing Calculations\nNow we can create an instance of ViscosityModel for the liquid phase using the Database object we created earlier. We can verify this model has a viscosity attribute containing a symbolic expression for the viscosity.",
"mod = ViscosityModel(dbf, ['CU', 'ZR'], 'LIQUID')\nprint(mod.viscosity)",
"Finally we calculate and plot the viscosity.",
"%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom pycalphad import calculate\n\nmod = ViscosityModel(dbf, ['CU', 'ZR'], 'LIQUID')\n\ntemp = 2100\n# NOTICE: we need to tell pycalphad about our model for this phase\nmodels = {'LIQUID': mod}\nres = calculate(dbf, ['CU', 'ZR'], 'LIQUID', P=101325, T=temp, model=models, output='viscosity') \n\nfig = plt.figure(figsize=(6,6))\nax = fig.gca()\nax.scatter(res.X.sel(component='ZR'), 1000 * res.viscosity.values)\nax.set_xlabel('X(ZR)')\nax.set_ylabel('Viscosity (mPa-s)')\nax.set_xlim((0,1))\nax.set_title('Viscosity at {}K'.format(temp));",
"We repeat the calculation for Al-Cu.",
"%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom pycalphad import calculate\n\ntemp = 1300\nmodels = {'LIQUID': ViscosityModel} # we can also use Model class\nres = calculate(dbf, ['CU', 'AL'], 'LIQUID', P=101325, T=temp, model=models, output='viscosity')\n\nfig = plt.figure(figsize=(6,6))\nax = fig.gca()\nax.scatter(res.X.sel(component='CU'), 1000 * res.viscosity.values)\nax.set_xlabel('X(CU)')\nax.set_ylabel('Viscosity (mPa-s)')\nax.set_xlim((0,1))\nax.set_title('Viscosity at {}K'.format(temp));"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
squishbug/DataScienceProgramming
|
04-Pandas-Data-Tables/HW04/CheckHomework04.ipynb
|
cc0-1.0
|
[
"Check Homework HW04\nUse this notebook to check your solutions. This notebook will not be graded.",
"import pandas as pd\nimport numpy as np",
"Now, import your solutions from hw4_answers.py. The following code looks a bit redundant. However, we do this to allow reloading the hw4_answers.py in case you made some changes. Normally, Python assumes that modules don't change and therefore does not try to import them again.",
"import hw4_answers\nreload(hw4_answers)\nfrom hw4_answers import *",
"Problem 1\nCreate a function load_employees that loads the employees table from\nthe file /home/data/AdventureWorks/Employees.xls and sets the index of the DataFrame to the EmployeeID. The function should return a table with the EmployeeID as the index and the remaining 25 columns.",
"employees_df = load_employees()\nprint \"Number of rows: %d\\nNumber of cols: %d\\n\" % (employees_df.shape[0], employees_df.shape[1])\nprint \"Head of index: %s\\n\" % (employees_df.index[:10])\nprint \"Record of employee with ID=999\\n\"\nprint employees_df.loc[999]",
"The output should be \n<pre>\nNumber of rows: 291\nNumber of cols: 25\n\nHead of index: Int64Index([259, 278, 204, 78, 255, 66, 270, 22, 161, 124], dtype='int64', name=u'EmployeeID')\n\nRecord of employee with ID=999\n\nManagerID 1\nTerritoryID NaN\nTitle NaN\nFirstName Chadwick\nMiddleName NaN\nLastName Smith\nSuffix NaN\nJobTitle BI Professor\nNationalIDNumber 123456789\nBirthDate 1967-07-05\nMaritalStatus M\nGender M\nHireDate 2003-12-31 23:59:59.997000\nSalariedFlag 0\nVacationHours 55\nSickLeaveHours 47\nPhoneNumber 555-887-9788\nPhoneNumberType Work\nEmailAddress chadwick.smith@rentpath.com\nAddressLine1 565 Peachtree Rd.\nAddressLine2 NaN\nCity Atlanta\nStateProvinceName Georgia\nPostalCode 30084\nCountryName United States\nName: 999, dtype: object\n</pre>\n\nProblem 2\nDefine a function getFullName which takes the employees table and a single employee ID as arguments, and returns a string with the full name of the employee in the format \"LAST, FIRST MIDDLE\".\nIf the given ID does not belong to any employee return the string \"UNKNOWN\" (in all caps)\nIf no middle name is given only return \"LAST, FIRST\". Make sure there are not trailing spaces!\nIf only the middle initial is given the return the full name in the format \"LAST, FIRST M.\" with the middle initial followed by a '.'.\nArguments:\n- df (DataFrame): Employee Table\n- empid (int): Employee ID\nReturns:\n- String with full name",
"for eid in [274, 999, 102]:\n print '%d, \"%s\"' %(eid, getFullName(employees_df, eid))",
"The output should be\n<pre>\n274, \"Jiang, Stephen Y.\"\n999, \"Smith, Chadwick\"\n102, \"Mu, Zheng W.\"\n</pre>\n\nProblem 3\nDefine a function isSales that takes the job title of an employee as string as an argument and return either True if the job title indicates this person works in sales, and False otherwise.\nArgument:\n- jobtitle (str)\nReturns:\n- True or False",
"for jt in ['Chief Data Scientist', 'Sales Manager', 'Vice President of Sales']:\n if isSales(jt):\n print \"The job title '%s' is part of the Sales Department.\" % jt\n else:\n print \"The job title '%s' belongs to a different department.\" % jt",
"The output should be\n<pre>\nThe job title 'Chief Data Scientist' belongs to a different department.\nThe job title 'Sales Manager' is part of the Sales Department.\nThe job title 'Vice President of Sales' is part of the Sales Department.\n</pre>\n\nProblem 4\nDefine a function filterSales with the employee tables as an argument, that returns a new table of the same schema (i.e. columns and index) containing only row of sales people. You should use the isSales function from the previous problem.\nArguments:\n- employees (DataFrame)\nReturns:\n- DataFrame with only people form the Sales Department",
"sales_df = filterSales(employees_df)\nprint \"Number of rows: %d\\nNumber of cols: %d\\n\" % (sales_df.shape[0], sales_df.shape[1])\nprint \"Head of index: %s\\n\" % (sales_df.index[:10])\nprint \"Record of sales employee with ID=280\\n\"\nprint sales_df.loc[280]",
"The output should be\n<pre>\nNumber of rows: 18\nNumber of cols: 25\n\nHead of index: Int64Index([278, 283, 274, 276, 286, 284, 287, 281, 280, 285], dtype='int64', name=u'EmployeeID')\n\nRecord of sales employee with ID=280\n\nManagerID 274\nTerritoryID 1\nTitle NaN\nFirstName Pamela\nMiddleName O\nLastName Ansman-Wolfe\nSuffix NaN\nJobTitle Sales Representative\nNationalIDNumber 61161660\nBirthDate 1969-01-06\nMaritalStatus S\nGender F\nHireDate 2005-10-01 00:00:00\nSalariedFlag 1\nVacationHours 22\nSickLeaveHours 31\nPhoneNumber 340-555-0193\nPhoneNumberType Cell\nEmailAddress pamela0@yahoo.com\nAddressLine1 636 Vine Hill Way\nAddressLine2 NaN\nCity Portland\nStateProvinceName Oregon\nPostalCode 97205\nCountryName United States\nName: 280, dtype: object\n</pre>\n\nProblem 5\nDefine a function getEmailList with that returns a Series of strings of all email addresses of employees in this state or province. The email addresses should be separated by a given character, usually a comma ',' or semicolon ';'.\nArguments:\n- employees (DataFrame)\n- delimiter (str)\nReturns:\n- Series of email addresses, concatenated by the given delimiter. The Series is indexed by the state or province.",
"emails = getEmailListByState(sales_df, \", \")\nfor state in sorted(emails.index):\n print \"%15s: %s\" % (state, emails[state])",
"The output should be\n<pre>\n Alberta: garrett1@mapleleafmail.ca\n California: shu0@adventure-works.com\n England: jae0@aol.co.uk\n Gironde: ranjit0@adventure-works.com\n Hamburg: rachel0@adventure-works.com\n Massachusetts: tete0@adventure-works.com\n Michigan: michael9@adventure-works.com\n Minnesota: jillian0@adventure-works.com\n Ontario: josé1@safe-mail.net\n Oregon: pamela0@yahoo.com\n Tennessee: tsvi0@adventure-works.com\n Utah: linda3@adventure-works.com\n Victoria: lynn0@adventure-works.com\n Washington: david8@adventure-works.com, stephen0@adventure-works.com, amy0@yahoo.com, syed0@yahoo.com, brian3@aol.com\n<pre>\n\n## Problem 6 (Bonus)\nDefine a function `managementCounts` which produces a Series of how many employees report to a manager. The Series is indexed by the `ManagerID`, the count should be performed on the `EmployeeID` because this is the only field that is guaranteed to be unique. The resulting Series should be order by the number of employees in **descending order**.\n\nArguments:\n- employees (DataFrame)\n\nReturns:\n- Series of counts (int), indexed by `ManagerID`",
"print managementCounts(employees_df)",
"The output should be\n<pre>\nManagerID\n26.0 178\n25.0 30\n250.0 17\n274.0 10\n263.0 9\n249.0 9\n16.0 8\n1.0 8\n227.0 6\n235.0 5\n287.0 3\n273.0 3\n234.0 3\n285.0 1\nName: EmployeeID, dtype: int64\n</pre>"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |